diff --git a/services/acm/src/main/resources/codegen-resources/service-2.json b/services/acm/src/main/resources/codegen-resources/service-2.json index 871a16c0ffb5..ca574ac0e60b 100644 --- a/services/acm/src/main/resources/codegen-resources/service-2.json +++ b/services/acm/src/main/resources/codegen-resources/service-2.json @@ -39,7 +39,7 @@ {"shape":"ResourceInUseException"}, {"shape":"InvalidArnException"} ], - "documentation":"

Deletes an ACM Certificate and its associated private key. If this action succeeds, the certificate no longer appears in the list of ACM Certificates that can be displayed by calling the ListCertificates action or be retrieved by calling the GetCertificate action. The certificate will not be available for use by other AWS services.

You cannot delete an ACM Certificate that is being used by another AWS service. To delete a certificate that is in use, the certificate association must first be removed.

" + "documentation":"

Deletes an ACM Certificate and its associated private key. If this action succeeds, the certificate no longer appears in the list of ACM Certificates that can be displayed by calling the ListCertificates action or be retrieved by calling the GetCertificate action. The certificate will not be available for use by other AWS services.

You cannot delete an ACM Certificate that is being used by another AWS service. To delete a certificate that is in use, the certificate association must first be removed.

" }, "DescribeCertificate":{ "name":"DescribeCertificate", @@ -68,7 +68,7 @@ {"shape":"RequestInProgressException"}, {"shape":"InvalidArnException"} ], - "documentation":"

Retrieves an ACM Certificate and certificate chain for the certificate specified by an ARN. The chain is an ordered list of certificates that contains the root certificate, intermediate certificates of subordinate CAs, and the ACM Certificate. The certificate and certificate chain are base64 encoded. If you want to decode the certificate chain to see the individual certificate fields, you can use OpenSSL.

Currently, ACM Certificates can be used only with Elastic Load Balancing and Amazon CloudFront.

" + "documentation":"

Retrieves an ACM Certificate and certificate chain for the certificate specified by an ARN. The chain is an ordered list of certificates that contains the ACM Certificate, intermediate certificates of subordinate CAs, and the root certificate in that order. The certificate and certificate chain are base64 encoded. If you want to decode the certificate chain to see the individual certificate fields, you can use OpenSSL.
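As an illustration only, the retrieval and decoding steps above can be sketched with the AWS SDK for Java 2.x AcmClient generated from this model; the certificate ARN below is a hypothetical placeholder.

```java
// Hedged sketch: fetch a certificate and its chain; the ARN is a placeholder.
import software.amazon.awssdk.services.acm.AcmClient;
import software.amazon.awssdk.services.acm.model.GetCertificateRequest;
import software.amazon.awssdk.services.acm.model.GetCertificateResponse;

public class GetCertificateSketch {
    public static void main(String[] args) {
        try (AcmClient acm = AcmClient.create()) {
            GetCertificateResponse response = acm.getCertificate(GetCertificateRequest.builder()
                    .certificateArn("arn:aws:acm:us-east-1:123456789012:certificate/example-id")
                    .build());
            // Both fields hold base64-encoded PEM text; feed them to OpenSSL
            // (for example, openssl x509 -noout -text) to inspect individual fields.
            System.out.println(response.certificate());
            System.out.println(response.certificateChain());
        }
    }
}
```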

" }, "ImportCertificate":{ "name":"ImportCertificate", @@ -82,7 +82,7 @@ {"shape":"ResourceNotFoundException"}, {"shape":"LimitExceededException"} ], - "documentation":"

Imports an SSL/TLS certificate into AWS Certificate Manager (ACM) to use with ACM's integrated AWS services.

ACM does not provide managed renewal for certificates that you import.

For more information about importing certificates into ACM, including the differences between certificates that you import and those that ACM provides, see Importing Certificates in the AWS Certificate Manager User Guide.

To import a certificate, you must provide the certificate and the matching private key. When the certificate is not self-signed, you must also provide a certificate chain. You can omit the certificate chain when importing a self-signed certificate.

The certificate, private key, and certificate chain must be PEM-encoded. For more information about converting these items to PEM format, see Importing Certificates Troubleshooting in the AWS Certificate Manager User Guide.

To import a new certificate, omit the CertificateArn field. Include this field only when you want to replace a previously imported certificate.

This operation returns the Amazon Resource Name (ARN) of the imported certificate.

" + "documentation":"

Imports an SSL/TLS certificate into AWS Certificate Manager (ACM) to use with ACM's integrated AWS services.

ACM does not provide managed renewal for certificates that you import.

For more information about importing certificates into ACM, including the differences between certificates that you import and those that ACM provides, see Importing Certificates in the AWS Certificate Manager User Guide.

To import a certificate, you must provide the certificate and the matching private key. When the certificate is not self-signed, you must also provide a certificate chain. You can omit the certificate chain when importing a self-signed certificate.

The certificate, private key, and certificate chain must be PEM-encoded. For more information about converting these items to PEM format, see Importing Certificates Troubleshooting in the AWS Certificate Manager User Guide.

To import a new certificate, omit the CertificateArn field. Include this field only when you want to replace a previously imported certificate.

When you import a certificate by using the CLI or one of the SDKs, you must specify the certificate, chain, and private key parameters as file names preceded by file://. For example, you can specify a certificate saved in the C:\\temp folder as file://C:\\temp\\certificate_to_import.pem. If you are making an HTTP or HTTPS Query request, include these parameters as BLOBs.

This operation returns the Amazon Resource Name (ARN) of the imported certificate.
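As a hedged illustration of the SDK path described above, a minimal sketch using the AWS SDK for Java 2.x AcmClient might look as follows; the PEM file names are hypothetical, and the chain can be omitted for a self-signed certificate.

```java
// Hedged sketch of ImportCertificate; file names are placeholders.
import java.nio.file.Files;
import java.nio.file.Paths;

import software.amazon.awssdk.core.SdkBytes;
import software.amazon.awssdk.services.acm.AcmClient;
import software.amazon.awssdk.services.acm.model.ImportCertificateRequest;
import software.amazon.awssdk.services.acm.model.ImportCertificateResponse;

public class ImportCertificateSketch {
    public static void main(String[] args) throws Exception {
        try (AcmClient acm = AcmClient.create()) {
            ImportCertificateRequest request = ImportCertificateRequest.builder()
                    .certificate(SdkBytes.fromByteArray(Files.readAllBytes(Paths.get("certificate_to_import.pem"))))
                    .privateKey(SdkBytes.fromByteArray(Files.readAllBytes(Paths.get("private_key.pem"))))
                    .certificateChain(SdkBytes.fromByteArray(Files.readAllBytes(Paths.get("certificate_chain.pem"))))
                    // certificateArn is omitted for a new import; set it only to replace a
                    // previously imported certificate.
                    .build();
            ImportCertificateResponse response = acm.importCertificate(request);
            System.out.println("Imported certificate ARN: " + response.certificateArn());
        }
    }
}
```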

" }, "ListCertificates":{ "name":"ListCertificates", @@ -134,7 +134,7 @@ {"shape":"LimitExceededException"}, {"shape":"InvalidDomainValidationOptionsException"} ], - "documentation":"

Requests an ACM Certificate for use with other AWS services. To request an ACM Certificate, you must specify the fully qualified domain name (FQDN) for your site. You can also specify additional FQDNs if users can reach your site by using other names. For each domain name you specify, email is sent to the domain owner to request approval to issue the certificate. After receiving approval from the domain owner, the ACM Certificate is issued. For more information, see the AWS Certificate Manager User Guide.

" + "documentation":"

Requests an ACM Certificate for use with other AWS services. To request an ACM Certificate, you must specify the fully qualified domain name (FQDN) for your site in the DomainName parameter. You can also specify additional FQDNs in the SubjectAlternativeNames parameter if users can reach your site by using other names.

For each domain name you specify, email is sent to the domain owner to request approval to issue the certificate. Email is sent to three registered contact addresses in the WHOIS database and to five common system administration addresses formed from the DomainName you enter or the optional ValidationDomain parameter. For more information, see Validate Domain Ownership.

After receiving approval from the domain owner, the ACM Certificate is issued. For more information, see the AWS Certificate Manager User Guide.
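As a hedged illustration of the request flow above, here is a minimal sketch with the AWS SDK for Java 2.x AcmClient; the domain names and validation domain are hypothetical placeholders.

```java
// Hedged sketch of RequestCertificate; domain names are placeholders.
import software.amazon.awssdk.services.acm.AcmClient;
import software.amazon.awssdk.services.acm.model.DomainValidationOption;
import software.amazon.awssdk.services.acm.model.RequestCertificateRequest;

public class RequestCertificateSketch {
    public static void main(String[] args) {
        try (AcmClient acm = AcmClient.create()) {
            RequestCertificateRequest request = RequestCertificateRequest.builder()
                    .domainName("www.example.com")
                    .subjectAlternativeNames("example.com")
                    // Send the validation email to the root domain's contacts rather than the FQDN's.
                    .domainValidationOptions(DomainValidationOption.builder()
                            .domainName("www.example.com")
                            .validationDomain("example.com")
                            .build())
                    .build();
            System.out.println("Requested certificate ARN: " + acm.requestCertificate(request).certificateArn());
        }
    }
}
```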

" }, "ResendValidationEmail":{ "name":"ResendValidationEmail", diff --git a/services/api-gateway/src/main/resources/codegen-resources/service-2.json b/services/api-gateway/src/main/resources/codegen-resources/service-2.json index 7f87fcefb766..021401debd39 100644 --- a/services/api-gateway/src/main/resources/codegen-resources/service-2.json +++ b/services/api-gateway/src/main/resources/codegen-resources/service-2.json @@ -309,6 +309,8 @@ "errors":[ {"shape":"UnauthorizedException"}, {"shape":"NotFoundException"}, + {"shape":"ConflictException"}, + {"shape":"BadRequestException"}, {"shape":"TooManyRequestsException"} ], "documentation":"

Deletes the BasePathMapping resource.

" @@ -392,6 +394,23 @@ ], "documentation":"

Deletes the DomainName resource.

" }, + "DeleteGatewayResponse":{ + "name":"DeleteGatewayResponse", + "http":{ + "method":"DELETE", + "requestUri":"/restapis/{restapi_id}/gatewayresponses/{response_type}", + "responseCode":202 + }, + "input":{"shape":"DeleteGatewayResponseRequest"}, + "errors":[ + {"shape":"UnauthorizedException"}, + {"shape":"NotFoundException"}, + {"shape":"TooManyRequestsException"}, + {"shape":"BadRequestException"}, + {"shape":"ConflictException"} + ], + "documentation":"

Clears any customization of a GatewayResponse of a specified response type on the given RestApi and resets it with the default settings.

" + }, "DeleteIntegration":{ "name":"DeleteIntegration", "http":{ @@ -892,10 +911,42 @@ {"shape":"UnauthorizedException"}, {"shape":"NotFoundException"}, {"shape":"BadRequestException"}, + {"shape":"ConflictException"}, {"shape":"TooManyRequestsException"} ], "documentation":"

Exports a deployed version of a RestApi in a specified format.

" }, + "GetGatewayResponse":{ + "name":"GetGatewayResponse", + "http":{ + "method":"GET", + "requestUri":"/restapis/{restapi_id}/gatewayresponses/{response_type}" + }, + "input":{"shape":"GetGatewayResponseRequest"}, + "output":{"shape":"GatewayResponse"}, + "errors":[ + {"shape":"UnauthorizedException"}, + {"shape":"NotFoundException"}, + {"shape":"TooManyRequestsException"} + ], + "documentation":"

Gets a GatewayResponse of a specified response type on the given RestApi.

" + }, + "GetGatewayResponses":{ + "name":"GetGatewayResponses", + "http":{ + "method":"GET", + "requestUri":"/restapis/{restapi_id}/gatewayresponses" + }, + "input":{"shape":"GetGatewayResponsesRequest"}, + "output":{"shape":"GatewayResponses"}, + "errors":[ + {"shape":"BadRequestException"}, + {"shape":"UnauthorizedException"}, + {"shape":"NotFoundException"}, + {"shape":"TooManyRequestsException"} + ], + "documentation":"

Gets the GatewayResponses collection on the given RestApi. If an API developer has not added any definitions for gateway responses, the result will be the Amazon API Gateway-generated default GatewayResponses collection for the supported response types.

" + }, "GetIntegration":{ "name":"GetIntegration", "http":{ @@ -1108,6 +1159,7 @@ {"shape":"UnauthorizedException"}, {"shape":"NotFoundException"}, {"shape":"BadRequestException"}, + {"shape":"ConflictException"}, {"shape":"TooManyRequestsException"} ], "documentation":"

Generates a client SDK for a RestApi and Stage.

" @@ -1304,6 +1356,24 @@ ], "documentation":"

A feature of the Amazon API Gateway control service for creating a new API from an external API definition file.

" }, + "PutGatewayResponse":{ + "name":"PutGatewayResponse", + "http":{ + "method":"PUT", + "requestUri":"/restapis/{restapi_id}/gatewayresponses/{response_type}", + "responseCode":201 + }, + "input":{"shape":"PutGatewayResponseRequest"}, + "output":{"shape":"GatewayResponse"}, + "errors":[ + {"shape":"BadRequestException"}, + {"shape":"UnauthorizedException"}, + {"shape":"NotFoundException"}, + {"shape":"LimitExceededException"}, + {"shape":"TooManyRequestsException"} + ], + "documentation":"

Creates a customization of a GatewayResponse of a specified response type and status code on the given RestApi.

" + }, "PutIntegration":{ "name":"PutIntegration", "http":{ @@ -1320,7 +1390,7 @@ {"shape":"NotFoundException"}, {"shape":"TooManyRequestsException"} ], - "documentation":"

Represents a put integration.

" + "documentation":"

Sets up a method's integration.

" }, "PutIntegrationResponse":{ "name":"PutIntegrationResponse", @@ -1578,6 +1648,22 @@ ], "documentation":"

Changes information about the DomainName resource.

" }, + "UpdateGatewayResponse":{ + "name":"UpdateGatewayResponse", + "http":{ + "method":"PATCH", + "requestUri":"/restapis/{restapi_id}/gatewayresponses/{response_type}" + }, + "input":{"shape":"UpdateGatewayResponseRequest"}, + "output":{"shape":"GatewayResponse"}, + "errors":[ + {"shape":"UnauthorizedException"}, + {"shape":"NotFoundException"}, + {"shape":"BadRequestException"}, + {"shape":"TooManyRequestsException"} + ], + "documentation":"

Updates a GatewayResponse of a specified response type on the given RestApi.

" + }, "UpdateIntegration":{ "name":"UpdateIntegration", "http":{ @@ -1746,7 +1832,7 @@ {"shape":"BadRequestException"}, {"shape":"NotFoundException"} ], - "documentation":"

Grants a temporary extension to the reamining quota of a usage plan associated with a specified API key.

" + "documentation":"

Grants a temporary extension to the remaining quota of a usage plan associated with a specified API key.

" }, "UpdateUsagePlan":{ "name":"UpdateUsagePlan", @@ -1855,7 +1941,7 @@ "position":{"shape":"String"}, "items":{ "shape":"ListOfApiKey", - "documentation":"

The current page of any ApiKey resources in the collection of ApiKey resources.

", + "documentation":"

The current page of elements from this collection.

", "locationName":"item" } }, @@ -1892,44 +1978,45 @@ }, "type":{ "shape":"AuthorizerType", - "documentation":"

[Required] The type of the authorizer. Currently, the valid type is TOKEN for a Lambda function or COGNITO_USER_POOLS for an Amazon Cognito user pool.

" + "documentation":"

[Required] The authorizer type. Valid values are TOKEN for a Lambda function using a single authorization token submitted in a custom header, REQUEST for a Lambda function using incoming request parameters, and COGNITO_USER_POOLS for using an Amazon Cognito user pool.

" }, "providerARNs":{ "shape":"ListOfARNs", - "documentation":"

A list of the provider ARNs of the authorizer. For an TOKEN authorizer, this is not defined. For authorizers of the COGNITO_USER_POOLS type, each element corresponds to a user pool ARN of this format: arn:aws:cognito-idp:{region}:{account_id}:userpool/{user_pool_id}.

" + "documentation":"

A list of the Amazon Cognito user pool ARNs for the COGNITO_USER_POOLS authorizer. Each element is of this format: arn:aws:cognito-idp:{region}:{account_id}:userpool/{user_pool_id}. For a TOKEN or REQUEST authorizer, this is not defined.

" }, "authType":{ "shape":"String", - "documentation":"

Optional customer-defined field, used in Swagger imports/exports. Has no functional impact.

" + "documentation":"

Optional customer-defined field, used in Swagger imports and exports without functional impact.

" }, "authorizerUri":{ "shape":"String", - "documentation":"

[Required] Specifies the authorizer's Uniform Resource Identifier (URI). For TOKEN authorizers, this must be a well-formed Lambda function URI, for example, arn:aws:apigateway:us-west-2:lambda:path/2015-03-31/functions/arn:aws:lambda:us-west-2:{account_id}:function:{lambda_function_name}/invocations. In general, the URI has this form arn:aws:apigateway:{region}:lambda:path/{service_api}, where {region} is the same as the region hosting the Lambda function, path indicates that the remaining substring in the URI should be treated as the path to the resource, including the initial /. For Lambda functions, this is usually of the form /2015-03-31/functions/[FunctionARN]/invocations.

" + "documentation":"

Specifies the authorizer's Uniform Resource Identifier (URI). For TOKEN or REQUEST authorizers, this must be a well-formed Lambda function URI, for example, arn:aws:apigateway:us-west-2:lambda:path/2015-03-31/functions/arn:aws:lambda:us-west-2:{account_id}:function:{lambda_function_name}/invocations. In general, the URI has this form arn:aws:apigateway:{region}:lambda:path/{service_api}, where {region} is the same as the region hosting the Lambda function, path indicates that the remaining substring in the URI should be treated as the path to the resource, including the initial /. For Lambda functions, this is usually of the form /2015-03-31/functions/[FunctionARN]/invocations.

" }, "authorizerCredentials":{ "shape":"String", - "documentation":"

Specifies the credentials required for the authorizer, if any. Two options are available. To specify an IAM role for Amazon API Gateway to assume, use the role's Amazon Resource Name (ARN). To use resource-based permissions on the Lambda function, specify null.

" + "documentation":"

Specifies the required credentials as an IAM role for Amazon API Gateway to invoke the authorizer. To specify an IAM role for Amazon API Gateway to assume, use the role's Amazon Resource Name (ARN). To use resource-based permissions on the Lambda function, specify null.

" }, "identitySource":{ "shape":"String", - "documentation":"

[Required] The source of the identity in an incoming request. For a TOKEN authorizer, this value is a mapping expression with the same syntax as integration parameter mappings. The only valid source for tokens is 'header', so the expression should match 'method.request.header.[headerName]'. The value of the header '[headerName]' will be interpreted as the incoming token. For COGNITO_USER_POOLS authorizers, this property is used.

" + "documentation":"

The identity source for which authorization is requested.

" }, "identityValidationExpression":{ "shape":"String", - "documentation":"

A validation expression for the incoming identity. For TOKEN authorizers, this value should be a regular expression. The incoming token from the client is matched against this expression, and will proceed if the token matches. If the token doesn't match, the client receives a 401 Unauthorized response.

" + "documentation":"

A validation expression for the incoming identity token. For TOKEN authorizers, this value is a regular expression. Amazon API Gateway will match the incoming token from the client against the specified regular expression. It will invoke the authorizer's Lambda function when there is a match. Otherwise, it will return a 401 Unauthorized response without calling the Lambda function. The validation expression does not apply to the REQUEST authorizer.

" }, "authorizerResultTtlInSeconds":{ "shape":"NullableInteger", - "documentation":"

The TTL in seconds of cached authorizer results. If greater than 0, API Gateway will cache authorizer responses. If this field is not set, the default value is 300. The maximum value is 3600, or 1 hour.

" + "documentation":"

The TTL in seconds of cached authorizer results. If it equals 0, authorization caching is disabled. If it is greater than 0, API Gateway will cache authorizer responses. If this field is not set, the default value is 300. The maximum value is 3600, or 1 hour.

" } }, "documentation":"

Represents an authorization layer for methods. If enabled on a method, API Gateway will activate the authorizer when a client calls the method.

Enable custom authorization
" }, "AuthorizerType":{ "type":"string", - "documentation":"

The authorizer type. the current value is TOKEN for a Lambda function or COGNITO_USER_POOLS for an Amazon Cognito Your User Pool.

", + "documentation":"

[Required] The authorizer type. Valid values are TOKEN for a Lambda function using a single authorization token submitted in a custom header, REQUEST for a Lambda function using incoming request parameters, and COGNITO_USER_POOLS for using an Amazon Cognito user pool.

", "enum":[ "TOKEN", + "REQUEST", "COGNITO_USER_POOLS" ] }, @@ -1939,7 +2026,7 @@ "position":{"shape":"String"}, "items":{ "shape":"ListOfAuthorizer", - "documentation":"

Gets the current list of Authorizer resources in the collection.

", + "documentation":"

The current page of elements from this collection.

", "locationName":"item" } }, @@ -1950,6 +2037,7 @@ "members":{ "message":{"shape":"String"} }, + "documentation":"

The submitted request is not valid, for example, the input is incomplete or incorrect. See the accompanying error message for details.

", "error":{"httpStatusCode":400}, "exception":true }, @@ -1962,11 +2050,11 @@ }, "restApiId":{ "shape":"String", - "documentation":"

The name of the API.

" + "documentation":"

The string identifier of the associated RestApi.

" }, "stage":{ "shape":"String", - "documentation":"

The name of the API's stage.

" + "documentation":"

The name of the associated stage.

" } }, "documentation":"

Represents the base path that callers of the API must provide as part of the URL after the domain name.

A custom domain name plus a BasePathMapping specification identifies a deployed RestApi in a given stage of the owner Account.
Use Custom Domain Names
" @@ -1977,7 +2065,7 @@ "position":{"shape":"String"}, "items":{ "shape":"ListOfBasePathMapping", - "documentation":"

The current page of any BasePathMapping resources in the collection of base path mapping resources.

", + "documentation":"

The current page of elements from this collection.

", "locationName":"item" } }, @@ -2034,7 +2122,7 @@ "documentation":"

The timestamp when the client certificate will expire.

" } }, - "documentation":"

Represents a client certificate used to configure client-side SSL authentication while sending requests to the integration endpoint.

Client certificates are used authenticate an API by the back-end server. To authenticate an API client (or user), use a custom Authorizer.
Use Client-Side Certificate
" + "documentation":"

Represents a client certificate used to configure client-side SSL authentication while sending requests to the integration endpoint.

Client certificates are used to authenticate an API by the backend server. To authenticate an API client (or user), use IAM roles and policies, a custom Authorizer or an Amazon Cognito user pool.
Use Client-Side Certificate
" }, "ClientCertificates":{ "type":"structure", @@ -2042,7 +2130,7 @@ "position":{"shape":"String"}, "items":{ "shape":"ListOfClientCertificate", - "documentation":"

The current page of any ClientCertificate resources in the collection of ClientCertificate resources.

", + "documentation":"

The current page of elements from this collection.

", "locationName":"item" } }, @@ -2053,6 +2141,7 @@ "members":{ "message":{"shape":"String"} }, + "documentation":"

The request configuration has conflicts. For details, see the accompanying error message.

", "error":{"httpStatusCode":409}, "exception":true }, @@ -2102,13 +2191,12 @@ "required":[ "restApiId", "name", - "type", - "identitySource" + "type" ], "members":{ "restApiId":{ "shape":"String", - "documentation":"

The RestApi identifier under which the Authorizer will be created.

", + "documentation":"

The string identifier of the associated RestApi.

", "location":"uri", "locationName":"restapi_id" }, @@ -2118,35 +2206,35 @@ }, "type":{ "shape":"AuthorizerType", - "documentation":"

[Required] The type of the authorizer.

" + "documentation":"

[Required] The authorizer type. Valid values are TOKEN for a Lambda function using a single authorization token submitted in a custom header, REQUEST for a Lambda function using incoming request parameters, and COGNITO_USER_POOLS for using an Amazon Cognito user pool.

" }, "providerARNs":{ "shape":"ListOfARNs", - "documentation":"

A list of the Cognito Your User Pool authorizer's provider ARNs.

" + "documentation":"

A list of the Amazon Cognito user pool ARNs for the COGNITO_USER_POOLS authorizer. Each element is of this format: arn:aws:cognito-idp:{region}:{account_id}:userpool/{user_pool_id}. For a TOKEN or REQUEST authorizer, this is not defined.

" }, "authType":{ "shape":"String", - "documentation":"

Optional customer-defined field, used in Swagger imports/exports. Has no functional impact.

" + "documentation":"

Optional customer-defined field, used in Swagger imports and exports without functional impact.

" }, "authorizerUri":{ "shape":"String", - "documentation":"

[Required] Specifies the authorizer's Uniform Resource Identifier (URI).

" + "documentation":"

Specifies the authorizer's Uniform Resource Identifier (URI). For TOKEN or REQUEST authorizers, this must be a well-formed Lambda function URI, for example, arn:aws:apigateway:us-west-2:lambda:path/2015-03-31/functions/arn:aws:lambda:us-west-2:{account_id}:function:{lambda_function_name}/invocations. In general, the URI has this form arn:aws:apigateway:{region}:lambda:path/{service_api}, where {region} is the same as the region hosting the Lambda function, path indicates that the remaining substring in the URI should be treated as the path to the resource, including the initial /. For Lambda functions, this is usually of the form /2015-03-31/functions/[FunctionARN]/invocations.

" }, "authorizerCredentials":{ "shape":"String", - "documentation":"

Specifies the credentials required for the authorizer, if any.

" + "documentation":"

Specifies the required credentials as an IAM role for Amazon API Gateway to invoke the authorizer. To specify an IAM role for Amazon API Gateway to assume, use the role's Amazon Resource Name (ARN). To use resource-based permissions on the Lambda function, specify null.

" }, "identitySource":{ "shape":"String", - "documentation":"

[Required] The source of the identity in an incoming request.

" + "documentation":"

The identity source for which authorization is requested.

" }, "identityValidationExpression":{ "shape":"String", - "documentation":"

A validation expression for the incoming identity.

" + "documentation":"

A validation expression for the incoming identity token. For TOKEN authorizers, this value is a regular expression. Amazon API Gateway will match the incoming token from the client against the specified regular expression. It will invoke the authorizer's Lambda function when there is a match. Otherwise, it will return a 401 Unauthorized response without calling the Lambda function. The validation expression does not apply to the REQUEST authorizer.

" }, "authorizerResultTtlInSeconds":{ "shape":"NullableInteger", - "documentation":"

The TTL of cached authorizer results.

" + "documentation":"

The TTL in seconds of cached authorizer results. If it equals 0, authorization caching is disabled. If it is greater than 0, API Gateway will cache authorizer responses. If this field is not set, the default value is 300. The maximum value is 3600, or 1 hour.

" } }, "documentation":"

Request to add a new Authorizer to an existing RestApi resource.
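As a hedged illustration of this request, here is a minimal sketch of creating a TOKEN authorizer with the AWS SDK for Java 2.x ApiGatewayClient; the REST API id, Lambda function ARN, and validation regex are hypothetical placeholders.

```java
// Hedged sketch of CreateAuthorizer for a TOKEN authorizer; identifiers are placeholders.
import software.amazon.awssdk.services.apigateway.ApiGatewayClient;
import software.amazon.awssdk.services.apigateway.model.AuthorizerType;
import software.amazon.awssdk.services.apigateway.model.CreateAuthorizerRequest;
import software.amazon.awssdk.services.apigateway.model.CreateAuthorizerResponse;

public class CreateAuthorizerSketch {
    public static void main(String[] args) {
        try (ApiGatewayClient apiGateway = ApiGatewayClient.create()) {
            CreateAuthorizerResponse authorizer = apiGateway.createAuthorizer(CreateAuthorizerRequest.builder()
                    .restApiId("o81lxisefl")
                    .name("tokenAuthorizer")
                    .type(AuthorizerType.TOKEN)
                    // The token is read from this header and passed to the Lambda authorizer.
                    .identitySource("method.request.header.Authorization")
                    .identityValidationExpression("^Bearer [-0-9A-Za-z\\.]+$")
                    .authorizerUri("arn:aws:apigateway:us-west-2:lambda:path/2015-03-31/functions/"
                            + "arn:aws:lambda:us-west-2:123456789012:function:myAuthorizer/invocations")
                    .authorizerResultTtlInSeconds(300)
                    .build());
            System.out.println("Created authorizer " + authorizer.id());
        }
    }
}
```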

" @@ -2170,7 +2258,7 @@ }, "restApiId":{ "shape":"String", - "documentation":"

The name of the API that you want to apply this mapping to.

" + "documentation":"

The string identifier of the associated RestApi.

" }, "stage":{ "shape":"String", @@ -2185,7 +2273,7 @@ "members":{ "restApiId":{ "shape":"String", - "documentation":"

The RestApi resource identifier for the Deployment resource to create.

", + "documentation":"

The string identifier of the associated RestApi.

", "location":"uri", "locationName":"restapi_id" }, @@ -2226,7 +2314,7 @@ "members":{ "restApiId":{ "shape":"String", - "documentation":"

[Required] The identifier of an API of the to-be-created documentation part.

", + "documentation":"

[Required] The string identifier of the associated RestApi.

", "location":"uri", "locationName":"restapi_id" }, @@ -2250,7 +2338,7 @@ "members":{ "restApiId":{ "shape":"String", - "documentation":"

[Required] Specifies the API identifier of the to-be-created documentation version.

", + "documentation":"

[Required] The string identifier of the associated RestApi.

", "location":"uri", "locationName":"restapi_id" }, @@ -2279,23 +2367,35 @@ }, "certificateName":{ "shape":"String", - "documentation":"

The user-friendly name of the certificate.

" + "documentation":"

The user-friendly name of the certificate that will be used by the edge-optimized endpoint for this domain name.

" }, "certificateBody":{ "shape":"String", - "documentation":"

[Deprecated] The body of the server certificate provided by your certificate authority.

" + "documentation":"

[Deprecated] The body of the server certificate, provided by your certificate authority, that will be used by the edge-optimized endpoint for this domain name.

" }, "certificatePrivateKey":{ "shape":"String", - "documentation":"

[Deprecated] Your certificate's private key.

" + "documentation":"

[Deprecated] The private key of the certificate used by the edge-optimized endpoint for this domain name.

" }, "certificateChain":{ "shape":"String", - "documentation":"

[Deprecated] The intermediate certificates and optionally the root certificate, one after the other without any blank lines. If you include the root certificate, your certificate chain must start with intermediate certificates and end with the root certificate. Use the intermediate certificates that were provided by your certificate authority. Do not include any intermediaries that are not in the chain of trust path.

" + "documentation":"

[Deprecated] The intermediate certificates and optionally the root certificate, one after the other without any blank lines, used by an edge-optimized endpoint for this domain name. If you include the root certificate, your certificate chain must start with intermediate certificates and end with the root certificate. Use the intermediate certificates that were provided by your certificate authority. Do not include any intermediaries that are not in the chain of trust path.

" }, "certificateArn":{ "shape":"String", - "documentation":"

The reference to an AWS-managed certificate. AWS Certificate Manager is the only supported source.

" + "documentation":"

The reference to an AWS-managed certificate that will be used by the edge-optimized endpoint for this domain name. AWS Certificate Manager is the only supported source.

" + }, + "regionalCertificateName":{ + "shape":"String", + "documentation":"

The user-friendly name of the certificate that will be used by the regional endpoint for this domain name.

" + }, + "regionalCertificateArn":{ + "shape":"String", + "documentation":"

The reference to an AWS-managed certificate that will be used by the regional endpoint for this domain name. AWS Certificate Manager is the only supported source.

" + }, + "endpointConfiguration":{ + "shape":"EndpointConfiguration", + "documentation":"

The endpoint configuration of this DomainName showing the endpoint types of the domain name.

" } }, "documentation":"

A request to create a new domain name.

" @@ -2316,7 +2416,7 @@ }, "name":{ "shape":"String", - "documentation":"

The name of the model.

" + "documentation":"

The name of the model. Must be alphanumeric.

" }, "description":{ "shape":"String", @@ -2339,7 +2439,7 @@ "members":{ "restApiId":{ "shape":"String", - "documentation":"

[Required] The identifier of the RestApi for which the RequestValidator is created.

", + "documentation":"

The string identifier of the associated RestApi.

", "location":"uri", "locationName":"restapi_id" }, @@ -2368,7 +2468,7 @@ "members":{ "restApiId":{ "shape":"String", - "documentation":"

The identifier of the RestApi for the resource.

", + "documentation":"

The string identifier of the associated RestApi.

", "location":"uri", "locationName":"restapi_id" }, @@ -2408,6 +2508,10 @@ "binaryMediaTypes":{ "shape":"ListOfString", "documentation":"

The list of binary media types supported by the RestApi. By default, the RestApi supports only UTF-8-encoded text payloads.

" + }, + "endpointConfiguration":{ + "shape":"EndpointConfiguration", + "documentation":"

The endpoint configuration of this RestApi showing the endpoint types of the API.

" } }, "documentation":"

The POST Request to add a new RestApi resource to your collection.

" @@ -2422,7 +2526,7 @@ "members":{ "restApiId":{ "shape":"String", - "documentation":"

The identifier of the RestApi resource for the Stage resource to create.

", + "documentation":"

The string identifier of the associated RestApi.

", "location":"uri", "locationName":"restapi_id" }, @@ -2531,7 +2635,7 @@ "members":{ "restApiId":{ "shape":"String", - "documentation":"

The RestApi identifier for the Authorizer resource.

", + "documentation":"

The string identifier of the associated RestApi.

", "location":"uri", "locationName":"restapi_id" }, @@ -2588,7 +2692,7 @@ "members":{ "restApiId":{ "shape":"String", - "documentation":"

The identifier of the RestApi resource for the Deployment resource to delete.

", + "documentation":"

The string identifier of the associated RestApi.

", "location":"uri", "locationName":"restapi_id" }, @@ -2610,7 +2714,7 @@ "members":{ "restApiId":{ "shape":"String", - "documentation":"

[Required] Specifies the identifier of an API of the to-be-deleted documentation part.

", + "documentation":"

[Required] The string identifier of the associated RestApi.

", "location":"uri", "locationName":"restapi_id" }, @@ -2632,7 +2736,7 @@ "members":{ "restApiId":{ "shape":"String", - "documentation":"

[Required] The identifier of an API of a to-be-deleted documentation snapshot.

", + "documentation":"

[Required] The string identifier of the associated RestApi.

", "location":"uri", "locationName":"restapi_id" }, @@ -2658,6 +2762,28 @@ }, "documentation":"

A request to delete the DomainName resource.

" }, + "DeleteGatewayResponseRequest":{ + "type":"structure", + "required":[ + "restApiId", + "responseType" + ], + "members":{ + "restApiId":{ + "shape":"String", + "documentation":"

The string identifier of the associated RestApi.

", + "location":"uri", + "locationName":"restapi_id" + }, + "responseType":{ + "shape":"GatewayResponseType", + "documentation":"

The response type of the associated GatewayResponse. Valid values are the response types listed in the GatewayResponseType enumeration.

", + "location":"uri", + "locationName":"response_type" + } + }, + "documentation":"

Clears any customization of a GatewayResponse of a specified response type on the given RestApi and resets it with the default settings.

" + }, "DeleteIntegrationRequest":{ "type":"structure", "required":[ @@ -2668,7 +2794,7 @@ "members":{ "restApiId":{ "shape":"String", - "documentation":"

Specifies a delete integration request's API identifier.

", + "documentation":"

The string identifier of the associated RestApi.

", "location":"uri", "locationName":"restapi_id" }, @@ -2698,7 +2824,7 @@ "members":{ "restApiId":{ "shape":"String", - "documentation":"

Specifies a delete integration response request's API identifier.

", + "documentation":"

The string identifier of the associated RestApi.

", "location":"uri", "locationName":"restapi_id" }, @@ -2733,7 +2859,7 @@ "members":{ "restApiId":{ "shape":"String", - "documentation":"

The RestApi identifier for the Method resource.

", + "documentation":"

The string identifier of the associated RestApi.

", "location":"uri", "locationName":"restapi_id" }, @@ -2763,7 +2889,7 @@ "members":{ "restApiId":{ "shape":"String", - "documentation":"

The RestApi identifier for the MethodResponse resource.

", + "documentation":"

The string identifier of the associated RestApi.

", "location":"uri", "locationName":"restapi_id" }, @@ -2797,7 +2923,7 @@ "members":{ "restApiId":{ "shape":"String", - "documentation":"

The RestApi under which the model will be deleted.

", + "documentation":"

The string identifier of the associated RestApi.

", "location":"uri", "locationName":"restapi_id" }, @@ -2819,7 +2945,7 @@ "members":{ "restApiId":{ "shape":"String", - "documentation":"

[Required] The identifier of the RestApi from which the given RequestValidator is deleted.

", + "documentation":"

The string identifier of the associated RestApi.

", "location":"uri", "locationName":"restapi_id" }, @@ -2841,7 +2967,7 @@ "members":{ "restApiId":{ "shape":"String", - "documentation":"

The RestApi identifier for the Resource resource.

", + "documentation":"

The string identifier of the associated RestApi.

", "location":"uri", "locationName":"restapi_id" }, @@ -2860,7 +2986,7 @@ "members":{ "restApiId":{ "shape":"String", - "documentation":"

The ID of the RestApi you want to delete.

", + "documentation":"

The string identifier of the associated RestApi.

", "location":"uri", "locationName":"restapi_id" } @@ -2876,7 +3002,7 @@ "members":{ "restApiId":{ "shape":"String", - "documentation":"

The identifier of the RestApi resource for the Stage resource to delete.

", + "documentation":"

The string identifier of the associated RestApi.

", "location":"uri", "locationName":"restapi_id" }, @@ -2922,7 +3048,7 @@ "locationName":"usageplanId" } }, - "documentation":"

The DELETE request to delete a uasge plan of a given plan Id.

" + "documentation":"

The DELETE request to delete a usage plan of a given plan Id.

" }, "Deployment":{ "type":"structure", @@ -2952,7 +3078,7 @@ "position":{"shape":"String"}, "items":{ "shape":"ListOfDeployment", - "documentation":"

The current page of any Deployment resources in the collection of deployment resources.

", + "documentation":"

The current page of elements from this collection.

", "locationName":"item" } }, @@ -2996,7 +3122,7 @@ "members":{ "type":{ "shape":"DocumentationPartType", - "documentation":"

The type of API entity to which the documentation content applies. It is a valid and required field for API entity types of API, AUTHORIZER, MODEL, RESOURCE, METHOD, PATH_PARAMETER, QUERY_PARAMETER, REQUEST_HEADER, REQUEST_BODY, RESPONSE, RESPONSE_HEADER, and RESPONSE_BODY. Content inheritance does not apply to any entity of the API, AUTHROZER, METHOD, MODEL, REQUEST_BODY, or RESOURCE type.

" + "documentation":"

The type of API entity to which the documentation content applies. It is a valid and required field for API entity types of API, AUTHORIZER, MODEL, RESOURCE, METHOD, PATH_PARAMETER, QUERY_PARAMETER, REQUEST_HEADER, REQUEST_BODY, RESPONSE, RESPONSE_HEADER, and RESPONSE_BODY. Content inheritance does not apply to any entity of the API, AUTHORIZER, METHOD, MODEL, REQUEST_BODY, or RESOURCE type.

" }, "path":{ "shape":"String", @@ -3044,7 +3170,7 @@ "position":{"shape":"String"}, "items":{ "shape":"ListOfDocumentationPart", - "documentation":"

The current page of DocumentationPart resources in the DocumentationParts collection.

", + "documentation":"

The current page of elements from this collection.

", "locationName":"item" } }, @@ -3074,7 +3200,7 @@ "position":{"shape":"String"}, "items":{ "shape":"ListOfDocumentationVersion", - "documentation":"

The current page of DocumentationVersion items from the DocumentationVersions collection of an API.

", + "documentation":"

The current page of elements from this collection.

", "locationName":"item" } }, @@ -3089,22 +3215,46 @@ }, "certificateName":{ "shape":"String", - "documentation":"

The name of the certificate.

" + "documentation":"

The name of the certificate that will be used by the edge-optimized endpoint for this domain name.

" }, "certificateArn":{ "shape":"String", - "documentation":"

The reference to an AWS-managed certificate. AWS Certificate Manager is the only supported source.

" + "documentation":"

The reference to an AWS-managed certificate that will be used by the edge-optimized endpoint for this domain name. AWS Certificate Manager is the only supported source.

" }, "certificateUploadDate":{ "shape":"Timestamp", - "documentation":"

The timestamp when the certificate was uploaded.

" + "documentation":"

The timestamp when the certificate that was used by the edge-optimized endpoint for this domain name was uploaded.

" + }, + "regionalDomainName":{ + "shape":"String", + "documentation":"

The domain name associated with the regional endpoint for this custom domain name. You set up this association by adding a DNS record that points the custom domain name to this regional domain name. The regional domain name is returned by Amazon API Gateway when you create a regional endpoint.

" + }, + "regionalHostedZoneId":{ + "shape":"String", + "documentation":"

The region-specific Amazon Route 53 Hosted Zone ID of the regional endpoint. For more information, see Set up a Regional Custom Domain Name and AWS Regions and Endpoints for API Gateway.

" + }, + "regionalCertificateName":{ + "shape":"String", + "documentation":"

The name of the certificate that will be used for validating the regional domain name.

" + }, + "regionalCertificateArn":{ + "shape":"String", + "documentation":"

The reference to an AWS-managed certificate that will be used for validating the regional domain name. AWS Certificate Manager is the only supported source.

" }, "distributionDomainName":{ "shape":"String", - "documentation":"

The domain name of the Amazon CloudFront distribution. For more information, see the Amazon CloudFront documentation.

" + "documentation":"

The domain name of the Amazon CloudFront distribution associated with this custom domain name for an edge-optimized endpoint. You set up this association when adding a DNS record pointing the custom domain name to this distribution name. For more information about CloudFront distributions, see the Amazon CloudFront documentation.

" + }, + "distributionHostedZoneId":{ + "shape":"String", + "documentation":"

The region-agnostic Amazon Route 53 Hosted Zone ID of the edge-optimized endpoint. The valid value is Z2FDTNDATAQYW2 for all the regions. For more information, see Set up a Regional Custom Domain Name and AWS Regions and Endpoints for API Gateway.

" + }, + "endpointConfiguration":{ + "shape":"EndpointConfiguration", + "documentation":"

The endpoint configuration of this DomainName showing the endpoint types of the domain name.

" } }, - "documentation":"

Represents a domain name that is contained in a simpler, more intuitive URL that can be called.

Use Client-Side Certificate
" + "documentation":"

Represents a custom domain name as a user-friendly host name of an API (RestApi).

When you deploy an API, Amazon API Gateway creates a default host name for the API. This default API host name is of the {restapi-id}.execute-api.{region}.amazonaws.com format. With the default host name, you can access the API's root resource with the URL of https://{restapi-id}.execute-api.{region}.amazonaws.com/{stage}/. When you set up a custom domain name of apis.example.com for this API, you can then access the same resource using the URL https://apis.example.com/myApi, where myApi is the base path mapping (BasePathMapping) of your API under the custom domain name.

Set a Custom Host Name for an API
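As a hedged illustration of setting up such a custom domain name as a regional endpoint, here is a minimal sketch with the AWS SDK for Java 2.x ApiGatewayClient; the domain name and ACM certificate ARN are hypothetical placeholders.

```java
// Hedged sketch of CreateDomainName for a REGIONAL endpoint; inputs are placeholders.
import software.amazon.awssdk.services.apigateway.ApiGatewayClient;
import software.amazon.awssdk.services.apigateway.model.CreateDomainNameRequest;
import software.amazon.awssdk.services.apigateway.model.CreateDomainNameResponse;
import software.amazon.awssdk.services.apigateway.model.EndpointConfiguration;
import software.amazon.awssdk.services.apigateway.model.EndpointType;

public class CreateDomainNameSketch {
    public static void main(String[] args) {
        try (ApiGatewayClient apiGateway = ApiGatewayClient.create()) {
            CreateDomainNameResponse domain = apiGateway.createDomainName(CreateDomainNameRequest.builder()
                    .domainName("apis.example.com")
                    .regionalCertificateArn("arn:aws:acm:us-west-2:123456789012:certificate/example-id")
                    .endpointConfiguration(EndpointConfiguration.builder()
                            .types(EndpointType.REGIONAL)
                            .build())
                    .build());
            // Point a DNS record at this regional domain name to complete the association.
            System.out.println("Regional domain name: " + domain.regionalDomainName());
        }
    }
}
```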
" }, "DomainNames":{ "type":"structure", @@ -3112,13 +3262,31 @@ "position":{"shape":"String"}, "items":{ "shape":"ListOfDomainName", - "documentation":"

The current page of any DomainName resources in the collection of DomainName resources.

", + "documentation":"

The current page of elements from this collection.

", "locationName":"item" } }, "documentation":"

Represents a collection of DomainName resources.

Use Client-Side Certificate
" }, "Double":{"type":"double"}, + "EndpointConfiguration":{ + "type":"structure", + "members":{ + "types":{ + "shape":"ListOfEndpointType", + "documentation":"

A list of endpoint types of an API (RestApi) or its custom domain name (DomainName). For an edge-optimized API and its custom domain name, the endpoint type is EDGE. For a regional API and its custom domain name, the endpoint type is REGIONAL.

" + } + }, + "documentation":"

The endpoint configuration to indicate the types of endpoints an API (RestApi) or its custom domain name (DomainName) has.

" + }, + "EndpointType":{ + "type":"string", + "documentation":"

The endpoint type. The valid values are EDGE for edge-optimized API setup, most suitable for mobile applications, and REGIONAL for regional API endpoint setup, most suitable for calling from an AWS Region.

", + "enum":[ + "REGIONAL", + "EDGE" + ] + }, "ExportResponse":{ "type":"structure", "members":{ @@ -3151,7 +3319,7 @@ "members":{ "restApiId":{ "shape":"String", - "documentation":"

The API identifier of the stage to flush.

", + "documentation":"

The string identifier of the associated RestApi.

", "location":"uri", "locationName":"restapi_id" }, @@ -3173,7 +3341,7 @@ "members":{ "restApiId":{ "shape":"String", - "documentation":"

The API identifier of the stage to flush its cache.

", + "documentation":"

The string identifier of the associated RestApi.

", "location":"uri", "locationName":"restapi_id" }, @@ -3186,6 +3354,69 @@ }, "documentation":"

Requests Amazon API Gateway to flush a stage's cache.

" }, + "GatewayResponse":{ + "type":"structure", + "members":{ + "responseType":{ + "shape":"GatewayResponseType", + "documentation":"

The response type of the associated GatewayResponse. Valid values are the response types listed in the GatewayResponseType enumeration.

" + }, + "statusCode":{ + "shape":"StatusCode", + "documentation":"

The HTTP status code for this GatewayResponse.

" + }, + "responseParameters":{ + "shape":"MapOfStringToString", + "documentation":"

Response parameters (paths, query strings and headers) of the GatewayResponse as a string-to-string map of key-value pairs.

" + }, + "responseTemplates":{ + "shape":"MapOfStringToString", + "documentation":"

Response templates of the GatewayResponse as a string-to-string map of key-value pairs.

" + }, + "defaultResponse":{ + "shape":"Boolean", + "documentation":"

A Boolean flag to indicate whether this GatewayResponse is the default gateway response (true) or not (false). A default gateway response is one generated by Amazon API Gateway without any customization by an API developer.

" + } + }, + "documentation":"

A gateway response of a given response type and status code, with optional response parameters and mapping templates.

For more information about valid gateway response types, see Gateway Response Types Supported by Amazon API Gateway

Example: Get a Gateway Response of a given response type

Request

This example shows how to get a gateway response of the MISSING_AUTHENTICATION_TOKEN type.

GET /restapis/o81lxisefl/gatewayresponses/MISSING_AUTHENTICATION_TOKEN HTTP/1.1 Host: beta-apigateway.us-east-1.amazonaws.com Content-Type: application/json X-Amz-Date: 20170503T202516Z Authorization: AWS4-HMAC-SHA256 Credential={access-key-id}/20170503/us-east-1/apigateway/aws4_request, SignedHeaders=content-type;host;x-amz-date, Signature=1b52460e3159c1a26cff29093855d50ea141c1c5b937528fecaf60f51129697a Cache-Control: no-cache Postman-Token: 3b2a1ce9-c848-2e26-2e2f-9c2caefbed45 

The response type is specified as a URL path.

Response

The successful operation returns the 200 OK status code and a payload similar to the following:

{ \"_links\": { \"curies\": { \"href\": \"http://docs.aws.amazon.com/apigateway/latest/developerguide/restapi-gatewayresponse-{rel}.html\", \"name\": \"gatewayresponse\", \"templated\": true }, \"self\": { \"href\": \"/restapis/o81lxisefl/gatewayresponses/MISSING_AUTHENTICATION_TOKEN\" }, \"gatewayresponse:delete\": { \"href\": \"/restapis/o81lxisefl/gatewayresponses/MISSING_AUTHENTICATION_TOKEN\" }, \"gatewayresponse:put\": { \"href\": \"/restapis/o81lxisefl/gatewayresponses/{response_type}\", \"templated\": true }, \"gatewayresponse:update\": { \"href\": \"/restapis/o81lxisefl/gatewayresponses/MISSING_AUTHENTICATION_TOKEN\" } }, \"defaultResponse\": false, \"responseParameters\": { \"gatewayresponse.header.x-request-path\": \"method.request.path.petId\", \"gatewayresponse.header.Access-Control-Allow-Origin\": \"'a.b.c'\", \"gatewayresponse.header.x-request-query\": \"method.request.querystring.q\", \"gatewayresponse.header.x-request-header\": \"method.request.header.Accept\" }, \"responseTemplates\": { \"application/json\": \"{\\n \\\"message\\\": $context.error.messageString,\\n \\\"type\\\": \\\"$context.error.responseType\\\",\\n \\\"stage\\\": \\\"$context.stage\\\",\\n \\\"resourcePath\\\": \\\"$context.resourcePath\\\",\\n \\\"stageVariables.a\\\": \\\"$stageVariables.a\\\",\\n \\\"statusCode\\\": \\\"'404'\\\"\\n}\" }, \"responseType\": \"MISSING_AUTHENTICATION_TOKEN\", \"statusCode\": \"404\" }

Customize Gateway Responses
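As a hedged illustration, the customization shown in the example payload above can be sketched with the AWS SDK for Java 2.x ApiGatewayClient; the REST API id is reused from the example, and the header mapping and template are illustrative.

```java
// Hedged sketch: customize the MISSING_AUTHENTICATION_TOKEN gateway response.
import java.util.Collections;

import software.amazon.awssdk.services.apigateway.ApiGatewayClient;
import software.amazon.awssdk.services.apigateway.model.GatewayResponseType;
import software.amazon.awssdk.services.apigateway.model.PutGatewayResponseRequest;

public class PutGatewayResponseSketch {
    public static void main(String[] args) {
        try (ApiGatewayClient apiGateway = ApiGatewayClient.create()) {
            apiGateway.putGatewayResponse(PutGatewayResponseRequest.builder()
                    .restApiId("o81lxisefl")
                    .responseType(GatewayResponseType.MISSING_AUTHENTICATION_TOKEN)
                    .statusCode("404")
                    // Map a CORS header onto the gateway response.
                    .responseParameters(Collections.singletonMap(
                            "gatewayresponse.header.Access-Control-Allow-Origin", "'a.b.c'"))
                    .responseTemplates(Collections.singletonMap(
                            "application/json", "{\"message\": $context.error.messageString}"))
                    .build());
        }
    }
}
```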
" + }, + "GatewayResponseType":{ + "type":"string", + "enum":[ + "DEFAULT_4XX", + "DEFAULT_5XX", + "RESOURCE_NOT_FOUND", + "UNAUTHORIZED", + "INVALID_API_KEY", + "ACCESS_DENIED", + "AUTHORIZER_FAILURE", + "AUTHORIZER_CONFIGURATION_ERROR", + "INVALID_SIGNATURE", + "EXPIRED_TOKEN", + "MISSING_AUTHENTICATION_TOKEN", + "INTEGRATION_FAILURE", + "INTEGRATION_TIMEOUT", + "API_CONFIGURATION_ERROR", + "UNSUPPORTED_MEDIA_TYPE", + "BAD_REQUEST_PARAMETERS", + "BAD_REQUEST_BODY", + "REQUEST_TOO_LARGE", + "THROTTLED", + "QUOTA_EXCEEDED" + ] + }, + "GatewayResponses":{ + "type":"structure", + "members":{ + "position":{"shape":"String"}, + "items":{ + "shape":"ListOfGatewayResponse", + "documentation":"

Returns the entire collection, because pagination is not supported.

", + "locationName":"item" + } + }, + "documentation":"

The collection of the GatewayResponse instances of a RestApi as a responseType-to-GatewayResponse object map of key-value pairs. As such, pagination is not supported for querying this collection.

For more information about valid gateway response types, see Gateway Response Types Supported by Amazon API Gateway

Example: Get the collection of gateway responses of an API

Request

This example request shows how to retrieve the GatewayResponses collection from an API.

GET /restapis/o81lxisefl/gatewayresponses HTTP/1.1 Host: beta-apigateway.us-east-1.amazonaws.com Content-Type: application/json X-Amz-Date: 20170503T220604Z Authorization: AWS4-HMAC-SHA256 Credential={access-key-id}/20170503/us-east-1/apigateway/aws4_request, SignedHeaders=content-type;host;x-amz-date, Signature=59b42fe54a76a5de8adf2c67baa6d39206f8e9ad49a1d77ccc6a5da3103a398a Cache-Control: no-cache Postman-Token: 5637af27-dc29-fc5c-9dfe-0645d52cb515 

Response

The successful operation returns the 200 OK status code and a payload similar to the following:

{ \"_links\": { \"curies\": { \"href\": \"http://docs.aws.amazon.com/apigateway/latest/developerguide/restapi-gatewayresponse-{rel}.html\", \"name\": \"gatewayresponse\", \"templated\": true }, \"self\": { \"href\": \"/restapis/o81lxisefl/gatewayresponses\" }, \"first\": { \"href\": \"/restapis/o81lxisefl/gatewayresponses\" }, \"gatewayresponse:by-type\": { \"href\": \"/restapis/o81lxisefl/gatewayresponses/{response_type}\", \"templated\": true }, \"item\": [ { \"href\": \"/restapis/o81lxisefl/gatewayresponses/INTEGRATION_FAILURE\" }, { \"href\": \"/restapis/o81lxisefl/gatewayresponses/RESOURCE_NOT_FOUND\" }, { \"href\": \"/restapis/o81lxisefl/gatewayresponses/REQUEST_TOO_LARGE\" }, { \"href\": \"/restapis/o81lxisefl/gatewayresponses/THROTTLED\" }, { \"href\": \"/restapis/o81lxisefl/gatewayresponses/UNSUPPORTED_MEDIA_TYPE\" }, { \"href\": \"/restapis/o81lxisefl/gatewayresponses/AUTHORIZER_CONFIGURATION_ERROR\" }, { \"href\": \"/restapis/o81lxisefl/gatewayresponses/DEFAULT_5XX\" }, { \"href\": \"/restapis/o81lxisefl/gatewayresponses/DEFAULT_4XX\" }, { \"href\": \"/restapis/o81lxisefl/gatewayresponses/BAD_REQUEST_PARAMETERS\" }, { \"href\": \"/restapis/o81lxisefl/gatewayresponses/BAD_REQUEST_BODY\" }, { \"href\": \"/restapis/o81lxisefl/gatewayresponses/EXPIRED_TOKEN\" }, { \"href\": \"/restapis/o81lxisefl/gatewayresponses/ACCESS_DENIED\" }, { \"href\": \"/restapis/o81lxisefl/gatewayresponses/INVALID_API_KEY\" }, { \"href\": \"/restapis/o81lxisefl/gatewayresponses/UNAUTHORIZED\" }, { \"href\": \"/restapis/o81lxisefl/gatewayresponses/API_CONFIGURATION_ERROR\" }, { \"href\": \"/restapis/o81lxisefl/gatewayresponses/QUOTA_EXCEEDED\" }, { \"href\": \"/restapis/o81lxisefl/gatewayresponses/INTEGRATION_TIMEOUT\" }, { \"href\": \"/restapis/o81lxisefl/gatewayresponses/MISSING_AUTHENTICATION_TOKEN\" }, { \"href\": \"/restapis/o81lxisefl/gatewayresponses/INVALID_SIGNATURE\" }, { \"href\": \"/restapis/o81lxisefl/gatewayresponses/AUTHORIZER_FAILURE\" } ] }, \"_embedded\": { \"item\": [ { \"_links\": { \"self\": { \"href\": \"/restapis/o81lxisefl/gatewayresponses/INTEGRATION_FAILURE\" }, \"gatewayresponse:put\": { \"href\": \"/restapis/o81lxisefl/gatewayresponses/{response_type}\", \"templated\": true }, \"gatewayresponse:update\": { \"href\": \"/restapis/o81lxisefl/gatewayresponses/INTEGRATION_FAILURE\" } }, \"defaultResponse\": true, \"responseParameters\": {}, \"responseTemplates\": { \"application/json\": \"{\\\"message\\\":$context.error.messageString}\" }, \"responseType\": \"INTEGRATION_FAILURE\", \"statusCode\": \"504\" }, { \"_links\": { \"self\": { \"href\": \"/restapis/o81lxisefl/gatewayresponses/RESOURCE_NOT_FOUND\" }, \"gatewayresponse:put\": { \"href\": \"/restapis/o81lxisefl/gatewayresponses/{response_type}\", \"templated\": true }, \"gatewayresponse:update\": { \"href\": \"/restapis/o81lxisefl/gatewayresponses/RESOURCE_NOT_FOUND\" } }, \"defaultResponse\": true, \"responseParameters\": {}, \"responseTemplates\": { \"application/json\": \"{\\\"message\\\":$context.error.messageString}\" }, \"responseType\": \"RESOURCE_NOT_FOUND\", \"statusCode\": \"404\" }, { \"_links\": { \"self\": { \"href\": \"/restapis/o81lxisefl/gatewayresponses/REQUEST_TOO_LARGE\" }, \"gatewayresponse:put\": { \"href\": \"/restapis/o81lxisefl/gatewayresponses/{response_type}\", \"templated\": true }, \"gatewayresponse:update\": { \"href\": \"/restapis/o81lxisefl/gatewayresponses/REQUEST_TOO_LARGE\" } }, \"defaultResponse\": true, \"responseParameters\": {}, \"responseTemplates\": { \"application/json\": 
\"{\\\"message\\\":$context.error.messageString}\" }, \"responseType\": \"REQUEST_TOO_LARGE\", \"statusCode\": \"413\" }, { \"_links\": { \"self\": { \"href\": \"/restapis/o81lxisefl/gatewayresponses/THROTTLED\" }, \"gatewayresponse:put\": { \"href\": \"/restapis/o81lxisefl/gatewayresponses/{response_type}\", \"templated\": true }, \"gatewayresponse:update\": { \"href\": \"/restapis/o81lxisefl/gatewayresponses/THROTTLED\" } }, \"defaultResponse\": true, \"responseParameters\": {}, \"responseTemplates\": { \"application/json\": \"{\\\"message\\\":$context.error.messageString}\" }, \"responseType\": \"THROTTLED\", \"statusCode\": \"429\" }, { \"_links\": { \"self\": { \"href\": \"/restapis/o81lxisefl/gatewayresponses/UNSUPPORTED_MEDIA_TYPE\" }, \"gatewayresponse:put\": { \"href\": \"/restapis/o81lxisefl/gatewayresponses/{response_type}\", \"templated\": true }, \"gatewayresponse:update\": { \"href\": \"/restapis/o81lxisefl/gatewayresponses/UNSUPPORTED_MEDIA_TYPE\" } }, \"defaultResponse\": true, \"responseParameters\": {}, \"responseTemplates\": { \"application/json\": \"{\\\"message\\\":$context.error.messageString}\" }, \"responseType\": \"UNSUPPORTED_MEDIA_TYPE\", \"statusCode\": \"415\" }, { \"_links\": { \"self\": { \"href\": \"/restapis/o81lxisefl/gatewayresponses/AUTHORIZER_CONFIGURATION_ERROR\" }, \"gatewayresponse:put\": { \"href\": \"/restapis/o81lxisefl/gatewayresponses/{response_type}\", \"templated\": true }, \"gatewayresponse:update\": { \"href\": \"/restapis/o81lxisefl/gatewayresponses/AUTHORIZER_CONFIGURATION_ERROR\" } }, \"defaultResponse\": true, \"responseParameters\": {}, \"responseTemplates\": { \"application/json\": \"{\\\"message\\\":$context.error.messageString}\" }, \"responseType\": \"AUTHORIZER_CONFIGURATION_ERROR\", \"statusCode\": \"500\" }, { \"_links\": { \"self\": { \"href\": \"/restapis/o81lxisefl/gatewayresponses/DEFAULT_5XX\" }, \"gatewayresponse:put\": { \"href\": \"/restapis/o81lxisefl/gatewayresponses/{response_type}\", \"templated\": true }, \"gatewayresponse:update\": { \"href\": \"/restapis/o81lxisefl/gatewayresponses/DEFAULT_5XX\" } }, \"defaultResponse\": true, \"responseParameters\": {}, \"responseTemplates\": { \"application/json\": \"{\\\"message\\\":$context.error.messageString}\" }, \"responseType\": \"DEFAULT_5XX\" }, { \"_links\": { \"self\": { \"href\": \"/restapis/o81lxisefl/gatewayresponses/DEFAULT_4XX\" }, \"gatewayresponse:put\": { \"href\": \"/restapis/o81lxisefl/gatewayresponses/{response_type}\", \"templated\": true }, \"gatewayresponse:update\": { \"href\": \"/restapis/o81lxisefl/gatewayresponses/DEFAULT_4XX\" } }, \"defaultResponse\": true, \"responseParameters\": {}, \"responseTemplates\": { \"application/json\": \"{\\\"message\\\":$context.error.messageString}\" }, \"responseType\": \"DEFAULT_4XX\" }, { \"_links\": { \"self\": { \"href\": \"/restapis/o81lxisefl/gatewayresponses/BAD_REQUEST_PARAMETERS\" }, \"gatewayresponse:put\": { \"href\": \"/restapis/o81lxisefl/gatewayresponses/{response_type}\", \"templated\": true }, \"gatewayresponse:update\": { \"href\": \"/restapis/o81lxisefl/gatewayresponses/BAD_REQUEST_PARAMETERS\" } }, \"defaultResponse\": true, \"responseParameters\": {}, \"responseTemplates\": { \"application/json\": \"{\\\"message\\\":$context.error.messageString}\" }, \"responseType\": \"BAD_REQUEST_PARAMETERS\", \"statusCode\": \"400\" }, { \"_links\": { \"self\": { \"href\": \"/restapis/o81lxisefl/gatewayresponses/BAD_REQUEST_BODY\" }, \"gatewayresponse:put\": { \"href\": 
\"/restapis/o81lxisefl/gatewayresponses/{response_type}\", \"templated\": true }, \"gatewayresponse:update\": { \"href\": \"/restapis/o81lxisefl/gatewayresponses/BAD_REQUEST_BODY\" } }, \"defaultResponse\": true, \"responseParameters\": {}, \"responseTemplates\": { \"application/json\": \"{\\\"message\\\":$context.error.messageString}\" }, \"responseType\": \"BAD_REQUEST_BODY\", \"statusCode\": \"400\" }, { \"_links\": { \"self\": { \"href\": \"/restapis/o81lxisefl/gatewayresponses/EXPIRED_TOKEN\" }, \"gatewayresponse:put\": { \"href\": \"/restapis/o81lxisefl/gatewayresponses/{response_type}\", \"templated\": true }, \"gatewayresponse:update\": { \"href\": \"/restapis/o81lxisefl/gatewayresponses/EXPIRED_TOKEN\" } }, \"defaultResponse\": true, \"responseParameters\": {}, \"responseTemplates\": { \"application/json\": \"{\\\"message\\\":$context.error.messageString}\" }, \"responseType\": \"EXPIRED_TOKEN\", \"statusCode\": \"403\" }, { \"_links\": { \"self\": { \"href\": \"/restapis/o81lxisefl/gatewayresponses/ACCESS_DENIED\" }, \"gatewayresponse:put\": { \"href\": \"/restapis/o81lxisefl/gatewayresponses/{response_type}\", \"templated\": true }, \"gatewayresponse:update\": { \"href\": \"/restapis/o81lxisefl/gatewayresponses/ACCESS_DENIED\" } }, \"defaultResponse\": true, \"responseParameters\": {}, \"responseTemplates\": { \"application/json\": \"{\\\"message\\\":$context.error.messageString}\" }, \"responseType\": \"ACCESS_DENIED\", \"statusCode\": \"403\" }, { \"_links\": { \"self\": { \"href\": \"/restapis/o81lxisefl/gatewayresponses/INVALID_API_KEY\" }, \"gatewayresponse:put\": { \"href\": \"/restapis/o81lxisefl/gatewayresponses/{response_type}\", \"templated\": true }, \"gatewayresponse:update\": { \"href\": \"/restapis/o81lxisefl/gatewayresponses/INVALID_API_KEY\" } }, \"defaultResponse\": true, \"responseParameters\": {}, \"responseTemplates\": { \"application/json\": \"{\\\"message\\\":$context.error.messageString}\" }, \"responseType\": \"INVALID_API_KEY\", \"statusCode\": \"403\" }, { \"_links\": { \"self\": { \"href\": \"/restapis/o81lxisefl/gatewayresponses/UNAUTHORIZED\" }, \"gatewayresponse:put\": { \"href\": \"/restapis/o81lxisefl/gatewayresponses/{response_type}\", \"templated\": true }, \"gatewayresponse:update\": { \"href\": \"/restapis/o81lxisefl/gatewayresponses/UNAUTHORIZED\" } }, \"defaultResponse\": true, \"responseParameters\": {}, \"responseTemplates\": { \"application/json\": \"{\\\"message\\\":$context.error.messageString}\" }, \"responseType\": \"UNAUTHORIZED\", \"statusCode\": \"401\" }, { \"_links\": { \"self\": { \"href\": \"/restapis/o81lxisefl/gatewayresponses/API_CONFIGURATION_ERROR\" }, \"gatewayresponse:put\": { \"href\": \"/restapis/o81lxisefl/gatewayresponses/{response_type}\", \"templated\": true }, \"gatewayresponse:update\": { \"href\": \"/restapis/o81lxisefl/gatewayresponses/API_CONFIGURATION_ERROR\" } }, \"defaultResponse\": true, \"responseParameters\": {}, \"responseTemplates\": { \"application/json\": \"{\\\"message\\\":$context.error.messageString}\" }, \"responseType\": \"API_CONFIGURATION_ERROR\", \"statusCode\": \"500\" }, { \"_links\": { \"self\": { \"href\": \"/restapis/o81lxisefl/gatewayresponses/QUOTA_EXCEEDED\" }, \"gatewayresponse:put\": { \"href\": \"/restapis/o81lxisefl/gatewayresponses/{response_type}\", \"templated\": true }, \"gatewayresponse:update\": { \"href\": \"/restapis/o81lxisefl/gatewayresponses/QUOTA_EXCEEDED\" } }, \"defaultResponse\": true, \"responseParameters\": {}, \"responseTemplates\": { \"application/json\": 
\"{\\\"message\\\":$context.error.messageString}\" }, \"responseType\": \"QUOTA_EXCEEDED\", \"statusCode\": \"429\" }, { \"_links\": { \"self\": { \"href\": \"/restapis/o81lxisefl/gatewayresponses/INTEGRATION_TIMEOUT\" }, \"gatewayresponse:put\": { \"href\": \"/restapis/o81lxisefl/gatewayresponses/{response_type}\", \"templated\": true }, \"gatewayresponse:update\": { \"href\": \"/restapis/o81lxisefl/gatewayresponses/INTEGRATION_TIMEOUT\" } }, \"defaultResponse\": true, \"responseParameters\": {}, \"responseTemplates\": { \"application/json\": \"{\\\"message\\\":$context.error.messageString}\" }, \"responseType\": \"INTEGRATION_TIMEOUT\", \"statusCode\": \"504\" }, { \"_links\": { \"self\": { \"href\": \"/restapis/o81lxisefl/gatewayresponses/MISSING_AUTHENTICATION_TOKEN\" }, \"gatewayresponse:put\": { \"href\": \"/restapis/o81lxisefl/gatewayresponses/{response_type}\", \"templated\": true }, \"gatewayresponse:update\": { \"href\": \"/restapis/o81lxisefl/gatewayresponses/MISSING_AUTHENTICATION_TOKEN\" } }, \"defaultResponse\": true, \"responseParameters\": {}, \"responseTemplates\": { \"application/json\": \"{\\\"message\\\":$context.error.messageString}\" }, \"responseType\": \"MISSING_AUTHENTICATION_TOKEN\", \"statusCode\": \"403\" }, { \"_links\": { \"self\": { \"href\": \"/restapis/o81lxisefl/gatewayresponses/INVALID_SIGNATURE\" }, \"gatewayresponse:put\": { \"href\": \"/restapis/o81lxisefl/gatewayresponses/{response_type}\", \"templated\": true }, \"gatewayresponse:update\": { \"href\": \"/restapis/o81lxisefl/gatewayresponses/INVALID_SIGNATURE\" } }, \"defaultResponse\": true, \"responseParameters\": {}, \"responseTemplates\": { \"application/json\": \"{\\\"message\\\":$context.error.messageString}\" }, \"responseType\": \"INVALID_SIGNATURE\", \"statusCode\": \"403\" }, { \"_links\": { \"self\": { \"href\": \"/restapis/o81lxisefl/gatewayresponses/AUTHORIZER_FAILURE\" }, \"gatewayresponse:put\": { \"href\": \"/restapis/o81lxisefl/gatewayresponses/{response_type}\", \"templated\": true }, \"gatewayresponse:update\": { \"href\": \"/restapis/o81lxisefl/gatewayresponses/AUTHORIZER_FAILURE\" } }, \"defaultResponse\": true, \"responseParameters\": {}, \"responseTemplates\": { \"application/json\": \"{\\\"message\\\":$context.error.messageString}\" }, \"responseType\": \"AUTHORIZER_FAILURE\", \"statusCode\": \"500\" } ] } }

Customize Gateway Responses
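
To relate the collection shown above to the GetGatewayResponses shapes added later in this change, here is a minimal, hedged sketch of listing an API's gateway responses with the AWS SDK for Java 2.x. The client and method names (ApiGatewayClient, getGatewayResponses) are assumed from the SDK's codegen conventions rather than stated in this diff, and the API id is the example id used in the documentation above.

```java
import software.amazon.awssdk.services.apigateway.ApiGatewayClient;
import software.amazon.awssdk.services.apigateway.model.GetGatewayResponsesRequest;

public class ListGatewayResponsesSketch {
    public static void main(String[] args) {
        // Client name assumed from the service codegen conventions; default region/credentials.
        ApiGatewayClient apigw = ApiGatewayClient.create();

        // Returns the default GatewayResponses collection if no customizations exist yet.
        apigw.getGatewayResponses(GetGatewayResponsesRequest.builder()
                .restApiId("o81lxisefl")   // example API id reused from the documentation above
                .build())
             .items()
             .forEach(r -> System.out.println(r.responseType() + " -> " + r.statusCode()));
    }
}
```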
" + }, "GenerateClientCertificateRequest":{ "type":"structure", "members":{ @@ -3232,7 +3463,7 @@ }, "limit":{ "shape":"NullableInteger", - "documentation":"

The maximum number of ApiKeys to get information about.

", + "documentation":"

The maximum number of returned results per page.

", "location":"querystring", "locationName":"limit" }, @@ -3266,7 +3497,7 @@ "members":{ "restApiId":{ "shape":"String", - "documentation":"

The RestApi identifier for the Authorizer resource.

", + "documentation":"

The string identifier of the associated RestApi.

", "location":"uri", "locationName":"restapi_id" }, @@ -3285,7 +3516,7 @@ "members":{ "restApiId":{ "shape":"String", - "documentation":"

The RestApi identifier for the Authorizers resource.

", + "documentation":"

The string identifier of the associated RestApi.

", "location":"uri", "locationName":"restapi_id" }, @@ -3391,7 +3622,7 @@ "members":{ "restApiId":{ "shape":"String", - "documentation":"

The identifier of the RestApi resource for the Deployment resource to get information about.

", + "documentation":"

The string identifier of the associated RestApi.

", "location":"uri", "locationName":"restapi_id" }, @@ -3416,7 +3647,7 @@ "members":{ "restApiId":{ "shape":"String", - "documentation":"

The identifier of the RestApi resource for the collection of Deployment resources to get information about.

", + "documentation":"

The string identifier of the associated RestApi.

", "location":"uri", "locationName":"restapi_id" }, @@ -3444,13 +3675,13 @@ "members":{ "restApiId":{ "shape":"String", - "documentation":"

[Required] The identifier of an API of the to-be-retrieved documentation part.

", + "documentation":"

[Required] The string identifier of the associated RestApi.

", "location":"uri", "locationName":"restapi_id" }, "documentationPartId":{ "shape":"String", - "documentation":"

[Required] The identifier of the to-be-retrieved documentation part.

", + "documentation":"

[Required] The string identifier of the associated RestApi.

", "location":"uri", "locationName":"part_id" } @@ -3463,7 +3694,7 @@ "members":{ "restApiId":{ "shape":"String", - "documentation":"

[Required] The identifier of the API of the to-be-retrieved documentation parts.

", + "documentation":"

[Required] The string identifier of the associated RestApi.

", "location":"uri", "locationName":"restapi_id" }, @@ -3496,6 +3727,12 @@ "documentation":"

The maximum number of returned results per page.

", "location":"querystring", "locationName":"limit" + }, + "locationStatus":{ + "shape":"LocationStatusType", + "documentation":"

The status of the API documentation parts to retrieve. Valid values are DOCUMENTED for retrieving DocumentationPart resources with content and UNDOCUMENTED for DocumentationPart resources without content.
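
As a hedged illustration of this new filter, the sketch below lists only the documentation parts that still lack content. The client and accessor names are assumed from the SDK's codegen conventions, and the API id is the example id used elsewhere in this file.

```java
import software.amazon.awssdk.services.apigateway.ApiGatewayClient;
import software.amazon.awssdk.services.apigateway.model.GetDocumentationPartsRequest;

public class UndocumentedPartsSketch {
    public static void main(String[] args) {
        ApiGatewayClient apigw = ApiGatewayClient.create();

        // List only the DocumentationPart resources that have no content yet.
        apigw.getDocumentationParts(GetDocumentationPartsRequest.builder()
                .restApiId("o81lxisefl")          // example API id from the documentation above
                .locationStatus("UNDOCUMENTED")   // string overload of LocationStatusType
                .build())
             .items()
             .forEach(part -> System.out.println(part.location() + " is undocumented"));
    }
}
```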

", + "location":"querystring", + "locationName":"locationStatus" } }, "documentation":"

Gets the documentation parts of an API. The result may be filtered by the type, name, or path of API entities (targets).

" @@ -3509,7 +3746,7 @@ "members":{ "restApiId":{ "shape":"String", - "documentation":"

[Required] The identifier of the API of the to-be-retrieved documentation snapshot.

", + "documentation":"

[Required] The string identifier of the associated RestApi.

", "location":"uri", "locationName":"restapi_id" }, @@ -3528,7 +3765,7 @@ "members":{ "restApiId":{ "shape":"String", - "documentation":"

[Required] The identifier of an API of the to-be-retrieved documentation versions.

", + "documentation":"

[Required] The string identifier of the associated RestApi.

", "location":"uri", "locationName":"restapi_id" }, @@ -3588,7 +3825,7 @@ "members":{ "restApiId":{ "shape":"String", - "documentation":"

The identifier of the RestApi to be exported.

", + "documentation":"

The string identifier of the associated RestApi.

", "location":"uri", "locationName":"restapi_id" }, @@ -3618,6 +3855,53 @@ }, "documentation":"

Request a new export of a RestApi for a particular Stage.

" }, + "GetGatewayResponseRequest":{ + "type":"structure", + "required":[ + "restApiId", + "responseType" + ], + "members":{ + "restApiId":{ + "shape":"String", + "documentation":"

The string identifier of the associated RestApi.

", + "location":"uri", + "locationName":"restapi_id" + }, + "responseType":{ + "shape":"GatewayResponseType", + "documentation":"

The response type of the associated GatewayResponse. Valid values are

", + "location":"uri", + "locationName":"response_type" + } + }, + "documentation":"

Gets a GatewayResponse of a specified response type on the given RestApi.

" + }, + "GetGatewayResponsesRequest":{ + "type":"structure", + "required":["restApiId"], + "members":{ + "restApiId":{ + "shape":"String", + "documentation":"

The string identifier of the associated RestApi.

", + "location":"uri", + "locationName":"restapi_id" + }, + "position":{ + "shape":"String", + "documentation":"

The current pagination position in the paged result set. The GatewayResponse collection does not support pagination and the position does not apply here.

", + "location":"querystring", + "locationName":"position" + }, + "limit":{ + "shape":"NullableInteger", + "documentation":"

The maximum number of returned results per page. The GatewayResponses collection does not support pagination and the limit does not apply here.

", + "location":"querystring", + "locationName":"limit" + } + }, + "documentation":"

Gets the GatewayResponses collection on the given RestApi. If an API developer has not added any definitions for gateway responses, the result will be the Amazon API Gateway-generated default GatewayResponses collection for the supported response types.

" + }, "GetIntegrationRequest":{ "type":"structure", "required":[ @@ -3628,7 +3912,7 @@ "members":{ "restApiId":{ "shape":"String", - "documentation":"

Specifies a get integration request's API identifier.

", + "documentation":"

The string identifier of the associated RestApi.

", "location":"uri", "locationName":"restapi_id" }, @@ -3658,7 +3942,7 @@ "members":{ "restApiId":{ "shape":"String", - "documentation":"

Specifies a get integration response request's API identifier.

", + "documentation":"

The string identifier of the associated RestApi.

", "location":"uri", "locationName":"restapi_id" }, @@ -3693,7 +3977,7 @@ "members":{ "restApiId":{ "shape":"String", - "documentation":"

The RestApi identifier for the Method resource.

", + "documentation":"

The string identifier of the associated RestApi.

", "location":"uri", "locationName":"restapi_id" }, @@ -3723,7 +4007,7 @@ "members":{ "restApiId":{ "shape":"String", - "documentation":"

The RestApi identifier for the MethodResponse resource.

", + "documentation":"

The string identifier of the associated RestApi.

", "location":"uri", "locationName":"restapi_id" }, @@ -3785,7 +4069,7 @@ "members":{ "restApiId":{ "shape":"String", - "documentation":"

The ID of the RestApi under which the model exists.

", + "documentation":"

The string identifier of the associated RestApi.

", "location":"uri", "locationName":"restapi_id" }, @@ -3804,7 +4088,7 @@ "members":{ "restApiId":{ "shape":"String", - "documentation":"

The RestApi identifier.

", + "documentation":"

The string identifier of the associated RestApi.

", "location":"uri", "locationName":"restapi_id" }, @@ -3832,7 +4116,7 @@ "members":{ "restApiId":{ "shape":"String", - "documentation":"

[Required] The identifier of the RestApi to which the specified RequestValidator belongs.

", + "documentation":"

The string identifier of the associated RestApi.

", "location":"uri", "locationName":"restapi_id" }, @@ -3851,7 +4135,7 @@ "members":{ "restApiId":{ "shape":"String", - "documentation":"

[Required] The identifier of a RestApi to which the RequestValidators collection belongs.

", + "documentation":"

The string identifier of the associated RestApi.

", "location":"uri", "locationName":"restapi_id" }, @@ -3879,7 +4163,7 @@ "members":{ "restApiId":{ "shape":"String", - "documentation":"

The RestApi identifier for the resource.

", + "documentation":"

The string identifier of the associated RestApi.

", "location":"uri", "locationName":"restapi_id" }, @@ -3904,7 +4188,7 @@ "members":{ "restApiId":{ "shape":"String", - "documentation":"

The RestApi identifier for the Resource.

", + "documentation":"

The string identifier of the associated RestApi.

", "location":"uri", "locationName":"restapi_id" }, @@ -3970,7 +4254,7 @@ "members":{ "restApiId":{ "shape":"String", - "documentation":"

The identifier of the RestApi that the SDK will use.

", + "documentation":"

The string identifier of the associated RestApi.

", "location":"uri", "locationName":"restapi_id" }, @@ -3982,13 +4266,13 @@ }, "sdkType":{ "shape":"String", - "documentation":"

The language for the generated SDK. Currently javascript, android, and objectivec (for iOS) are supported.

", + "documentation":"

The language for the generated SDK. Currently java, javascript, android, objectivec (for iOS), swift (for iOS), and ruby are supported.

", "location":"uri", "locationName":"sdk_type" }, "parameters":{ "shape":"MapOfStringToString", - "documentation":"

A key-value map of query string parameters that specify properties of the SDK, depending on the requested sdkType. For sdkType of objectivec, a parameter named classPrefix is required. For sdkType of android, parameters named groupId, artifactId, artifactVersion, and invokerPackage are required.

", + "documentation":"

A string-to-string key-value map of query parameters that specify sdkType-dependent properties of the SDK. For sdkType of objectivec or swift, a parameter named classPrefix is required. For sdkType of android, parameters named groupId, artifactId, artifactVersion, and invokerPackage are required. For sdkType of java, parameters named serviceName and javaPackageName are required.
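
For illustration, a hedged sketch of requesting a java SDK with the two required parameters named above. The stage name, service name, and package name are placeholders, and the stageName member is assumed from the GetSdk resource path even though it does not appear in this hunk.

```java
import java.util.Map;
import software.amazon.awssdk.services.apigateway.ApiGatewayClient;
import software.amazon.awssdk.services.apigateway.model.GetSdkRequest;

public class GetSdkSketch {
    public static void main(String[] args) {
        ApiGatewayClient apigw = ApiGatewayClient.create();

        apigw.getSdk(GetSdkRequest.builder()
                .restApiId("o81lxisefl")            // example API id from the documentation above
                .stageName("prod")                  // assumed member; not shown in this hunk
                .sdkType("java")
                .parameters(Map.of(
                        "serviceName", "PetStore",                 // hypothetical values
                        "javaPackageName", "com.example.petstore"))
                .build());
    }
}
```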

", "location":"querystring" } }, @@ -4034,7 +4318,7 @@ "members":{ "restApiId":{ "shape":"String", - "documentation":"

The identifier of the RestApi resource for the Stage resource to get information about.

", + "documentation":"

The string identifier of the associated RestApi.

", "location":"uri", "locationName":"restapi_id" }, @@ -4053,7 +4337,7 @@ "members":{ "restApiId":{ "shape":"String", - "documentation":"

The stages' API identifiers.

", + "documentation":"

The string identifier of the associated RestApi.

", "location":"uri", "locationName":"restapi_id" }, @@ -4239,7 +4523,7 @@ "members":{ "restApiId":{ "shape":"String", - "documentation":"

[Required] The identifier of an API of the to-be-imported documentation parts.

", + "documentation":"

[Required] The string identifier of the associated RestApi.

", "location":"uri", "locationName":"restapi_id" }, @@ -4275,12 +4559,12 @@ }, "parameters":{ "shape":"MapOfStringToString", - "documentation":"

Custom header parameters as part of the request.

", + "documentation":"

A key-value map of context-specific query string parameters specifying the behavior of different API importing operations. The following shows operation-specific parameters and their supported values.

To exclude DocumentationParts from the import, set parameters as ignore=documentation.

To configure the endpoint type, set parameters as endpointConfigurationTypes=EDGE or endpointConfigurationTypes=REGIONAL. The default endpoint type is EDGE.

To handle imported basePath, set parameters as basePath=ignore, basePath=prepend or basePath=split.

For example, the AWS CLI command to exclude documentation from the imported API is:

aws apigateway import-rest-api --parameters ignore=documentation --body 'file:///path/to/imported-api-body.json'

The AWS CLI command to set the regional endpoint on the imported API is:

aws apigateway import-rest-api --parameters endpointConfigurationTypes=REGIONAL --body 'file:///path/to/imported-api-body.json'
", "location":"querystring" }, "body":{ "shape":"Blob", - "documentation":"

The POST request body containing external API definitions. Currently, only Swagger definition JSON files are supported.

" + "documentation":"

The POST request body containing external API definitions. Currently, only Swagger definition JSON files are supported. The maximum size of the API definition file is 2MB.

" } }, "documentation":"

A POST request to import an API to Amazon API Gateway using an input of an API definition file.

", @@ -4384,6 +4668,7 @@ }, "message":{"shape":"String"} }, + "documentation":"

The request exceeded the rate limit. Retry after the specified time period.

", "error":{"httpStatusCode":429}, "exception":true }, @@ -4427,6 +4712,14 @@ "type":"list", "member":{"shape":"DomainName"} }, + "ListOfEndpointType":{ + "type":"list", + "member":{"shape":"EndpointType"} + }, + "ListOfGatewayResponse":{ + "type":"list", + "member":{"shape":"GatewayResponse"} + }, "ListOfLong":{ "type":"list", "member":{"shape":"Long"} @@ -4484,6 +4777,13 @@ "type":"list", "member":{"shape":"UsagePlanKey"} }, + "LocationStatusType":{ + "type":"string", + "enum":[ + "DOCUMENTED", + "UNDOCUMENTED" + ] + }, "Long":{"type":"long"}, "MapOfHeaderValues":{ "type":"map", @@ -4668,7 +4968,7 @@ }, "name":{ "shape":"String", - "documentation":"

The name of the model.

" + "documentation":"

The name of the model. Must be an alphanumeric string.

" }, "description":{ "shape":"String", @@ -4691,7 +4991,7 @@ "position":{"shape":"String"}, "items":{ "shape":"ListOfModel", - "documentation":"

Gets the current Model resource in the collection.

", + "documentation":"

The current page of elements from this collection.

", "locationName":"item" } }, @@ -4702,6 +5002,7 @@ "members":{ "message":{"shape":"String"} }, + "documentation":"

The requested resource is not found. Make sure that the request URI is correct.

", "error":{"httpStatusCode":404}, "exception":true }, @@ -4746,6 +5047,40 @@ "value":{"shape":"MapOfMethodSnapshot"} }, "ProviderARN":{"type":"string"}, + "PutGatewayResponseRequest":{ + "type":"structure", + "required":[ + "restApiId", + "responseType" + ], + "members":{ + "restApiId":{ + "shape":"String", + "documentation":"

The string identifier of the associated RestApi.

", + "location":"uri", + "locationName":"restapi_id" + }, + "responseType":{ + "shape":"GatewayResponseType", + "documentation":"

The response type of the associated GatewayResponse. Valid values are

", + "location":"uri", + "locationName":"response_type" + }, + "statusCode":{ + "shape":"StatusCode", + "documentation":"The HTTP status code of the GatewayResponse." + }, + "responseParameters":{ + "shape":"MapOfStringToString", + "documentation":"

Response parameters (paths, query strings and headers) of the GatewayResponse as a string-to-string map of key-value pairs.

" + }, + "responseTemplates":{ + "shape":"MapOfStringToString", + "documentation":"

Response templates of the GatewayResponse as a string-to-string map of key-value pairs.

" + } + }, + "documentation":"

Creates a customization of a GatewayResponse of a specified response type and status code on the given RestApi.
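
A minimal, hedged sketch of such a customization, overriding the status code and body template for one response type. The chosen type, status code, and template are illustrative, and the client and method names are assumed from the SDK's codegen conventions.

```java
import java.util.Map;
import software.amazon.awssdk.services.apigateway.ApiGatewayClient;
import software.amazon.awssdk.services.apigateway.model.PutGatewayResponseRequest;

public class CustomizeGatewayResponseSketch {
    public static void main(String[] args) {
        ApiGatewayClient apigw = ApiGatewayClient.create();

        // Hypothetical customization: answer missing-authentication-token errors with a 404
        // and the same JSON message template used by the default responses shown earlier.
        apigw.putGatewayResponse(PutGatewayResponseRequest.builder()
                .restApiId("o81lxisefl")                      // example API id
                .responseType("MISSING_AUTHENTICATION_TOKEN") // string overload of GatewayResponseType
                .statusCode("404")
                .responseTemplates(Map.of("application/json",
                        "{\"message\":$context.error.messageString}"))
                .build());
    }
}
```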

" + }, "PutIntegrationRequest":{ "type":"structure", "required":[ @@ -4757,7 +5092,7 @@ "members":{ "restApiId":{ "shape":"String", - "documentation":"

Specifies a put integration request's API identifier.

", + "documentation":"

The string identifier of the associated RestApi.

", "location":"uri", "locationName":"restapi_id" }, @@ -4784,7 +5119,7 @@ }, "uri":{ "shape":"String", - "documentation":"

Specifies a put integration input's Uniform Resource Identifier (URI). When the integration type is HTTP or AWS, this field is required. For integration with Lambda as an AWS service proxy, this value is of the 'arn:aws:apigateway:<region>:lambda:path/2015-03-31/functions/<functionArn>/invocations' format.

" + "documentation":"

Specifies the integration's Uniform Resource Identifier (URI). For HTTP integrations, the URI must be a fully formed, encoded HTTP(S) URL according to the RFC-3986 specification. For AWS integrations, the URI should be of the form arn:aws:apigateway:{region}:{subdomain.service|service}:{path|action}/{service_api}. Region, subdomain and service are used to determine the right endpoint. For AWS services that use the Action= query string parameter, service_api should be a valid action for the desired service. For RESTful AWS service APIs, path is used to indicate that the remaining substring in the URI should be treated as the path to the resource, including the initial /.
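
As a concrete illustration of the AWS-type URI format, the hedged sketch below sets up a Lambda integration using the path-style form shown in the previous version of this description. The region, account id, function name, and resource id are placeholders, and the client and method names are assumed from the SDK's codegen conventions.

```java
import software.amazon.awssdk.services.apigateway.ApiGatewayClient;
import software.amazon.awssdk.services.apigateway.model.PutIntegrationRequest;

public class PutIntegrationSketch {
    public static void main(String[] args) {
        // Path-style Lambda invocation URI, following the
        // arn:aws:apigateway:{region}:{service}:{path|action}/{service_api} form.
        String lambdaUri = "arn:aws:apigateway:us-east-1:lambda:path/2015-03-31/functions/"
                + "arn:aws:lambda:us-east-1:123456789012:function:my-function/invocations";

        ApiGatewayClient apigw = ApiGatewayClient.create();
        apigw.putIntegration(PutIntegrationRequest.builder()
                .restApiId("o81lxisefl")       // example API id reused from the documentation above
                .resourceId("abc123")          // hypothetical resource id
                .httpMethod("GET")
                .type("AWS")                   // string overload of the IntegrationType enum
                .integrationHttpMethod("POST")
                .uri(lambdaUri)
                .build());
    }
}
```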

" }, "credentials":{ "shape":"String", @@ -4815,7 +5150,7 @@ "documentation":"

Specifies how to handle request payload content type conversions. Supported values are CONVERT_TO_BINARY and CONVERT_TO_TEXT, with the following behaviors:

If this property is not defined, the request payload will be passed through from the method request to integration request without modification, provided that the passthroughBehaviors is configured to support payload pass-through.

" } }, - "documentation":"

Represents a put integration request.

" + "documentation":"

Sets up a method's integration.

" }, "PutIntegrationResponseRequest":{ "type":"structure", @@ -4828,7 +5163,7 @@ "members":{ "restApiId":{ "shape":"String", - "documentation":"

Specifies a put integration response request's API identifier.

", + "documentation":"

The string identifier of the associated RestApi.

", "location":"uri", "locationName":"restapi_id" }, @@ -4880,7 +5215,7 @@ "members":{ "restApiId":{ "shape":"String", - "documentation":"

The RestApi identifier for the new Method resource.

", + "documentation":"

The string identifier of the associated RestApi.

", "location":"uri", "locationName":"restapi_id" }, @@ -4938,7 +5273,7 @@ "members":{ "restApiId":{ "shape":"String", - "documentation":"

The RestApi identifier for the Method resource.

", + "documentation":"

The string identifier of the associated RestApi.

", "location":"uri", "locationName":"restapi_id" }, @@ -4987,7 +5322,7 @@ "members":{ "restApiId":{ "shape":"String", - "documentation":"

The identifier of the RestApi to be updated.

", + "documentation":"

The string identifier of the associated RestApi.

", "location":"uri", "locationName":"restapi_id" }, @@ -5005,12 +5340,12 @@ }, "parameters":{ "shape":"MapOfStringToString", - "documentation":"

Custom headers supplied as part of the request.

", + "documentation":"

Custom header parameters as part of the request. For example, to exclude DocumentationParts from an imported API, set ignore=documentation as a parameters value, as in the AWS CLI command of aws apigateway import-rest-api --parameters ignore=documentation --body 'file:///path/to/imported-api-body.json'.

", "location":"querystring" }, "body":{ "shape":"Blob", - "documentation":"

The PUT request body containing external API definitions. Currently, only Swagger definition JSON files are supported.

" + "documentation":"

The PUT request body containing external API definitions. Currently, only Swagger definition JSON files are supported. The maximum size of the API definition file is 2MB.

" } }, "documentation":"

A PUT request to update an existing API, with external API definitions specified as the request body.

", @@ -5070,7 +5405,7 @@ "position":{"shape":"String"}, "items":{ "shape":"ListOfRequestValidator", - "documentation":"

The current page of RequestValidator resources in the RequestValidators collection.

", + "documentation":"

The current page of elements from this collection.

", "locationName":"item" } }, @@ -5108,7 +5443,7 @@ "position":{"shape":"String"}, "items":{ "shape":"ListOfResource", - "documentation":"

Gets the current Resource resource in the collection.

", + "documentation":"

The current page of elements from this collection.

", "locationName":"item" } }, @@ -5144,6 +5479,10 @@ "binaryMediaTypes":{ "shape":"ListOfString", "documentation":"

The list of binary media types supported by the RestApi. By default, the RestApi supports only UTF-8-encoded text payloads.

" + }, + "endpointConfiguration":{ + "shape":"EndpointConfiguration", + "documentation":"

The endpoint configuration of this RestApi showing the endpoint types of the API.

" } }, "documentation":"

Represents a REST API.

Create an API
" @@ -5154,7 +5493,7 @@ "position":{"shape":"String"}, "items":{ "shape":"ListOfRestApi", - "documentation":"

An array of links to the current page of RestApi resources.

", + "documentation":"

The current page of elements from this collection.

", "locationName":"item" } }, @@ -5237,7 +5576,7 @@ "position":{"shape":"String"}, "items":{ "shape":"ListOfSdkType", - "documentation":"

The set of SdkType items that comprise this view of the SdkTypes collection.

", + "documentation":"

The current page of elements from this collection.

", "locationName":"item" } }, @@ -5253,6 +5592,7 @@ }, "message":{"shape":"String"} }, + "documentation":"

The requested service is not available. For details see the accompanying error message. Retry after the specified time period.

", "error":{"httpStatusCode":503}, "exception":true, "fault":true @@ -5316,11 +5656,11 @@ "members":{ "restApiId":{ "shape":"String", - "documentation":"

A list of Stage resources that are associated with the ApiKey resource.

" + "documentation":"

The string identifier of the associated RestApi.

" }, "stageName":{ "shape":"String", - "documentation":"

The stage name in the RestApi that the stage key references.

" + "documentation":"

The stage name associated with the stage key.

" } }, "documentation":"

A reference to a unique stage identified in the format {restApiId}/{stage}.

" @@ -5330,7 +5670,7 @@ "members":{ "item":{ "shape":"ListOfStage", - "documentation":"

An individual Stage resource.

" + "documentation":"

The current page of elements from this collection.

" } }, "documentation":"

A list of Stage resources that are associated with the ApiKey resource.

Deploying API in Stages
" @@ -5360,7 +5700,7 @@ "members":{ "restApiId":{ "shape":"String", - "documentation":"

Specifies a test invoke authorizer request's RestApi identifier.

", + "documentation":"

The string identifier of the associated RestApi.

", "location":"uri", "locationName":"restapi_id" }, @@ -5434,7 +5774,7 @@ "members":{ "restApiId":{ "shape":"String", - "documentation":"

Specifies a test invoke method request's API identifier.

", + "documentation":"

The string identifier of the associated RestApi.

", "location":"uri", "locationName":"restapi_id" }, @@ -5524,6 +5864,7 @@ }, "message":{"shape":"String"} }, + "documentation":"

The request has reached its throttling limit. Retry after the specified time period.

", "error":{"httpStatusCode":429}, "exception":true }, @@ -5540,6 +5881,7 @@ "members":{ "message":{"shape":"String"} }, + "documentation":"

The request is denied because the caller has insufficient permissions.

", "error":{"httpStatusCode":401}, "exception":true }, @@ -5579,7 +5921,7 @@ "members":{ "restApiId":{ "shape":"String", - "documentation":"

The RestApi identifier for the Authorizer resource.

", + "documentation":"

The string identifier of the associated RestApi.

", "location":"uri", "locationName":"restapi_id" }, @@ -5648,7 +5990,7 @@ "members":{ "restApiId":{ "shape":"String", - "documentation":"

The replacement identifier of the RestApi resource for the Deployment resource to change information about.

", + "documentation":"

The string identifier of the associated RestApi.

", "location":"uri", "locationName":"restapi_id" }, @@ -5674,7 +6016,7 @@ "members":{ "restApiId":{ "shape":"String", - "documentation":"

[Required] The identifier of an API of the to-be-updated documentation part.

", + "documentation":"

[Required] The string identifier of the associated RestApi.

", "location":"uri", "locationName":"restapi_id" }, @@ -5700,7 +6042,7 @@ "members":{ "restApiId":{ "shape":"String", - "documentation":"

[Required] The identifier of an API of the to-be-updated documentation version.

", + "documentation":"

[Required] The string identifier of the associated RestApi.

", "location":"uri", "locationName":"restapi_id" }, @@ -5734,6 +6076,32 @@ }, "documentation":"

A request to change information about the DomainName resource.

" }, + "UpdateGatewayResponseRequest":{ + "type":"structure", + "required":[ + "restApiId", + "responseType" + ], + "members":{ + "restApiId":{ + "shape":"String", + "documentation":"

The string identifier of the associated RestApi.

", + "location":"uri", + "locationName":"restapi_id" + }, + "responseType":{ + "shape":"GatewayResponseType", + "documentation":"

The response type of the associated GatewayResponse. Valid values are

", + "location":"uri", + "locationName":"response_type" + }, + "patchOperations":{ + "shape":"ListOfPatchOperation", + "documentation":"

A list of update operations to be applied to the specified resource and in the order specified in this list.

" + } + }, + "documentation":"

Updates a GatewayResponse of a specified response type on the given RestApi.

" + }, "UpdateIntegrationRequest":{ "type":"structure", "required":[ @@ -5744,7 +6112,7 @@ "members":{ "restApiId":{ "shape":"String", - "documentation":"

Represents an update integration request's API identifier.

", + "documentation":"

The string identifier of the associated RestApi.

", "location":"uri", "locationName":"restapi_id" }, @@ -5778,7 +6146,7 @@ "members":{ "restApiId":{ "shape":"String", - "documentation":"

Specifies an update integration response request's API identifier.

", + "documentation":"

The string identifier of the associated RestApi.

", "location":"uri", "locationName":"restapi_id" }, @@ -5817,7 +6185,7 @@ "members":{ "restApiId":{ "shape":"String", - "documentation":"

The RestApi identifier for the Method resource.

", + "documentation":"

The string identifier of the associated RestApi.

", "location":"uri", "locationName":"restapi_id" }, @@ -5851,7 +6219,7 @@ "members":{ "restApiId":{ "shape":"String", - "documentation":"

The RestApi identifier for the MethodResponse resource.

", + "documentation":"

The string identifier of the associated RestApi.

", "location":"uri", "locationName":"restapi_id" }, @@ -5889,7 +6257,7 @@ "members":{ "restApiId":{ "shape":"String", - "documentation":"

The RestApi identifier under which the model exists.

", + "documentation":"

The string identifier of the associated RestApi.

", "location":"uri", "locationName":"restapi_id" }, @@ -5915,7 +6283,7 @@ "members":{ "restApiId":{ "shape":"String", - "documentation":"

[Required] The identifier of the RestApi for which the given RequestValidator is updated.

", + "documentation":"

The string identifier of the associated RestApi.

", "location":"uri", "locationName":"restapi_id" }, @@ -5941,7 +6309,7 @@ "members":{ "restApiId":{ "shape":"String", - "documentation":"

The RestApi identifier for the Resource resource.

", + "documentation":"

The string identifier of the associated RestApi.

", "location":"uri", "locationName":"restapi_id" }, @@ -5964,7 +6332,7 @@ "members":{ "restApiId":{ "shape":"String", - "documentation":"

The ID of the RestApi you want to update.

", + "documentation":"

The string identifier of the associated RestApi.

", "location":"uri", "locationName":"restapi_id" }, @@ -5984,7 +6352,7 @@ "members":{ "restApiId":{ "shape":"String", - "documentation":"

The identifier of the RestApi resource for the Stage resource to change information about.

", + "documentation":"

The string identifier of the associated RestApi.

", "location":"uri", "locationName":"restapi_id" }, @@ -6042,7 +6410,7 @@ "documentation":"

A list of update operations to be applied to the specified resource and in the order specified in this list.

" } }, - "documentation":"

The PATCH request to grant a temporary extension to the reamining quota of a usage plan associated with a specified API key.

" + "documentation":"

The PATCH request to grant a temporary extension to the remaining quota of a usage plan associated with a specified API key.

" }, "Usage":{ "type":"structure", @@ -6130,7 +6498,7 @@ "position":{"shape":"String"}, "items":{ "shape":"ListOfUsagePlanKey", - "documentation":"

Gets the current item of the usage plan keys collection.

", + "documentation":"

The current page of elements from this collection.

", "locationName":"item" } }, @@ -6142,7 +6510,7 @@ "position":{"shape":"String"}, "items":{ "shape":"ListOfUsagePlan", - "documentation":"

Gets the current item when enumerating the collection of UsagePlan.

", + "documentation":"

The current page of elements from this collection.

", "locationName":"item" } }, diff --git a/services/applicationautoscaling/src/main/resources/codegen-resources/service-2.json b/services/applicationautoscaling/src/main/resources/codegen-resources/service-2.json index 679c1b1fbe2a..09b992a52292 100644 --- a/services/applicationautoscaling/src/main/resources/codegen-resources/service-2.json +++ b/services/applicationautoscaling/src/main/resources/codegen-resources/service-2.json @@ -6,6 +6,7 @@ "jsonVersion":"1.1", "protocol":"json", "serviceFullName":"Application Auto Scaling", + "serviceId":"Application Auto Scaling", "signatureVersion":"v4", "signingName":"application-autoscaling", "targetPrefix":"AnyScaleFrontendService", @@ -28,6 +29,22 @@ ], "documentation":"

Deletes the specified Application Auto Scaling scaling policy.

Deleting a policy deletes the underlying alarm action, but does not delete the CloudWatch alarm associated with the scaling policy, even if it no longer has an associated action.

To create a scaling policy or update an existing one, see PutScalingPolicy.

" }, + "DeleteScheduledAction":{ + "name":"DeleteScheduledAction", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"DeleteScheduledActionRequest"}, + "output":{"shape":"DeleteScheduledActionResponse"}, + "errors":[ + {"shape":"ValidationException"}, + {"shape":"ObjectNotFoundException"}, + {"shape":"ConcurrentUpdateException"}, + {"shape":"InternalServiceException"} + ], + "documentation":"

Deletes the specified Application Auto Scaling scheduled action.

" + }, "DeregisterScalableTarget":{ "name":"DeregisterScalableTarget", "http":{ @@ -91,7 +108,23 @@ {"shape":"ConcurrentUpdateException"}, {"shape":"InternalServiceException"} ], - "documentation":"

Provides descriptive information about the scaling policies in the specified namespace.

You can filter the results using the ResourceId, ScalableDimension, and PolicyNames parameters.

To create a scaling policy or update an existing one, see PutScalingPolicy. If you are no longer using a scaling policy, you can delete it using DeleteScalingPolicy.

" + "documentation":"

Describes the scaling policies for the specified service namespace.

You can filter the results using the ResourceId, ScalableDimension, and PolicyNames parameters.

To create a scaling policy or update an existing one, see PutScalingPolicy. If you are no longer using a scaling policy, you can delete it using DeleteScalingPolicy.

" + }, + "DescribeScheduledActions":{ + "name":"DescribeScheduledActions", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"DescribeScheduledActionsRequest"}, + "output":{"shape":"DescribeScheduledActionsResponse"}, + "errors":[ + {"shape":"ValidationException"}, + {"shape":"InvalidNextTokenException"}, + {"shape":"ConcurrentUpdateException"}, + {"shape":"InternalServiceException"} + ], + "documentation":"

Describes the scheduled actions for the specified service namespace.

You can filter the results using the ResourceId, ScalableDimension, and ScheduledActionNames parameters.

To create a scheduled action or update an existing one, see PutScheduledAction. If you are no longer using a scheduled action, you can delete it using DeleteScheduledAction.

" }, "PutScalingPolicy":{ "name":"PutScalingPolicy", @@ -111,6 +144,23 @@ ], "documentation":"

Creates or updates a policy for an Application Auto Scaling scalable target.

Each scalable target is identified by a service namespace, resource ID, and scalable dimension. A scaling policy applies to the scalable target identified by those three attributes. You cannot create a scaling policy without first registering a scalable target using RegisterScalableTarget.

To update a policy, specify its policy name and the parameters that you want to change. Any parameters that you don't specify are not changed by this update request.

You can view the scaling policies for a service namespace using DescribeScalingPolicies. If you are no longer using a scaling policy, you can delete it using DeleteScalingPolicy.

" }, + "PutScheduledAction":{ + "name":"PutScheduledAction", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"PutScheduledActionRequest"}, + "output":{"shape":"PutScheduledActionResponse"}, + "errors":[ + {"shape":"ValidationException"}, + {"shape":"LimitExceededException"}, + {"shape":"ObjectNotFoundException"}, + {"shape":"ConcurrentUpdateException"}, + {"shape":"InternalServiceException"} + ], + "documentation":"

Creates or updates a scheduled action for an Application Auto Scaling scalable target.

Each scalable target is identified by a service namespace, resource ID, and scalable dimension. A scheduled action applies to the scalable target identified by those three attributes. You cannot create a scheduled action without first registering a scalable target using RegisterScalableTarget.

To update an action, specify its name and the parameters that you want to change. If you don't specify start and end times, the old values are deleted. Any other parameters that you don't specify are not changed by this update request.

You can view the scheduled actions using DescribeScheduledActions. If you are no longer using a scheduled action, you can delete it using DeleteScheduledAction.
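
A minimal, hedged sketch of creating one of these scheduled actions against a previously registered DynamoDB table, using the shapes added in this change. The table name, capacities, and cron expression are illustrative, and the client and method names are assumed from the SDK's codegen conventions.

```java
import software.amazon.awssdk.services.applicationautoscaling.ApplicationAutoScalingClient;
import software.amazon.awssdk.services.applicationautoscaling.model.PutScheduledActionRequest;
import software.amazon.awssdk.services.applicationautoscaling.model.ScalableTargetAction;

public class ScheduledActionSketch {
    public static void main(String[] args) {
        ApplicationAutoScalingClient aas = ApplicationAutoScalingClient.create();

        // Raise a DynamoDB table's write-capacity bounds every evening at 18:00 UTC.
        // Schedule expressions also accept at(...) and rate(...) forms.
        aas.putScheduledAction(PutScheduledActionRequest.builder()
                .serviceNamespace("dynamodb")                        // string overload of the enum
                .scheduledActionName("scale-up-writes")              // hypothetical name
                .resourceId("table/my-table")                        // hypothetical table
                .scalableDimension("dynamodb:table:WriteCapacityUnits")
                .schedule("cron(0 18 * * ? *)")
                .scalableTargetAction(ScalableTargetAction.builder()
                        .minCapacity(50)
                        .maxCapacity(200)
                        .build())
                .build());
    }
}
```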

" + }, "RegisterScalableTarget":{ "name":"RegisterScalableTarget", "http":{ @@ -218,11 +268,11 @@ }, "ResourceId":{ "shape":"ResourceIdMaxLen1600", - "documentation":"

The identifier of the resource associated with the scalable target. This string consists of the resource type and unique identifier.

" + "documentation":"

The identifier of the resource associated with the scalable target. This string consists of the resource type and unique identifier.

" }, "ScalableDimension":{ "shape":"ScalableDimension", - "documentation":"

The scalable dimension. This string consists of the service namespace, resource type, and scaling property.

" + "documentation":"

The scalable dimension. This string consists of the service namespace, resource type, and scaling property.

" } } }, @@ -231,6 +281,37 @@ "members":{ } }, + "DeleteScheduledActionRequest":{ + "type":"structure", + "required":[ + "ServiceNamespace", + "ScheduledActionName", + "ResourceId" + ], + "members":{ + "ServiceNamespace":{ + "shape":"ServiceNamespace", + "documentation":"

The namespace of the AWS service. For more information, see AWS Service Namespaces in the Amazon Web Services General Reference.

" + }, + "ScheduledActionName":{ + "shape":"ResourceIdMaxLen1600", + "documentation":"

The name of the scheduled action.

" + }, + "ResourceId":{ + "shape":"ResourceIdMaxLen1600", + "documentation":"

The identifier of the resource associated with the scheduled action. This string consists of the resource type and unique identifier.

" + }, + "ScalableDimension":{ + "shape":"ScalableDimension", + "documentation":"

The scalable dimension. This string consists of the service namespace, resource type, and scaling property.

" + } + } + }, + "DeleteScheduledActionResponse":{ + "type":"structure", + "members":{ + } + }, "DeregisterScalableTargetRequest":{ "type":"structure", "required":[ @@ -245,11 +326,11 @@ }, "ResourceId":{ "shape":"ResourceIdMaxLen1600", - "documentation":"

The identifier of the resource associated with the scalable target. This string consists of the resource type and unique identifier.

" + "documentation":"

The identifier of the resource associated with the scalable target. This string consists of the resource type and unique identifier.

" }, "ScalableDimension":{ "shape":"ScalableDimension", - "documentation":"

The scalable dimension associated with the scalable target. This string consists of the service namespace, resource type, and scaling property.

" + "documentation":"

The scalable dimension associated with the scalable target. This string consists of the service namespace, resource type, and scaling property.

" } } }, @@ -268,11 +349,11 @@ }, "ResourceIds":{ "shape":"ResourceIdsMaxLen1600", - "documentation":"

The identifier of the resource associated with the scalable target. This string consists of the resource type and unique identifier. If you specify a scalable dimension, you must also specify a resource ID.

" + "documentation":"

The identifier of the resource associated with the scalable target. This string consists of the resource type and unique identifier. If you specify a scalable dimension, you must also specify a resource ID.

" }, "ScalableDimension":{ "shape":"ScalableDimension", - "documentation":"

The scalable dimension associated with the scalable target. This string consists of the service namespace, resource type, and scaling property. If you specify a scalable dimension, you must also specify a resource ID.

" + "documentation":"

The scalable dimension associated with the scalable target. This string consists of the service namespace, resource type, and scaling property. If you specify a scalable dimension, you must also specify a resource ID.

" }, "MaxResults":{ "shape":"MaxResults", @@ -307,11 +388,11 @@ }, "ResourceId":{ "shape":"ResourceIdMaxLen1600", - "documentation":"

The identifier of the resource associated with the scaling activity. This string consists of the resource type and unique identifier. If you specify a scalable dimension, you must also specify a resource ID.

" + "documentation":"

The identifier of the resource associated with the scaling activity. This string consists of the resource type and unique identifier. If you specify a scalable dimension, you must also specify a resource ID.

" }, "ScalableDimension":{ "shape":"ScalableDimension", - "documentation":"

The scalable dimension. This string consists of the service namespace, resource type, and scaling property. If you specify a scalable dimension, you must also specify a resource ID.

" + "documentation":"

The scalable dimension. This string consists of the service namespace, resource type, and scaling property. If you specify a scalable dimension, you must also specify a resource ID.

" }, "MaxResults":{ "shape":"MaxResults", @@ -350,11 +431,11 @@ }, "ResourceId":{ "shape":"ResourceIdMaxLen1600", - "documentation":"

The identifier of the resource associated with the scaling policy. This string consists of the resource type and unique identifier. If you specify a scalable dimension, you must also specify a resource ID.

" + "documentation":"

The identifier of the resource associated with the scaling policy. This string consists of the resource type and unique identifier. If you specify a scalable dimension, you must also specify a resource ID.

" }, "ScalableDimension":{ "shape":"ScalableDimension", - "documentation":"

The scalable dimension. This string consists of the service namespace, resource type, and scaling property. If you specify a scalable dimension, you must also specify a resource ID.

" + "documentation":"

The scalable dimension. This string consists of the service namespace, resource type, and scaling property. If you specify a scalable dimension, you must also specify a resource ID.

" }, "MaxResults":{ "shape":"MaxResults", @@ -371,7 +452,7 @@ "members":{ "ScalingPolicies":{ "shape":"ScalingPolicies", - "documentation":"

A list of scaling policy objects.

" + "documentation":"

Information about the scaling policies.

" }, "NextToken":{ "shape":"XmlString", @@ -379,6 +460,50 @@ } } }, + "DescribeScheduledActionsRequest":{ + "type":"structure", + "required":["ServiceNamespace"], + "members":{ + "ScheduledActionNames":{ + "shape":"ResourceIdsMaxLen1600", + "documentation":"

The names of the scheduled actions to describe.

" + }, + "ServiceNamespace":{ + "shape":"ServiceNamespace", + "documentation":"

The namespace of the AWS service. For more information, see AWS Service Namespaces in the Amazon Web Services General Reference.

" + }, + "ResourceId":{ + "shape":"ResourceIdMaxLen1600", + "documentation":"

The identifier of the resource associated with the scheduled action. This string consists of the resource type and unique identifier. If you specify a scalable dimension, you must also specify a resource ID.

" + }, + "ScalableDimension":{ + "shape":"ScalableDimension", + "documentation":"

The scalable dimension. This string consists of the service namespace, resource type, and scaling property. If you specify a scalable dimension, you must also specify a resource ID.

" + }, + "MaxResults":{ + "shape":"MaxResults", + "documentation":"

The maximum number of scheduled action results. This value can be between 1 and 50. The default value is 50.

If this parameter is used, the operation returns up to MaxResults results at a time, along with a NextToken value. To get the next set of results, include the NextToken value in a subsequent call. If this parameter is not used, the operation returns up to 50 results and a NextToken value, if applicable.

" + }, + "NextToken":{ + "shape":"XmlString", + "documentation":"

The token for the next set of results.

" + } + } + }, + "DescribeScheduledActionsResponse":{ + "type":"structure", + "members":{ + "ScheduledActions":{ + "shape":"ScheduledActions", + "documentation":"

Information about the scheduled actions.

" + }, + "NextToken":{ + "shape":"XmlString", + "documentation":"

The token required to get the next set of results. This value is null if there are no more results to return.

" + } + } + }, + "DisableScaleIn":{"type":"boolean"}, "ErrorMessage":{"type":"string"}, "FailedResourceAccessException":{ "type":"structure", @@ -462,7 +587,13 @@ "type":"string", "enum":[ "DynamoDBReadCapacityUtilization", - "DynamoDBWriteCapacityUtilization" + "DynamoDBWriteCapacityUtilization", + "ALBRequestCountPerTarget", + "RDSReaderAverageCPUUtilization", + "RDSReaderAverageDatabaseConnections", + "EC2SpotFleetRequestAverageCPUUtilization", + "EC2SpotFleetRequestAverageNetworkIn", + "EC2SpotFleetRequestAverageNetworkOut" ] }, "MetricUnit":{"type":"string"}, @@ -494,11 +625,11 @@ "members":{ "PredefinedMetricType":{ "shape":"MetricType", - "documentation":"

The metric type.

" + "documentation":"

The metric type. The ALBRequestCountPerTarget metric type applies only to Spot fleet requests.

" }, "ResourceLabel":{ "shape":"ResourceLabel", - "documentation":"

Reserved for future use.

" + "documentation":"

Identifies the resource associated with the metric type. You can't specify a resource label unless the metric type is ALBRequestCountPerTarget and there is a target group attached to the Spot fleet request.

The format is app/<load-balancer-name>/<load-balancer-id>/targetgroup/<target-group-name>/<target-group-id>, where app/<load-balancer-name>/<load-balancer-id> is the final portion of the load balancer ARN and targetgroup/<target-group-name>/<target-group-id> is the final portion of the target group ARN.

" } }, "documentation":"

Configures a predefined metric for a target tracking policy.

" @@ -522,11 +653,11 @@ }, "ResourceId":{ "shape":"ResourceIdMaxLen1600", - "documentation":"

The identifier of the resource associated with the scaling policy. This string consists of the resource type and unique identifier.

" + "documentation":"

The identifier of the resource associated with the scaling policy. This string consists of the resource type and unique identifier.

" }, "ScalableDimension":{ "shape":"ScalableDimension", - "documentation":"

The scalable dimension. This string consists of the service namespace, resource type, and scaling property.

" + "documentation":"

The scalable dimension. This string consists of the service namespace, resource type, and scaling property.

" }, "PolicyType":{ "shape":"PolicyType", @@ -556,6 +687,53 @@ } } }, + "PutScheduledActionRequest":{ + "type":"structure", + "required":[ + "ServiceNamespace", + "ScheduledActionName", + "ResourceId" + ], + "members":{ + "ServiceNamespace":{ + "shape":"ServiceNamespace", + "documentation":"

The namespace of the AWS service. For more information, see AWS Service Namespaces in the Amazon Web Services General Reference.

" + }, + "Schedule":{ + "shape":"ResourceIdMaxLen1600", + "documentation":"

The schedule for this action. The following formats are supported: at expressions (at(yyyy-mm-ddThh:mm:ss)), rate expressions (rate(value unit)), and cron expressions (cron(fields)).

At expressions are useful for one-time schedules. Specify the time, in UTC.

For rate expressions, value is a positive integer and unit is minute | minutes | hour | hours | day | days.

For more information about cron expressions, see Cron.

" + }, + "ScheduledActionName":{ + "shape":"ScheduledActionName", + "documentation":"

The name of the scheduled action.

" + }, + "ResourceId":{ + "shape":"ResourceIdMaxLen1600", + "documentation":"

The identifier of the resource associated with the scheduled action. This string consists of the resource type and unique identifier.

" + }, + "ScalableDimension":{ + "shape":"ScalableDimension", + "documentation":"

The scalable dimension. This string consists of the service namespace, resource type, and scaling property.

" + }, + "StartTime":{ + "shape":"TimestampType", + "documentation":"

The date and time for the scheduled action to start.

" + }, + "EndTime":{ + "shape":"TimestampType", + "documentation":"

The date and time for the scheduled action to end.

" + }, + "ScalableTargetAction":{ + "shape":"ScalableTargetAction", + "documentation":"

The new minimum and maximum capacity. You can set both values or just one. During the scheduled time, if the current capacity is below the minimum capacity, Application Auto Scaling scales out to the minimum capacity. If the current capacity is above the maximum capacity, Application Auto Scaling scales in to the maximum capacity.

" + } + } + }, + "PutScheduledActionResponse":{ + "type":"structure", + "members":{ + } + }, "RegisterScalableTargetRequest":{ "type":"structure", "required":[ @@ -570,11 +748,11 @@ }, "ResourceId":{ "shape":"ResourceIdMaxLen1600", - "documentation":"

The identifier of the resource associated with the scalable target. This string consists of the resource type and unique identifier.

" + "documentation":"

The identifier of the resource associated with the scalable target. This string consists of the resource type and unique identifier.

" }, "ScalableDimension":{ "shape":"ScalableDimension", - "documentation":"

The scalable dimension associated with the scalable target. This string consists of the service namespace, resource type, and scaling property.

" + "documentation":"

The scalable dimension associated with the scalable target. This string consists of the service namespace, resource type, and scaling property.

" }, "MinCapacity":{ "shape":"ResourceCapacity", @@ -586,7 +764,7 @@ }, "RoleARN":{ "shape":"ResourceIdMaxLen1600", - "documentation":"

The ARN of an IAM role that allows Application Auto Scaling to modify the scalable target on your behalf. This parameter is required when you register a scalable target and optional when you update one.

" + "documentation":"

The ARN of an IAM role that allows Application Auto Scaling to modify the scalable target on your behalf.

With Amazon RDS resources, permissions are granted using a service-linked role. For more information, see Service-Linked Roles for Application Auto Scaling.

For resources that are not supported using a service-linked role, this parameter is required when you register a scalable target and optional when you update one.

" } } }, @@ -625,7 +803,8 @@ "dynamodb:table:ReadCapacityUnits", "dynamodb:table:WriteCapacityUnits", "dynamodb:index:ReadCapacityUnits", - "dynamodb:index:WriteCapacityUnits" + "dynamodb:index:WriteCapacityUnits", + "rds:cluster:ReadReplicaCount" ] }, "ScalableTarget":{ @@ -646,11 +825,11 @@ }, "ResourceId":{ "shape":"ResourceIdMaxLen1600", - "documentation":"

The identifier of the resource associated with the scalable target. This string consists of the resource type and unique identifier.

" + "documentation":"

The identifier of the resource associated with the scalable target. This string consists of the resource type and unique identifier.

" }, "ScalableDimension":{ "shape":"ScalableDimension", - "documentation":"

The scalable dimension associated with the scalable target. This string consists of the service namespace, resource type, and scaling property.

" + "documentation":"

The scalable dimension associated with the scalable target. This string consists of the service namespace, resource type, and scaling property.

" }, "MinCapacity":{ "shape":"ResourceCapacity", @@ -671,6 +850,20 @@ }, "documentation":"

Represents a scalable target.

" }, + "ScalableTargetAction":{ + "type":"structure", + "members":{ + "MinCapacity":{ + "shape":"ResourceCapacity", + "documentation":"

The minimum capacity.

" + }, + "MaxCapacity":{ + "shape":"ResourceCapacity", + "documentation":"

The maximum capacity.

" + } + }, + "documentation":"

Represents the minimum and maximum capacity for a scheduled action.

" + }, "ScalableTargets":{ "type":"list", "member":{"shape":"ScalableTarget"} @@ -702,11 +895,11 @@ }, "ResourceId":{ "shape":"ResourceIdMaxLen1600", - "documentation":"

The identifier of the resource associated with the scaling activity. This string consists of the resource type and unique identifier.

" + "documentation":"

The identifier of the resource associated with the scaling activity. This string consists of the resource type and unique identifier.

" }, "ScalableDimension":{ "shape":"ScalableDimension", - "documentation":"

The scalable dimension. This string consists of the service namespace, resource type, and scaling property.

" + "documentation":"

The scalable dimension. This string consists of the service namespace, resource type, and scaling property.

" }, "Description":{ "shape":"XmlString", @@ -781,11 +974,11 @@ }, "ResourceId":{ "shape":"ResourceIdMaxLen1600", - "documentation":"

The identifier of the resource associated with the scaling policy. This string consists of the resource type and unique identifier.

" + "documentation":"

The identifier of the resource associated with the scaling policy. This string consists of the resource type and unique identifier.

" }, "ScalableDimension":{ "shape":"ScalableDimension", - "documentation":"

The scalable dimension. This string consists of the service namespace, resource type, and scaling property.

" + "documentation":"

The scalable dimension. This string consists of the service namespace, resource type, and scaling property.

" }, "PolicyType":{ "shape":"PolicyType", @@ -810,6 +1003,70 @@ }, "documentation":"

Represents a scaling policy.

" }, + "ScheduledAction":{ + "type":"structure", + "required":[ + "ScheduledActionName", + "ScheduledActionARN", + "ServiceNamespace", + "Schedule", + "ResourceId", + "CreationTime" + ], + "members":{ + "ScheduledActionName":{ + "shape":"ScheduledActionName", + "documentation":"

The name of the scheduled action.

" + }, + "ScheduledActionARN":{ + "shape":"ResourceIdMaxLen1600", + "documentation":"

The Amazon Resource Name (ARN) of the scheduled action.

" + }, + "ServiceNamespace":{ + "shape":"ServiceNamespace", + "documentation":"

The namespace of the AWS service. For more information, see AWS Service Namespaces in the Amazon Web Services General Reference.

" + }, + "Schedule":{ + "shape":"ResourceIdMaxLen1600", + "documentation":"

The schedule for this action. The following formats are supported: at expressions (at(yyyy-mm-ddThh:mm:ss)), rate expressions (rate(value unit)), and cron expressions (cron(fields)).

At expressions are useful for one-time schedules. Specify the time, in UTC.

For rate expressions, value is a positive integer and unit is minute | minutes | hour | hours | day | days.

For more information about cron expressions, see Cron.

" + }, + "ResourceId":{ + "shape":"ResourceIdMaxLen1600", + "documentation":"

The identifier of the resource associated with the scheduled action. This string consists of the resource type and unique identifier.

" + }, + "ScalableDimension":{ + "shape":"ScalableDimension", + "documentation":"

The scalable dimension. This string consists of the service namespace, resource type, and scaling property.

" + }, + "StartTime":{ + "shape":"TimestampType", + "documentation":"

The date and time that the action is scheduled to begin.

" + }, + "EndTime":{ + "shape":"TimestampType", + "documentation":"

The date and time that the action is scheduled to end.

" + }, + "ScalableTargetAction":{ + "shape":"ScalableTargetAction", + "documentation":"

The new minimum and maximum capacity. You can set both values or just one. During the scheduled time, if the current capacity is below the minimum capacity, Application Auto Scaling scales out to the minimum capacity. If the current capacity is above the maximum capacity, Application Auto Scaling scales in to the maximum capacity.

" + }, + "CreationTime":{ + "shape":"TimestampType", + "documentation":"

The date and time that the scheduled action was created.

" + } + }, + "documentation":"

Represents a scheduled action.

" + }, + "ScheduledActionName":{ + "type":"string", + "max":256, + "min":1, + "pattern":"(?!((^[ ]+.*)|(.*([\\u0000-\\u001f]|[\\u007f-\\u009f]|[:/|])+.*)|(.*[ ]+$))).+" + }, + "ScheduledActions":{ + "type":"list", + "member":{"shape":"ScheduledAction"} + }, "ServiceNamespace":{ "type":"string", "enum":[ @@ -817,7 +1074,8 @@ "elasticmapreduce", "ec2", "appstream", - "dynamodb" + "dynamodb", + "rds" ] }, "StepAdjustment":{ @@ -892,6 +1150,10 @@ "ScaleInCooldown":{ "shape":"Cooldown", "documentation":"

The amount of time, in seconds, after a scale in activity completes before another scale in activity can start.

The cooldown period is used to block subsequent scale in requests until it has expired. The intention is to scale in conservatively to protect your application's availability. However, if another alarm triggers a scale out policy during the cooldown period after a scale-in, Application Auto Scaling scales out your scalable target immediately.

" + }, + "DisableScaleIn":{ + "shape":"DisableScaleIn", + "documentation":"

Indicates whether scale in by the target tracking policy is disabled. If the value is true, scale in is disabled and the target tracking policy won't remove capacity from the scalable resource. Otherwise, scale in is enabled and the target tracking policy can remove capacity from the scalable resource. The default value is false.
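
A hedged sketch of how this new flag might be set when attaching a target tracking policy via PutScalingPolicy. The metric type and scalable dimension strings come from the enums in this file; the policy name, resource id, and target value are illustrative, and the client and method names are assumed from the SDK's codegen conventions.

```java
import software.amazon.awssdk.services.applicationautoscaling.ApplicationAutoScalingClient;
import software.amazon.awssdk.services.applicationautoscaling.model.PredefinedMetricSpecification;
import software.amazon.awssdk.services.applicationautoscaling.model.PutScalingPolicyRequest;
import software.amazon.awssdk.services.applicationautoscaling.model.TargetTrackingScalingPolicyConfiguration;

public class DisableScaleInSketch {
    public static void main(String[] args) {
        ApplicationAutoScalingClient aas = ApplicationAutoScalingClient.create();

        aas.putScalingPolicy(PutScalingPolicyRequest.builder()
                .policyName("track-write-utilization")               // hypothetical name
                .serviceNamespace("dynamodb")
                .resourceId("table/my-table")                        // hypothetical table
                .scalableDimension("dynamodb:table:WriteCapacityUnits")
                .policyType("TargetTrackingScaling")
                .targetTrackingScalingPolicyConfiguration(
                        TargetTrackingScalingPolicyConfiguration.builder()
                                .targetValue(70.0)
                                .predefinedMetricSpecification(PredefinedMetricSpecification.builder()
                                        .predefinedMetricType("DynamoDBWriteCapacityUtilization")
                                        .build())
                                .disableScaleIn(true)  // the new flag: never remove capacity automatically
                                .build())
                .build());
    }
}
```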

" } }, "documentation":"

Represents a target tracking scaling policy configuration.
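As a rough illustration of the ScaleInCooldown and DisableScaleIn members above, the sketch below builds a target tracking configuration with the AWS SDK for Java 2.x. It assumes the generated Application Auto Scaling model class and builder methods mirror the shape and member names in this file; the numeric values are hypothetical.

```java
import software.amazon.awssdk.services.applicationautoscaling.model.TargetTrackingScalingPolicyConfiguration;

// Hypothetical target tracking configuration: 60-second cooldowns, scale in disabled.
TargetTrackingScalingPolicyConfiguration config =
        TargetTrackingScalingPolicyConfiguration.builder()
                .targetValue(70.0)     // target metric value (hypothetical)
                .scaleOutCooldown(60)  // seconds after a scale out before another scale out
                .scaleInCooldown(60)   // seconds after a scale in before another scale in
                .disableScaleIn(true)  // the policy will never remove capacity
                // a predefined or customized metric specification would also be set here
                .build();
```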

" @@ -910,5 +1172,5 @@ "pattern":"[\\u0020-\\uD7FF\\uE000-\\uFFFD\\uD800\\uDC00-\\uDBFF\\uDFFF\\r\\n\\t]*" } }, - "documentation":"

With Application Auto Scaling, you can automatically scale your AWS resources. The experience similar to that of Auto Scaling. You can use Application Auto Scaling to accomplish the following tasks:

Application Auto Scaling can scale the following AWS resources:

For a list of supported regions, see AWS Regions and Endpoints: Application Auto Scaling in the AWS General Reference.

" + "documentation":"

With Application Auto Scaling, you can automatically scale your AWS resources. The experience is similar to that of Auto Scaling. You can use Application Auto Scaling to accomplish the following tasks:

Application Auto Scaling can scale the following AWS resources:

For a list of supported regions, see AWS Regions and Endpoints: Application Auto Scaling in the AWS General Reference.

" } diff --git a/services/appstream/src/main/resources/codegen-resources/service-2.json b/services/appstream/src/main/resources/codegen-resources/service-2.json index 1f0ce6163385..7399924bd6bc 100644 --- a/services/appstream/src/main/resources/codegen-resources/service-2.json +++ b/services/appstream/src/main/resources/codegen-resources/service-2.json @@ -24,9 +24,24 @@ {"shape":"LimitExceededException"}, {"shape":"ResourceNotFoundException"}, {"shape":"ConcurrentModificationException"}, - {"shape":"IncompatibleImageException"} + {"shape":"IncompatibleImageException"}, + {"shape":"OperationNotPermittedException"} ], - "documentation":"

Associate a fleet to a stack.

" + "documentation":"

Associates the specified fleet with the specified stack.

" + }, + "CreateDirectoryConfig":{ + "name":"CreateDirectoryConfig", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"CreateDirectoryConfigRequest"}, + "output":{"shape":"CreateDirectoryConfigResult"}, + "errors":[ + {"shape":"ResourceAlreadyExistsException"}, + {"shape":"LimitExceededException"} + ], + "documentation":"

Creates a directory configuration.

" }, "CreateFleet":{ "name":"CreateFleet", @@ -42,9 +57,43 @@ {"shape":"ResourceNotFoundException"}, {"shape":"LimitExceededException"}, {"shape":"InvalidRoleException"}, - {"shape":"ConcurrentModificationException"} + {"shape":"ConcurrentModificationException"}, + {"shape":"InvalidParameterCombinationException"}, + {"shape":"IncompatibleImageException"} ], - "documentation":"

Creates a new fleet.

" + "documentation":"

Creates a fleet.

" + }, + "CreateImageBuilder":{ + "name":"CreateImageBuilder", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"CreateImageBuilderRequest"}, + "output":{"shape":"CreateImageBuilderResult"}, + "errors":[ + {"shape":"LimitExceededException"}, + {"shape":"ResourceAlreadyExistsException"}, + {"shape":"ResourceNotAvailableException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"InvalidRoleException"}, + {"shape":"ConcurrentModificationException"}, + {"shape":"InvalidParameterCombinationException"}, + {"shape":"IncompatibleImageException"} + ] + }, + "CreateImageBuilderStreamingURL":{ + "name":"CreateImageBuilderStreamingURL", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"CreateImageBuilderStreamingURLRequest"}, + "output":{"shape":"CreateImageBuilderStreamingURLResult"}, + "errors":[ + {"shape":"OperationNotPermittedException"}, + {"shape":"ResourceNotFoundException"} + ] }, "CreateStack":{ "name":"CreateStack", @@ -62,7 +111,7 @@ {"shape":"ResourceNotFoundException"}, {"shape":"InvalidParameterCombinationException"} ], - "documentation":"

Create a new stack.

" + "documentation":"

Creates a stack.

" }, "CreateStreamingURL":{ "name":"CreateStreamingURL", @@ -78,7 +127,21 @@ {"shape":"OperationNotPermittedException"}, {"shape":"InvalidParameterCombinationException"} ], - "documentation":"

Creates a URL to start an AppStream 2.0 streaming session for a user. By default, the URL is valid only for 1 minute from the time that it is generated.

" + "documentation":"

Creates a URL to start a streaming session for the specified user.

By default, the URL is valid only for one minute from the time that it is generated.
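A minimal sketch of calling this operation with the AWS SDK for Java 2.x. The client and method names are assumed to follow the SDK's usual codegen conventions for the operation and members defined here (casing such as Url vs URL may differ), and all argument values are hypothetical.

```java
import software.amazon.awssdk.services.appstream.AppStreamClient;

// Hypothetical request: a 5-minute streaming URL for one user on a given stack and fleet.
AppStreamClient appStream = AppStreamClient.create();
var response = appStream.createStreamingURL(r -> r
        .stackName("ExampleStack")  // hypothetical stack name
        .fleetName("ExampleFleet")  // hypothetical fleet name
        .userId("example-user")     // 2-32 characters matching [\w+=,.@-]*
        .validity(300L));           // URL validity in seconds (1..604800)
System.out.println(response.streamingURL() + " expires " + response.expires());
```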

" + }, + "DeleteDirectoryConfig":{ + "name":"DeleteDirectoryConfig", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"DeleteDirectoryConfigRequest"}, + "output":{"shape":"DeleteDirectoryConfigResult"}, + "errors":[ + {"shape":"ResourceInUseException"}, + {"shape":"ResourceNotFoundException"} + ], + "documentation":"

Deletes the specified directory configuration.

" }, "DeleteFleet":{ "name":"DeleteFleet", @@ -93,7 +156,36 @@ {"shape":"ResourceNotFoundException"}, {"shape":"ConcurrentModificationException"} ], - "documentation":"

Deletes a fleet.

" + "documentation":"

Deletes the specified fleet.

" + }, + "DeleteImage":{ + "name":"DeleteImage", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"DeleteImageRequest"}, + "output":{"shape":"DeleteImageResult"}, + "errors":[ + {"shape":"ResourceInUseException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"OperationNotPermittedException"}, + {"shape":"ConcurrentModificationException"} + ] + }, + "DeleteImageBuilder":{ + "name":"DeleteImageBuilder", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"DeleteImageBuilderRequest"}, + "output":{"shape":"DeleteImageBuilderResult"}, + "errors":[ + {"shape":"ResourceNotFoundException"}, + {"shape":"OperationNotPermittedException"}, + {"shape":"ConcurrentModificationException"} + ] }, "DeleteStack":{ "name":"DeleteStack", @@ -108,7 +200,20 @@ {"shape":"ResourceNotFoundException"}, {"shape":"ConcurrentModificationException"} ], - "documentation":"

Deletes the stack. After this operation completes, the environment can no longer be activated, and any reservations made for the stack are released.

" + "documentation":"

Deletes the specified stack. After this operation completes, the environment can no longer be activated and any reservations made for the stack are released.

" + }, + "DescribeDirectoryConfigs":{ + "name":"DescribeDirectoryConfigs", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"DescribeDirectoryConfigsRequest"}, + "output":{"shape":"DescribeDirectoryConfigsResult"}, + "errors":[ + {"shape":"ResourceNotFoundException"} + ], + "documentation":"

Describes the specified directory configurations.

" }, "DescribeFleets":{ "name":"DescribeFleets", @@ -121,7 +226,19 @@ "errors":[ {"shape":"ResourceNotFoundException"} ], - "documentation":"

If fleet names are provided, this operation describes the specified fleets; otherwise, all the fleets in the account are described.

" + "documentation":"

Describes the specified fleets or all fleets in the account.

" + }, + "DescribeImageBuilders":{ + "name":"DescribeImageBuilders", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"DescribeImageBuildersRequest"}, + "output":{"shape":"DescribeImageBuildersResult"}, + "errors":[ + {"shape":"ResourceNotFoundException"} + ] }, "DescribeImages":{ "name":"DescribeImages", @@ -134,7 +251,7 @@ "errors":[ {"shape":"ResourceNotFoundException"} ], - "documentation":"

Describes the images. If a list of names is not provided, all images in your account are returned. This operation does not return a paginated result.

" + "documentation":"

Describes the specified images or all images in the account.

" }, "DescribeSessions":{ "name":"DescribeSessions", @@ -147,7 +264,7 @@ "errors":[ {"shape":"InvalidParameterCombinationException"} ], - "documentation":"

Describes the streaming sessions for a stack and a fleet. If a user ID is provided, this operation returns streaming sessions for only that user. Pass this value for the nextToken parameter in a subsequent call to this operation to retrieve the next set of items. If an authentication type is not provided, the operation defaults to users authenticated using a streaming URL.

" + "documentation":"

Describes the streaming sessions for the specified stack and fleet. If a user ID is provided, only the streaming sessions for that user are returned. If an authentication type is not provided, the default is to authenticate users using a streaming URL.

" }, "DescribeStacks":{ "name":"DescribeStacks", @@ -160,7 +277,7 @@ "errors":[ {"shape":"ResourceNotFoundException"} ], - "documentation":"

If stack names are not provided, this operation describes the specified stacks; otherwise, all stacks in the account are described. Pass the nextToken value in a subsequent call to this operation to retrieve the next set of items.

" + "documentation":"

Describes the specified stacks or all stacks in the account.

" }, "DisassociateFleet":{ "name":"DisassociateFleet", @@ -175,7 +292,7 @@ {"shape":"ResourceNotFoundException"}, {"shape":"ConcurrentModificationException"} ], - "documentation":"

Disassociates a fleet from a stack.

" + "documentation":"

Disassociates the specified fleet from the specified stack.

" }, "ExpireSession":{ "name":"ExpireSession", @@ -185,7 +302,7 @@ }, "input":{"shape":"ExpireSessionRequest"}, "output":{"shape":"ExpireSessionResult"}, - "documentation":"

This operation immediately stops a streaming session.

" + "documentation":"

Stops the specified streaming session.

" }, "ListAssociatedFleets":{ "name":"ListAssociatedFleets", @@ -195,7 +312,7 @@ }, "input":{"shape":"ListAssociatedFleetsRequest"}, "output":{"shape":"ListAssociatedFleetsResult"}, - "documentation":"

Lists all fleets associated with the stack.

" + "documentation":"

Lists the fleets associated with the specified stack.

" }, "ListAssociatedStacks":{ "name":"ListAssociatedStacks", @@ -205,7 +322,7 @@ }, "input":{"shape":"ListAssociatedStacksRequest"}, "output":{"shape":"ListAssociatedStacksResult"}, - "documentation":"

Lists all stacks to which the specified fleet is associated.

" + "documentation":"

Lists the stacks associated with the specified fleet.

" }, "StartFleet":{ "name":"StartFleet", @@ -221,7 +338,21 @@ {"shape":"LimitExceededException"}, {"shape":"ConcurrentModificationException"} ], - "documentation":"

Starts a fleet.

" + "documentation":"

Starts the specified fleet.

" + }, + "StartImageBuilder":{ + "name":"StartImageBuilder", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"StartImageBuilderRequest"}, + "output":{"shape":"StartImageBuilderResult"}, + "errors":[ + {"shape":"ResourceNotAvailableException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"ConcurrentModificationException"} + ] }, "StopFleet":{ "name":"StopFleet", @@ -235,7 +366,36 @@ {"shape":"ResourceNotFoundException"}, {"shape":"ConcurrentModificationException"} ], - "documentation":"

Stops a fleet.

" + "documentation":"

Stops the specified fleet.

" + }, + "StopImageBuilder":{ + "name":"StopImageBuilder", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"StopImageBuilderRequest"}, + "output":{"shape":"StopImageBuilderResult"}, + "errors":[ + {"shape":"ResourceNotFoundException"}, + {"shape":"OperationNotPermittedException"}, + {"shape":"ConcurrentModificationException"} + ] + }, + "UpdateDirectoryConfig":{ + "name":"UpdateDirectoryConfig", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"UpdateDirectoryConfigRequest"}, + "output":{"shape":"UpdateDirectoryConfigResult"}, + "errors":[ + {"shape":"ResourceInUseException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"ConcurrentModificationException"} + ], + "documentation":"

Updates the specified directory configuration.

" }, "UpdateFleet":{ "name":"UpdateFleet", @@ -253,9 +413,10 @@ {"shape":"ResourceNotAvailableException"}, {"shape":"InvalidParameterCombinationException"}, {"shape":"ConcurrentModificationException"}, - {"shape":"IncompatibleImageException"} + {"shape":"IncompatibleImageException"}, + {"shape":"OperationNotPermittedException"} ], - "documentation":"

Updates an existing fleet. All the attributes except the fleet name can be updated in the STOPPED state. When a fleet is in the RUNNING state, only DisplayName and ComputeCapacity can be updated. A fleet cannot be updated in a status of STARTING or STOPPING.

" + "documentation":"

Updates the specified fleet.

If the fleet is in the STOPPED state, you can update any attribute except the fleet name. If the fleet is in the RUNNING state, you can update the DisplayName and ComputeCapacity attributes. If the fleet is in the STARTING or STOPPING state, you can't update it.
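The state rules above can be summarized as a small predicate; this is plain illustrative Java, not SDK code, and the attribute names are the member names used in this file.

```java
// Which fleet attributes may be updated in a given fleet state (per the rules above).
static boolean isUpdatable(String fleetState, String attribute) {
    switch (fleetState) {
        case "STOPPED": return !"Name".equals(attribute);           // anything but the name
        case "RUNNING": return "DisplayName".equals(attribute)
                            || "ComputeCapacity".equals(attribute); // display name and capacity only
        default:        return false;                               // STARTING or STOPPING: no updates
    }
}
```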

" }, "UpdateStack":{ "name":"UpdateStack", @@ -273,24 +434,35 @@ {"shape":"LimitExceededException"}, {"shape":"IncompatibleImageException"} ], - "documentation":"

Updates the specified fields in the stack with the specified name.

" + "documentation":"

Updates the specified stack.

" } }, "shapes":{ + "AccountName":{ + "type":"string", + "min":1, + "sensitive":true + }, + "AccountPassword":{ + "type":"string", + "max":127, + "min":1, + "sensitive":true + }, "Application":{ "type":"structure", "members":{ "Name":{ "shape":"String", - "documentation":"

The unique identifier for the application.

" + "documentation":"

The name of the application.

" }, "DisplayName":{ "shape":"String", - "documentation":"

The name of the application shown to the end users.

" + "documentation":"

The application name displayed to end users.

" }, "IconURL":{ "shape":"String", - "documentation":"

The URL for the application icon. This URL may be time-limited.

" + "documentation":"

The URL for the application icon. This URL might be time-limited.

" }, "LaunchPath":{ "shape":"String", @@ -298,18 +470,18 @@ }, "LaunchParameters":{ "shape":"String", - "documentation":"

A list of arguments that are passed to the application at launch.

" + "documentation":"

The arguments that are passed to the application at launch.

" }, "Enabled":{ "shape":"Boolean", - "documentation":"

An application can be disabled after image creation if there is a problem.

" + "documentation":"

If there is a problem, the application can be disabled after image creation.

" }, "Metadata":{ "shape":"Metadata", "documentation":"

Additional attributes that describe the application.

" } }, - "documentation":"

An entry for a single application in the application catalog.

" + "documentation":"

Describes an application in the application catalog.

" }, "Applications":{ "type":"list", @@ -328,11 +500,11 @@ "members":{ "FleetName":{ "shape":"String", - "documentation":"

The name of the fleet to associate.

" + "documentation":"

The name of the fleet.

" }, "StackName":{ "shape":"String", - "documentation":"

The name of the stack to which the fleet is associated.

" + "documentation":"

The name of the stack.

" } } }, @@ -360,7 +532,7 @@ "documentation":"

The desired number of streaming instances.

" } }, - "documentation":"

The capacity configuration for the fleet.

" + "documentation":"

Describes the capacity for a fleet.

" }, "ComputeCapacityStatus":{ "type":"structure", @@ -376,14 +548,14 @@ }, "InUse":{ "shape":"Integer", - "documentation":"

The number of instances that are being used for streaming.

" + "documentation":"

The number of instances in use for streaming.

" }, "Available":{ "shape":"Integer", "documentation":"

The number of currently available instances that can be used to stream sessions.

" } }, - "documentation":"

The capacity information for the fleet.

" + "documentation":"

Describes the capacity status for a fleet.

" }, "ConcurrentModificationException":{ "type":"structure", @@ -393,6 +565,37 @@ "documentation":"

An API error occurred. Wait a few minutes and try again.

", "exception":true }, + "CreateDirectoryConfigRequest":{ + "type":"structure", + "required":[ + "DirectoryName", + "OrganizationalUnitDistinguishedNames", + "ServiceAccountCredentials" + ], + "members":{ + "DirectoryName":{ + "shape":"DirectoryName", + "documentation":"

The fully qualified name of the directory (for example, corp.example.com).

" + }, + "OrganizationalUnitDistinguishedNames":{ + "shape":"OrganizationalUnitDistinguishedNamesList", + "documentation":"

The distinguished names of the organizational units for computer accounts.

" + }, + "ServiceAccountCredentials":{ + "shape":"ServiceAccountCredentials", + "documentation":"

The credentials for the service account used by the streaming instance to connect to the directory.

" + } + } + }, + "CreateDirectoryConfigResult":{ + "type":"structure", + "members":{ + "DirectoryConfig":{ + "shape":"DirectoryConfig", + "documentation":"

Information about the directory configuration.

" + } + } + }, "CreateFleetRequest":{ "type":"structure", "required":[ @@ -404,19 +607,20 @@ "members":{ "Name":{ "shape":"Name", - "documentation":"

A unique identifier for the fleet.

" + "documentation":"

A unique name for the fleet.

" }, "ImageName":{ "shape":"String", - "documentation":"

Unique name of the image used by the fleet.

" + "documentation":"

The name of the image used by the fleet.

" }, "InstanceType":{ "shape":"String", - "documentation":"

The instance type of compute resources for the fleet. Fleet instances are launched from this instance type.

" + "documentation":"

The instance type to use when launching fleet instances. Available instance types include stream.standard.medium and stream.standard.large.

" }, + "FleetType":{"shape":"FleetType"}, "ComputeCapacity":{ "shape":"ComputeCapacity", - "documentation":"

The parameters for the capacity allocated to the fleet.

" + "documentation":"

The desired capacity for the fleet.

" }, "VpcConfig":{ "shape":"VpcConfig", @@ -424,55 +628,97 @@ }, "MaxUserDurationInSeconds":{ "shape":"Integer", - "documentation":"

The maximum time for which a streaming session can run. The input can be any numeric value in seconds between 600 and 57600.

" + "documentation":"

The maximum time that a streaming session can run, in seconds. Specify a value between 600 and 57600.

" }, "DisconnectTimeoutInSeconds":{ "shape":"Integer", - "documentation":"

The time after disconnection when a session is considered to have ended. If a user who got disconnected reconnects within this timeout interval, the user is connected back to their previous session. The input can be any numeric value in seconds between 60 and 57600.

" + "documentation":"

The time after disconnection when a session is considered to have ended, in seconds. If a user who was disconnected reconnects within this time interval, the user is connected to their previous session. Specify a value between 60 and 57600.

" }, "Description":{ "shape":"Description", - "documentation":"

The description of the fleet.

" + "documentation":"

The description displayed to end users.

" }, "DisplayName":{ "shape":"DisplayName", - "documentation":"

The display name of the fleet.

" + "documentation":"

The fleet name displayed to end users.

" }, "EnableDefaultInternetAccess":{ "shape":"BooleanObject", - "documentation":"

Enables or disables default Internet access for the fleet.

" + "documentation":"

Enables or disables default internet access for the fleet.

" + }, + "DomainJoinInfo":{ + "shape":"DomainJoinInfo", + "documentation":"

The information needed for streaming instances to join a domain.

" } - }, - "documentation":"

Contains the parameters for the new fleet to create.

" + } }, "CreateFleetResult":{ "type":"structure", "members":{ "Fleet":{ "shape":"Fleet", - "documentation":"

The details for the created fleet.

" + "documentation":"

Information about the fleet.

" } } }, + "CreateImageBuilderRequest":{ + "type":"structure", + "required":[ + "Name", + "ImageName", + "InstanceType" + ], + "members":{ + "Name":{"shape":"Name"}, + "ImageName":{"shape":"String"}, + "InstanceType":{"shape":"String"}, + "Description":{"shape":"Description"}, + "DisplayName":{"shape":"DisplayName"}, + "VpcConfig":{"shape":"VpcConfig"}, + "EnableDefaultInternetAccess":{"shape":"BooleanObject"}, + "DomainJoinInfo":{"shape":"DomainJoinInfo"} + } + }, + "CreateImageBuilderResult":{ + "type":"structure", + "members":{ + "ImageBuilder":{"shape":"ImageBuilder"} + } + }, + "CreateImageBuilderStreamingURLRequest":{ + "type":"structure", + "required":["Name"], + "members":{ + "Name":{"shape":"String"}, + "Validity":{"shape":"Long"} + } + }, + "CreateImageBuilderStreamingURLResult":{ + "type":"structure", + "members":{ + "StreamingURL":{"shape":"String"}, + "Expires":{"shape":"Timestamp"} + } + }, "CreateStackRequest":{ "type":"structure", "required":["Name"], "members":{ "Name":{ "shape":"String", - "documentation":"

The unique identifier for this stack.

" + "documentation":"

The name of the stack.

" }, "Description":{ "shape":"Description", - "documentation":"

The description displayed to end users on the AppStream 2.0 portal.

" + "documentation":"

The description displayed to end users.

" }, "DisplayName":{ "shape":"DisplayName", - "documentation":"

The name displayed to end users on the AppStream 2.0 portal.

" + "documentation":"

The stack name displayed to end users.

" }, "StorageConnectors":{ "shape":"StorageConnectorList", - "documentation":"

The storage connectors to be enabled for the stack.

" + "documentation":"

The storage connectors to enable.

" } } }, @@ -481,7 +727,7 @@ "members":{ "Stack":{ "shape":"Stack", - "documentation":"

The details for the created stack.

" + "documentation":"

Information about the stack.

" } } }, @@ -495,15 +741,15 @@ "members":{ "StackName":{ "shape":"String", - "documentation":"

The stack for which the URL is generated.

" + "documentation":"

The name of the stack.

" }, "FleetName":{ "shape":"String", - "documentation":"

The fleet for which the URL is generated.

" + "documentation":"

The name of the fleet.

" }, "UserId":{ - "shape":"UserId", - "documentation":"

A unique user ID for whom the URL is generated.

" + "shape":"StreamingUrlUserId", + "documentation":"

The ID of the user.

" }, "ApplicationId":{ "shape":"String", @@ -511,11 +757,11 @@ }, "Validity":{ "shape":"Long", - "documentation":"

The duration up to which the URL returned by this action is valid. The input can be any numeric value in seconds between 1 and 604800 seconds.

" + "documentation":"

The time that the streaming URL will be valid, in seconds. Specify a value between 1 and 604800 seconds.

" }, "SessionContext":{ "shape":"String", - "documentation":"

The sessionContext of the streaming URL.

" + "documentation":"

The session context of the streaming URL.

" } } }, @@ -528,17 +774,32 @@ }, "Expires":{ "shape":"Timestamp", - "documentation":"

Elapsed seconds after the Unix epoch, at which time this URL expires.

" + "documentation":"

The elapsed time, in seconds after the Unix epoch, when this URL expires.

" } } }, + "DeleteDirectoryConfigRequest":{ + "type":"structure", + "required":["DirectoryName"], + "members":{ + "DirectoryName":{ + "shape":"DirectoryName", + "documentation":"

The name of the directory configuration.

" + } + } + }, + "DeleteDirectoryConfigResult":{ + "type":"structure", + "members":{ + } + }, "DeleteFleetRequest":{ "type":"structure", "required":["Name"], "members":{ "Name":{ "shape":"String", - "documentation":"

The name of the fleet to be deleted.

" + "documentation":"

The name of the fleet.

" } } }, @@ -547,13 +808,39 @@ "members":{ } }, + "DeleteImageBuilderRequest":{ + "type":"structure", + "required":["Name"], + "members":{ + "Name":{"shape":"Name"} + } + }, + "DeleteImageBuilderResult":{ + "type":"structure", + "members":{ + "ImageBuilder":{"shape":"ImageBuilder"} + } + }, + "DeleteImageRequest":{ + "type":"structure", + "required":["Name"], + "members":{ + "Name":{"shape":"Name"} + } + }, + "DeleteImageResult":{ + "type":"structure", + "members":{ + "Image":{"shape":"Image"} + } + }, "DeleteStackRequest":{ "type":"structure", "required":["Name"], "members":{ "Name":{ "shape":"String", - "documentation":"

The name of the stack to delete.

" + "documentation":"

The name of the stack.

" } } }, @@ -562,12 +849,42 @@ "members":{ } }, + "DescribeDirectoryConfigsRequest":{ + "type":"structure", + "members":{ + "DirectoryNames":{ + "shape":"DirectoryNameList", + "documentation":"

The directory names.

" + }, + "MaxResults":{ + "shape":"Integer", + "documentation":"

The maximum size of each page of results.

" + }, + "NextToken":{ + "shape":"String", + "documentation":"

The pagination token to use to retrieve the next page of results for this operation. If this value is null, it retrieves the first page.

" + } + } + }, + "DescribeDirectoryConfigsResult":{ + "type":"structure", + "members":{ + "DirectoryConfigs":{ + "shape":"DirectoryConfigList", + "documentation":"

Information about the directory configurations.

" + }, + "NextToken":{ + "shape":"String", + "documentation":"

The pagination token to use to retrieve the next page of results for this operation. If there are no more pages, this value is null.

" + } + } + }, "DescribeFleetsRequest":{ "type":"structure", "members":{ "Names":{ "shape":"StringList", - "documentation":"

The fleet names to describe. Use null to describe all the fleets for the AWS account.

" + "documentation":"

The names of the fleets to describe.

" }, "NextToken":{ "shape":"String", @@ -580,7 +897,7 @@ "members":{ "Fleets":{ "shape":"FleetList", - "documentation":"

The list of fleet details.

" + "documentation":"

Information about the fleets.

" }, "NextToken":{ "shape":"String", @@ -588,12 +905,27 @@ } } }, + "DescribeImageBuildersRequest":{ + "type":"structure", + "members":{ + "Names":{"shape":"StringList"}, + "MaxResults":{"shape":"Integer"}, + "NextToken":{"shape":"String"} + } + }, + "DescribeImageBuildersResult":{ + "type":"structure", + "members":{ + "ImageBuilders":{"shape":"ImageBuilderList"}, + "NextToken":{"shape":"String"} + } + }, "DescribeImagesRequest":{ "type":"structure", "members":{ "Names":{ "shape":"StringList", - "documentation":"

A specific list of images to describe.

" + "documentation":"

The names of the images to describe.

" } } }, @@ -602,7 +934,7 @@ "members":{ "Images":{ "shape":"ImageList", - "documentation":"

The list of images.

" + "documentation":"

Information about the images.

" } } }, @@ -615,15 +947,15 @@ "members":{ "StackName":{ "shape":"String", - "documentation":"

The name of the stack for which to list sessions.

" + "documentation":"

The name of the stack.

" }, "FleetName":{ "shape":"String", - "documentation":"

The name of the fleet for which to list sessions.

" + "documentation":"

The name of the fleet.

" }, "UserId":{ "shape":"UserId", - "documentation":"

The user for whom to list sessions. Use null to describe all the sessions for the stack and fleet.

" + "documentation":"

The user ID.

" }, "NextToken":{ "shape":"String", @@ -631,11 +963,11 @@ }, "Limit":{ "shape":"Integer", - "documentation":"

The size of each page of results. The default value is 20 and the maximum supported value is 50.

" + "documentation":"

The size of each page of results. The default value is 20 and the maximum value is 50.

" }, "AuthenticationType":{ "shape":"AuthenticationType", - "documentation":"

The authentication method of the user. It can be API for a user authenticated using a streaming URL, or SAML for a SAML federated user. If an authentication type is not provided, the operation defaults to users authenticated using a streaming URL.

" + "documentation":"

The authentication method. Specify API for a user authenticated using a streaming URL or SAML for a SAML federated user. The default is to authenticate users using a streaming URL.

" } } }, @@ -644,7 +976,7 @@ "members":{ "Sessions":{ "shape":"SessionList", - "documentation":"

The list of streaming sessions.

" + "documentation":"

Information about the streaming sessions.

" }, "NextToken":{ "shape":"String", @@ -657,7 +989,7 @@ "members":{ "Names":{ "shape":"StringList", - "documentation":"

The stack names to describe. Use null to describe all the stacks for the AWS account.

" + "documentation":"

The names of the stacks to describe.

" }, "NextToken":{ "shape":"String", @@ -670,7 +1002,7 @@ "members":{ "Stacks":{ "shape":"StackList", - "documentation":"

The list of stack details.

" + "documentation":"

Information about the stacks.

" }, "NextToken":{ "shape":"String", @@ -682,6 +1014,38 @@ "type":"string", "max":256 }, + "DirectoryConfig":{ + "type":"structure", + "required":["DirectoryName"], + "members":{ + "DirectoryName":{ + "shape":"DirectoryName", + "documentation":"

The fully qualified name of the directory (for example, corp.example.com).

" + }, + "OrganizationalUnitDistinguishedNames":{ + "shape":"OrganizationalUnitDistinguishedNamesList", + "documentation":"

The distinguished names of the organizational units for computer accounts.

" + }, + "ServiceAccountCredentials":{ + "shape":"ServiceAccountCredentials", + "documentation":"

The credentials for the service account used by the streaming instance to connect to the directory.

" + }, + "CreatedTime":{ + "shape":"Timestamp", + "documentation":"

The time the directory configuration was created.

" + } + }, + "documentation":"

Configuration information for the directory used to join domains.

" + }, + "DirectoryConfigList":{ + "type":"list", + "member":{"shape":"DirectoryConfig"} + }, + "DirectoryName":{"type":"string"}, + "DirectoryNameList":{ + "type":"list", + "member":{"shape":"DirectoryName"} + }, "DisassociateFleetRequest":{ "type":"structure", "required":[ @@ -691,11 +1055,11 @@ "members":{ "FleetName":{ "shape":"String", - "documentation":"

The name of the fleet to disassociate.

" + "documentation":"

The name of the fleet.

" }, "StackName":{ "shape":"String", - "documentation":"

The name of the stack with which the fleet is associated.

" + "documentation":"

The name of the stack.

" } } }, @@ -708,6 +1072,20 @@ "type":"string", "max":100 }, + "DomainJoinInfo":{ + "type":"structure", + "members":{ + "DirectoryName":{ + "shape":"DirectoryName", + "documentation":"

The fully qualified name of the directory (for example, corp.example.com).

" + }, + "OrganizationalUnitDistinguishedName":{ + "shape":"OrganizationalUnitDistinguishedName", + "documentation":"

The distinguished name of the organizational unit for computer accounts.

" + } + }, + "documentation":"

Contains the information needed for streaming instances to join a domain.

" + }, "ErrorMessage":{ "type":"string", "documentation":"

The error message in the exception.

" @@ -718,7 +1096,7 @@ "members":{ "SessionId":{ "shape":"String", - "documentation":"

The unique identifier of the streaming session to be stopped.

" + "documentation":"

The ID of the streaming session.

" } } }, @@ -748,11 +1126,11 @@ }, "DisplayName":{ "shape":"String", - "documentation":"

The name displayed to end users on the AppStream 2.0 portal.

" + "documentation":"

The fleet name displayed to end users.

" }, "Description":{ "shape":"String", - "documentation":"

The description displayed to end users on the AppStream 2.0 portal.

" + "documentation":"

The description displayed to end users.

" }, "ImageName":{ "shape":"String", @@ -760,19 +1138,20 @@ }, "InstanceType":{ "shape":"String", - "documentation":"

The instance type of compute resources for the fleet. The fleet instances are launched from this instance type.

" + "documentation":"

The instance type to use when launching fleet instances.

" }, + "FleetType":{"shape":"FleetType"}, "ComputeCapacityStatus":{ "shape":"ComputeCapacityStatus", - "documentation":"

The capacity information for the fleet.

" + "documentation":"

The capacity status for the fleet.

" }, "MaxUserDurationInSeconds":{ "shape":"Integer", - "documentation":"

The maximum time for which a streaming session can run. The value can be any numeric value in seconds between 600 and 57600.

" + "documentation":"

The maximum time that a streaming session can run, in seconds. Specify a value between 600 and 57600.

" }, "DisconnectTimeoutInSeconds":{ "shape":"Integer", - "documentation":"

The time after disconnection when a session is considered to have ended. If a user who got disconnected reconnects within this timeout interval, the user is connected back to their previous session. The input can be any numeric value in seconds between 60 and 57600.

" + "documentation":"

The time after disconnection when a session is considered to have ended, in seconds. If a user who was disconnected reconnects within this time interval, the user is connected to their previous session. Specify a value between 60 and 57600.

" }, "State":{ "shape":"FleetState", @@ -784,45 +1163,50 @@ }, "CreatedTime":{ "shape":"Timestamp", - "documentation":"

The time at which the fleet was created.

" + "documentation":"

The time the fleet was created.

" }, "FleetErrors":{ "shape":"FleetErrors", - "documentation":"

The list of fleet errors is appended to this list.

" + "documentation":"

The fleet errors.

" }, "EnableDefaultInternetAccess":{ "shape":"BooleanObject", - "documentation":"

Whether default Internet access is enabled for the fleet.

" + "documentation":"

Indicates whether default internet access is enabled for the fleet.

" + }, + "DomainJoinInfo":{ + "shape":"DomainJoinInfo", + "documentation":"

The information needed for streaming instances to join a domain.

" } }, "documentation":"

Contains the parameters for a fleet.

" }, "FleetAttribute":{ "type":"string", - "documentation":"

Fleet attribute.

", + "documentation":"

The fleet attribute.

", "enum":[ "VPC_CONFIGURATION", - "VPC_CONFIGURATION_SECURITY_GROUP_IDS" + "VPC_CONFIGURATION_SECURITY_GROUP_IDS", + "DOMAIN_JOIN_INFO" ] }, "FleetAttributes":{ "type":"list", "member":{"shape":"FleetAttribute"}, - "documentation":"

A list of fleet attributes.

" + "documentation":"

The fleet attributes.

" }, "FleetError":{ "type":"structure", "members":{ "ErrorCode":{ "shape":"FleetErrorCode", - "documentation":"

The error code for the fleet error.

" + "documentation":"

The error code.

" }, "ErrorMessage":{ "shape":"String", - "documentation":"

The error message generated when the fleet has errors.

" + "documentation":"

The error message.

" } }, - "documentation":"

The details of the fleet error.

" + "documentation":"

Describes a fleet error.

" }, "FleetErrorCode":{ "type":"string", @@ -837,7 +1221,22 @@ "IAM_SERVICE_ROLE_MISSING_DESCRIBE_SUBNET_ACTION", "SUBNET_NOT_FOUND", "IMAGE_NOT_FOUND", - "INVALID_SUBNET_CONFIGURATION" + "INVALID_SUBNET_CONFIGURATION", + "SECURITY_GROUPS_NOT_FOUND", + "IGW_NOT_ATTACHED", + "IAM_SERVICE_ROLE_MISSING_DESCRIBE_SECURITY_GROUPS_ACTION", + "DOMAIN_JOIN_ERROR_FILE_NOT_FOUND", + "DOMAIN_JOIN_ERROR_ACCESS_DENIED", + "DOMAIN_JOIN_ERROR_LOGON_FAILURE", + "DOMAIN_JOIN_ERROR_INVALID_PARAMETER", + "DOMAIN_JOIN_ERROR_MORE_DATA", + "DOMAIN_JOIN_ERROR_NO_SUCH_DOMAIN", + "DOMAIN_JOIN_ERROR_NOT_SUPPORTED", + "DOMAIN_JOIN_NERR_INVALID_WORKGROUP_NAME", + "DOMAIN_JOIN_NERR_WORKSTATION_NOT_STARTED", + "DOMAIN_JOIN_ERROR_DS_MACHINE_ACCOUNT_QUOTA_EXCEEDED", + "DOMAIN_JOIN_NERR_PASSWORD_EXPIRED", + "DOMAIN_JOIN_INTERNAL_SERVICE_ERROR" ] }, "FleetErrors":{ @@ -847,7 +1246,7 @@ "FleetList":{ "type":"list", "member":{"shape":"Fleet"}, - "documentation":"

A list of fleets.

" + "documentation":"

The fleets.

" }, "FleetState":{ "type":"string", @@ -858,37 +1257,44 @@ "STOPPED" ] }, + "FleetType":{ + "type":"string", + "enum":[ + "ALWAYS_ON", + "ON_DEMAND" + ] + }, "Image":{ "type":"structure", "required":["Name"], "members":{ "Name":{ "shape":"String", - "documentation":"

The unique identifier for the image.

" + "documentation":"

The name of the image.

" }, "Arn":{ "shape":"Arn", - "documentation":"

The ARN for the image.

" + "documentation":"

The ARN of the image.

" }, "BaseImageArn":{ "shape":"Arn", - "documentation":"

The source image ARN from which this image was created.

" + "documentation":"

The ARN of the image from which this image was created.

" }, "DisplayName":{ "shape":"String", - "documentation":"

The display name for the image.

" + "documentation":"

The image name displayed to end users.

" }, "State":{ "shape":"ImageState", - "documentation":"

The image starts in the PENDING state, and then moves to AVAILABLE if image creation succeeds and FAILED if image creation has failed.

" + "documentation":"

The image starts in the PENDING state. If image creation succeeds, the state is AVAILABLE. If image creation fails, the state is FAILED.

" }, "Visibility":{ "shape":"VisibilityType", - "documentation":"

The visibility of an image to the user; images can be public or private.

" + "documentation":"

Indicates whether the image is public or private.

" }, "ImageBuilderSupported":{ "shape":"Boolean", - "documentation":"

Whether an image builder can be launched from this image.

" + "documentation":"

Indicates whether an image builder can be launched from this image.

" }, "Platform":{ "shape":"PlatformType", @@ -896,7 +1302,7 @@ }, "Description":{ "shape":"String", - "documentation":"

A meaningful description for the image.

" + "documentation":"

The description displayed to end users.

" }, "StateChangeReason":{ "shape":"ImageStateChangeReason", @@ -904,18 +1310,69 @@ }, "Applications":{ "shape":"Applications", - "documentation":"

The applications associated with an image.

" + "documentation":"

The applications associated with the image.

" }, "CreatedTime":{ "shape":"Timestamp", - "documentation":"

The timestamp when the image was created.

" + "documentation":"

The time the image was created.

" }, "PublicBaseImageReleasedDate":{ "shape":"Timestamp", - "documentation":"

The AWS release date of the public base image. For private images, this date is the release date of the base image from which the image was created.

" + "documentation":"

The release date of the public base image. For private images, this date is the release date of the base image from which the image was created.

" } }, - "documentation":"

New streaming instances are booted from images. The image stores the application catalog and is connected to fleets.

" + "documentation":"

Describes an image.

" + }, + "ImageBuilder":{ + "type":"structure", + "required":["Name"], + "members":{ + "Name":{"shape":"String"}, + "Arn":{"shape":"Arn"}, + "ImageArn":{"shape":"Arn"}, + "Description":{"shape":"String"}, + "DisplayName":{"shape":"String"}, + "VpcConfig":{"shape":"VpcConfig"}, + "InstanceType":{"shape":"String"}, + "Platform":{"shape":"PlatformType"}, + "State":{"shape":"ImageBuilderState"}, + "StateChangeReason":{"shape":"ImageBuilderStateChangeReason"}, + "CreatedTime":{"shape":"Timestamp"}, + "EnableDefaultInternetAccess":{"shape":"BooleanObject"}, + "DomainJoinInfo":{"shape":"DomainJoinInfo"}, + "ImageBuilderErrors":{"shape":"ResourceErrors"} + } + }, + "ImageBuilderList":{ + "type":"list", + "member":{"shape":"ImageBuilder"} + }, + "ImageBuilderState":{ + "type":"string", + "enum":[ + "PENDING", + "RUNNING", + "STOPPING", + "STOPPED", + "REBOOTING", + "SNAPSHOTTING", + "DELETING", + "FAILED" + ] + }, + "ImageBuilderStateChangeReason":{ + "type":"structure", + "members":{ + "Code":{"shape":"ImageBuilderStateChangeReasonCode"}, + "Message":{"shape":"String"} + } + }, + "ImageBuilderStateChangeReasonCode":{ + "type":"string", + "enum":[ + "INTERNAL_ERROR", + "IMAGE_UNAVAILABLE" + ] }, "ImageList":{ "type":"list", @@ -935,14 +1392,14 @@ "members":{ "Code":{ "shape":"ImageStateChangeReasonCode", - "documentation":"

The state change reason code of the image.

" + "documentation":"

The state change reason code.

" }, "Message":{ "shape":"String", - "documentation":"

The state change reason message to the end user.

" + "documentation":"

The state change reason message.

" } }, - "documentation":"

The reason why the last state change occurred.

" + "documentation":"

Describes the reason why the last state change occurred.

" }, "ImageStateChangeReasonCode":{ "type":"string", @@ -990,7 +1447,7 @@ "members":{ "StackName":{ "shape":"String", - "documentation":"

The name of the stack whose associated fleets are listed.

" + "documentation":"

The name of the stack.

" }, "NextToken":{ "shape":"String", @@ -1003,14 +1460,13 @@ "members":{ "Names":{ "shape":"StringList", - "documentation":"

The names of associated fleets.

" + "documentation":"

The names of the fleets.

" }, "NextToken":{ "shape":"String", "documentation":"

The pagination token to use to retrieve the next page of results for this operation. If there are no more pages, this value is null.

" } - }, - "documentation":"

The response from a successful operation.

" + } }, "ListAssociatedStacksRequest":{ "type":"structure", @@ -1018,7 +1474,7 @@ "members":{ "FleetName":{ "shape":"String", - "documentation":"

The name of the fleet whose associated stacks are listed.

" + "documentation":"

The name of the fleet.

" }, "NextToken":{ "shape":"String", @@ -1031,14 +1487,13 @@ "members":{ "Names":{ "shape":"StringList", - "documentation":"

The names of associated stacks.

" + "documentation":"

The names of the stacks.

" }, "NextToken":{ "shape":"String", "documentation":"

The pagination token to use to retrieve the next page of results for this operation. If there are no more pages, this value is null.

" } - }, - "documentation":"

The response from a successful operation.

" + } }, "Long":{"type":"long"}, "Metadata":{ @@ -1058,6 +1513,14 @@ "documentation":"

The attempted operation is not permitted.

", "exception":true }, + "OrganizationalUnitDistinguishedName":{ + "type":"string", + "max":2000 + }, + "OrganizationalUnitDistinguishedNamesList":{ + "type":"list", + "member":{"shape":"OrganizationalUnitDistinguishedName"} + }, "PlatformType":{ "type":"string", "enum":["WINDOWS"] @@ -1070,6 +1533,18 @@ "documentation":"

The specified resource already exists.

", "exception":true }, + "ResourceError":{ + "type":"structure", + "members":{ + "ErrorCode":{"shape":"FleetErrorCode"}, + "ErrorMessage":{"shape":"String"}, + "ErrorTimestamp":{"shape":"Timestamp"} + } + }, + "ResourceErrors":{ + "type":"list", + "member":{"shape":"ResourceError"} + }, "ResourceIdentifier":{ "type":"string", "documentation":"

The ARN of the resource.

", @@ -1102,9 +1577,27 @@ "SecurityGroupIdList":{ "type":"list", "member":{"shape":"String"}, - "documentation":"

A list of security groups.

", + "documentation":"

The security group IDs.

", "max":5 }, + "ServiceAccountCredentials":{ + "type":"structure", + "required":[ + "AccountName", + "AccountPassword" + ], + "members":{ + "AccountName":{ + "shape":"AccountName", + "documentation":"

The user name of the account. This account must have the following privileges: create computer objects, join computers to the domain, and change/reset the password on descendant computer objects for the organizational units specified.

" + }, + "AccountPassword":{ + "shape":"AccountPassword", + "documentation":"

The password for the account.

" + } + }, + "documentation":"

Describes the credentials for the service account used by the streaming instance to connect to the directory.

" + }, "Session":{ "type":"structure", "required":[ @@ -1117,7 +1610,7 @@ "members":{ "Id":{ "shape":"String", - "documentation":"

The unique ID for a streaming session.

" + "documentation":"

The ID of the streaming session.

" }, "UserId":{ "shape":"UserId", @@ -1125,11 +1618,11 @@ }, "StackName":{ "shape":"String", - "documentation":"

The name of the stack for which the streaming session was created.

" + "documentation":"

The name of the stack for the streaming session.

" }, "FleetName":{ "shape":"String", - "documentation":"

The name of the fleet for which the streaming session was created.

" + "documentation":"

The name of the fleet for the streaming session.

" }, "State":{ "shape":"SessionState", @@ -1137,10 +1630,10 @@ }, "AuthenticationType":{ "shape":"AuthenticationType", - "documentation":"

The authentication method of the user for whom the session was created. It can be API for a user authenticated using a streaming URL or SAML for a SAML federated user.

" + "documentation":"

The authentication method. The user is authenticated using a streaming URL (API) or SAML federation (SAML).

" } }, - "documentation":"

Contains the parameters for a streaming session.

" + "documentation":"

Describes a streaming session.

" }, "SessionList":{ "type":"list", @@ -1166,44 +1659,44 @@ }, "Name":{ "shape":"String", - "documentation":"

The unique identifier of the stack.

" + "documentation":"

The name of the stack.

" }, "Description":{ "shape":"String", - "documentation":"

A meaningful description for the stack.

" + "documentation":"

The description displayed to end users.

" }, "DisplayName":{ "shape":"String", - "documentation":"

A display name for the stack.

" + "documentation":"

The stack name displayed to end users.

" }, "CreatedTime":{ "shape":"Timestamp", - "documentation":"

The timestamp when the stack was created.

" + "documentation":"

The time the stack was created.

" }, "StorageConnectors":{ "shape":"StorageConnectorList", - "documentation":"

The storage connectors to be enabled for the stack.

" + "documentation":"

The storage connectors to enable.

" }, "StackErrors":{ "shape":"StackErrors", - "documentation":"

The list of errors associated with the stack.

" + "documentation":"

The errors for the stack.

" } }, - "documentation":"

Details about a stack.

" + "documentation":"

Describes a stack.

" }, "StackError":{ "type":"structure", "members":{ "ErrorCode":{ "shape":"StackErrorCode", - "documentation":"

The error code of a stack error.

" + "documentation":"

The error code.

" }, "ErrorMessage":{ "shape":"String", - "documentation":"

The error message of a stack error.

" + "documentation":"

The error message.

" } }, - "documentation":"

Contains the parameters for a stack error.

" + "documentation":"

Describes a stack error.

" }, "StackErrorCode":{ "type":"string", @@ -1215,12 +1708,12 @@ "StackErrors":{ "type":"list", "member":{"shape":"StackError"}, - "documentation":"

A list of stack errors.

" + "documentation":"

The stack errors.

" }, "StackList":{ "type":"list", "member":{"shape":"Stack"}, - "documentation":"

A list of stacks.

" + "documentation":"

The stacks.

" }, "StartFleetRequest":{ "type":"structure", @@ -1228,7 +1721,7 @@ "members":{ "Name":{ "shape":"String", - "documentation":"

The name of the fleet to start.

" + "documentation":"

The name of the fleet.

" } } }, @@ -1237,13 +1730,26 @@ "members":{ } }, + "StartImageBuilderRequest":{ + "type":"structure", + "required":["Name"], + "members":{ + "Name":{"shape":"String"} + } + }, + "StartImageBuilderResult":{ + "type":"structure", + "members":{ + "ImageBuilder":{"shape":"ImageBuilder"} + } + }, "StopFleetRequest":{ "type":"structure", "required":["Name"], "members":{ "Name":{ "shape":"String", - "documentation":"

The name of the fleet to stop.

" + "documentation":"

The name of the fleet.

" } } }, @@ -1252,31 +1758,50 @@ "members":{ } }, + "StopImageBuilderRequest":{ + "type":"structure", + "required":["Name"], + "members":{ + "Name":{"shape":"String"} + } + }, + "StopImageBuilderResult":{ + "type":"structure", + "members":{ + "ImageBuilder":{"shape":"ImageBuilder"} + } + }, "StorageConnector":{ "type":"structure", "required":["ConnectorType"], "members":{ "ConnectorType":{ "shape":"StorageConnectorType", - "documentation":"

The type of storage connector. The possible values include: HOMEFOLDERS.

" + "documentation":"

The type of storage connector.

" }, "ResourceIdentifier":{ "shape":"ResourceIdentifier", - "documentation":"

The ARN associated with the storage connector.

" + "documentation":"

The ARN of the storage connector.

" } }, - "documentation":"

Contains the parameters for a storage connector.

" + "documentation":"

Describes a storage connector.

" }, "StorageConnectorList":{ "type":"list", "member":{"shape":"StorageConnector"}, - "documentation":"

A list of storage connectors.

" + "documentation":"

The storage connectors.

" }, "StorageConnectorType":{ "type":"string", - "documentation":"

The type of storage connector. The possible values include: HOMEFOLDERS.

", + "documentation":"

The type of storage connector.

", "enum":["HOMEFOLDERS"] }, + "StreamingUrlUserId":{ + "type":"string", + "max":32, + "min":2, + "pattern":"[\\w+=,.@-]*" + }, "String":{ "type":"string", "min":1 @@ -1288,28 +1813,55 @@ "SubnetIdList":{ "type":"list", "member":{"shape":"String"}, - "documentation":"

A list of subnet IDs.

" + "documentation":"

The subnet IDs.

" }, "Timestamp":{"type":"timestamp"}, + "UpdateDirectoryConfigRequest":{ + "type":"structure", + "required":["DirectoryName"], + "members":{ + "DirectoryName":{ + "shape":"DirectoryName", + "documentation":"

The name of the directory configuration.

" + }, + "OrganizationalUnitDistinguishedNames":{ + "shape":"OrganizationalUnitDistinguishedNamesList", + "documentation":"

The distinguished names of the organizational units for computer accounts.

" + }, + "ServiceAccountCredentials":{ + "shape":"ServiceAccountCredentials", + "documentation":"

The credentials for the service account used by the streaming instance to connect to the directory.

" + } + } + }, + "UpdateDirectoryConfigResult":{ + "type":"structure", + "members":{ + "DirectoryConfig":{ + "shape":"DirectoryConfig", + "documentation":"

Information about the directory configuration.

" + } + } + }, "UpdateFleetRequest":{ "type":"structure", "required":["Name"], "members":{ "ImageName":{ "shape":"String", - "documentation":"

The image name from which a fleet is created.

" + "documentation":"

The name of the image used by the fleet.

" }, "Name":{ "shape":"String", - "documentation":"

The name of the fleet.

" + "documentation":"

A unique name for the fleet.

" }, "InstanceType":{ "shape":"String", - "documentation":"

The instance type of compute resources for the fleet. Fleet instances are launched from this instance type.

" + "documentation":"

The instance type to use when launching fleet instances. Available instance types include stream.standard.medium and stream.standard.large.

" }, "ComputeCapacity":{ "shape":"ComputeCapacity", - "documentation":"

The parameters for the capacity allocated to the fleet.

" + "documentation":"

The desired capacity for the fleet.

" }, "VpcConfig":{ "shape":"VpcConfig", @@ -1317,32 +1869,36 @@ }, "MaxUserDurationInSeconds":{ "shape":"Integer", - "documentation":"

The maximum time for which a streaming session can run. The input can be any numeric value in seconds between 600 and 57600.

" + "documentation":"

The maximum time that a streaming session can run, in seconds. Specify a value between 600 and 57600.

" }, "DisconnectTimeoutInSeconds":{ "shape":"Integer", - "documentation":"

The time after disconnection when a session is considered to have ended. If a user who got disconnected reconnects within this timeout interval, the user is connected back to their previous session. The input can be any numeric value in seconds between 60 and 57600.

" + "documentation":"

The time after disconnection when a session is considered to have ended, in seconds. If a user who was disconnected reconnects within this time interval, the user is connected to their previous session. Specify a value between 60 and 57600.

" }, "DeleteVpcConfig":{ "shape":"Boolean", - "documentation":"

Delete the VPC association for the specified fleet.

", + "documentation":"

Deletes the VPC association for the specified fleet.

", "deprecated":true }, "Description":{ "shape":"Description", - "documentation":"

The description displayed to end users on the AppStream 2.0 portal.

" + "documentation":"

The description displayed to end users.

" }, "DisplayName":{ "shape":"DisplayName", - "documentation":"

The name displayed to end users on the AppStream 2.0 portal.

" + "documentation":"

The fleet name displayed to end users.

" }, "EnableDefaultInternetAccess":{ "shape":"BooleanObject", - "documentation":"

Enables or disables default Internet access for the fleet.

" + "documentation":"

Enables or disables default internet access for the fleet.

" + }, + "DomainJoinInfo":{ + "shape":"DomainJoinInfo", + "documentation":"

The information needed for streaming instances to join a domain.

" }, "AttributesToDelete":{ "shape":"FleetAttributes", - "documentation":"

Fleet attributes to be deleted.

" + "documentation":"

The fleet attributes to delete.

" } } }, @@ -1351,7 +1907,7 @@ "members":{ "Fleet":{ "shape":"Fleet", - "documentation":"

A list of fleet details.

" + "documentation":"

Information about the fleet.

" } } }, @@ -1361,23 +1917,23 @@ "members":{ "DisplayName":{ "shape":"DisplayName", - "documentation":"

The name displayed to end users on the AppStream 2.0 portal.

" + "documentation":"

The stack name displayed to end users.

" }, "Description":{ "shape":"Description", - "documentation":"

The description displayed to end users on the AppStream 2.0 portal.

" + "documentation":"

The description displayed to end users.

" }, "Name":{ "shape":"String", - "documentation":"

The name of the stack to update.

" + "documentation":"

The name of the stack.

" }, "StorageConnectors":{ "shape":"StorageConnectorList", - "documentation":"

The storage connectors to be enabled for the stack.

" + "documentation":"

The storage connectors to enable.

" }, "DeleteStorageConnectors":{ "shape":"Boolean", - "documentation":"

Remove all the storage connectors currently enabled for the stack.

" + "documentation":"

Deletes the storage connectors currently enabled for the stack.

" } } }, @@ -1386,7 +1942,7 @@ "members":{ "Stack":{ "shape":"Stack", - "documentation":"

A list of stack details.

" + "documentation":"

Information about the stack.

" } } }, @@ -1407,15 +1963,15 @@ "members":{ "SubnetIds":{ "shape":"SubnetIdList", - "documentation":"

The list of subnets to which a network interface is established from the fleet instance.

" + "documentation":"

The subnets to which a network interface is established from the fleet instance.

" }, "SecurityGroupIds":{ "shape":"SecurityGroupIdList", - "documentation":"

Security groups associated with the fleet.

" + "documentation":"

The security groups for the fleet.

" } }, - "documentation":"

VPC configuration information.

" + "documentation":"

Describes VPC configuration information.

" } }, - "documentation":"Amazon AppStream 2.0

API documentation for Amazon AppStream 2.0.

" + "documentation":"Amazon AppStream 2.0

You can use Amazon AppStream 2.0 to stream desktop applications to any device running a web browser, without rewriting them.

" } diff --git a/services/appstream/src/main/resources/codegen-resources/waiters-2.json b/services/appstream/src/main/resources/codegen-resources/waiters-2.json index 6672ceed3634..f53f609cb7c3 100644 --- a/services/appstream/src/main/resources/codegen-resources/waiters-2.json +++ b/services/appstream/src/main/resources/codegen-resources/waiters-2.json @@ -9,19 +9,19 @@ { "state": "success", "matcher": "pathAll", - "argument": "fleets[].state", + "argument": "Fleets[].State", "expected": "ACTIVE" }, { "state": "failure", "matcher": "pathAny", - "argument": "fleets[].state", + "argument": "Fleets[].State", "expected": "PENDING_DEACTIVATE" }, { "state": "failure", "matcher": "pathAny", - "argument": "fleets[].state", + "argument": "Fleets[].State", "expected": "INACTIVE" } ] @@ -34,19 +34,19 @@ { "state": "success", "matcher": "pathAll", - "argument": "fleets[].state", + "argument": "Fleets[].State", "expected": "INACTIVE" }, { "state": "failure", "matcher": "pathAny", - "argument": "fleets[].state", + "argument": "Fleets[].State", "expected": "PENDING_ACTIVATE" }, { "state": "failure", "matcher": "pathAny", - "argument": "fleets[].state", + "argument": "Fleets[].State", "expected": "ACTIVE" } ] diff --git a/services/autoscaling/src/main/resources/codegen-resources/service-2.json b/services/autoscaling/src/main/resources/codegen-resources/service-2.json index 731a14d4b5ba..7443c633b267 100644 --- a/services/autoscaling/src/main/resources/codegen-resources/service-2.json +++ b/services/autoscaling/src/main/resources/codegen-resources/service-2.json @@ -108,7 +108,8 @@ "errors":[ {"shape":"LimitExceededFault"}, {"shape":"AlreadyExistsFault"}, - {"shape":"ResourceContentionFault"} + {"shape":"ResourceContentionFault"}, + {"shape":"ResourceInUseFault"} ], "documentation":"

Creates or updates tags for the specified Auto Scaling group.

When you specify a tag with a key that already exists, the operation overwrites the previous tag definition, and you do not get an error message.

For more information, see Tagging Auto Scaling Groups and Instances in the Auto Scaling User Guide.

" }, @@ -199,7 +200,8 @@ }, "input":{"shape":"DeleteTagsType"}, "errors":[ - {"shape":"ResourceContentionFault"} + {"shape":"ResourceContentionFault"}, + {"shape":"ResourceInUseFault"} ], "documentation":"

Deletes the specified tags.

" }, @@ -1354,6 +1356,10 @@ "shape":"InstanceProtected", "documentation":"

Indicates whether newly launched instances are protected from termination by Auto Scaling when scaling in.

" }, + "LifecycleHookSpecificationList":{ + "shape":"LifecycleHookSpecifications", + "documentation":"

One or more lifecycle hooks.

" + }, "Tags":{ "shape":"Tags", "documentation":"

One or more tags.

For more information, see Tagging Auto Scaling Groups and Instances in the Auto Scaling User Guide.

" @@ -1448,6 +1454,37 @@ } } }, + "CustomizedMetricSpecification":{ + "type":"structure", + "required":[ + "MetricName", + "Namespace", + "Statistic" + ], + "members":{ + "MetricName":{ + "shape":"MetricName", + "documentation":"

The name of the metric.

" + }, + "Namespace":{ + "shape":"MetricNamespace", + "documentation":"

The namespace of the metric.

" + }, + "Dimensions":{ + "shape":"MetricDimensions", + "documentation":"

The dimensions of the metric.

" + }, + "Statistic":{ + "shape":"MetricStatistic", + "documentation":"

The statistic of the metric.

" + }, + "Unit":{ + "shape":"MetricUnit", + "documentation":"

The unit of the metric.

" + } + }, + "documentation":"

Configures a customized metric for a target tracking policy.
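For reference, a customized metric specification assembled from the members above (MetricName, Namespace, Dimensions, Statistic, Unit) might look like the following sketch; the metric name, namespace, dimension, and unit values are hypothetical:

    "CustomizedMetricSpecification": {
      "MetricName": "MyBacklogDepth",
      "Namespace": "MyApplication",
      "Dimensions": [
        { "Name": "QueueName", "Value": "my-work-queue" }
      ],
      "Statistic": "Average",
      "Unit": "Count"
    }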

" + }, "DeleteAutoScalingGroupType":{ "type":"structure", "required":["AutoScalingGroupName"], @@ -1925,6 +1962,7 @@ } } }, + "DisableScaleIn":{"type":"boolean"}, "Ebs":{ "type":"structure", "members":{ @@ -2331,7 +2369,7 @@ }, "NotificationTargetARN":{ "shape":"ResourceName", - "documentation":"

The ARN of the notification target that Auto Scaling uses to notify you when an instance is in the transition state for the lifecycle hook. This ARN target can be either an SQS queue or an SNS topic. The notification message sent to the target includes the following:

" + "documentation":"

The ARN of the target that Auto Scaling sends notifications to when an instance is in the transition state for the lifecycle hook. The notification target can be either an SQS queue or an SNS topic.

" }, "RoleARN":{ "shape":"ResourceName", @@ -2343,7 +2381,7 @@ }, "HeartbeatTimeout":{ "shape":"HeartbeatTimeout", - "documentation":"

The maximum time, in seconds, that can elapse before the lifecycle hook times out. The default is 3600 seconds (1 hour). When the lifecycle hook times out, Auto Scaling performs the default action. You can prevent the lifecycle hook from timing out by calling RecordLifecycleActionHeartbeat.

" + "documentation":"

The maximum time, in seconds, that can elapse before the lifecycle hook times out. If the lifecycle hook times out, Auto Scaling performs the default action. You can prevent the lifecycle hook from timing out by calling RecordLifecycleActionHeartbeat.

" }, "GlobalTimeout":{ "shape":"GlobalTimeout", @@ -2354,11 +2392,51 @@ "documentation":"

Defines the action the Auto Scaling group should take when the lifecycle hook timeout elapses or if an unexpected failure occurs. The valid values are CONTINUE and ABANDON. The default value is CONTINUE.

" } }, - "documentation":"

Describes a lifecycle hook, which tells Auto Scaling that you want to perform an action when an instance launches or terminates. When you have a lifecycle hook in place, the Auto Scaling group will either:

For more information, see Auto Scaling Lifecycle in the Auto Scaling User Guide.

" + "documentation":"

Describes a lifecycle hook, which tells Auto Scaling that you want to perform an action whenever it launches instances or whenever it terminates instances.

For more information, see Auto Scaling Lifecycle Hooks in the Auto Scaling User Guide.

" }, "LifecycleHookNames":{ "type":"list", - "member":{"shape":"AsciiStringMaxLen255"} + "member":{"shape":"AsciiStringMaxLen255"}, + "max":50 + }, + "LifecycleHookSpecification":{ + "type":"structure", + "required":["LifecycleHookName"], + "members":{ + "LifecycleHookName":{ + "shape":"AsciiStringMaxLen255", + "documentation":"

The name of the lifecycle hook.

" + }, + "LifecycleTransition":{ + "shape":"LifecycleTransition", + "documentation":"

The state of the EC2 instance to which you want to attach the lifecycle hook. For a list of lifecycle hook types, see DescribeLifecycleHookTypes.

" + }, + "NotificationMetadata":{ + "shape":"XmlStringMaxLen1023", + "documentation":"

Additional information that you want to include any time Auto Scaling sends a message to the notification target.

" + }, + "HeartbeatTimeout":{ + "shape":"HeartbeatTimeout", + "documentation":"

The maximum time, in seconds, that can elapse before the lifecycle hook times out. If the lifecycle hook times out, Auto Scaling performs the default action. You can prevent the lifecycle hook from timing out by calling RecordLifecycleActionHeartbeat.

" + }, + "DefaultResult":{ + "shape":"LifecycleActionResult", + "documentation":"

Defines the action the Auto Scaling group should take when the lifecycle hook timeout elapses or if an unexpected failure occurs. The valid values are CONTINUE and ABANDON. The default value is CONTINUE.

" + }, + "NotificationTargetARN":{ + "shape":"NotificationTargetResourceName", + "documentation":"

The ARN of the target that Auto Scaling sends notifications to when an instance is in the transition state for the lifecycle hook. The notification target can be either an SQS queue or an SNS topic.

" + }, + "RoleARN":{ + "shape":"ResourceName", + "documentation":"

The ARN of the IAM role that allows the Auto Scaling group to publish to the specified notification target.

" + } + }, + "documentation":"

Describes a lifecycle hook, which tells Auto Scaling that you want to perform an action whenever it launches instances or whenever it terminates instances.

For more information, see Auto Scaling Lifecycle Hooks in the Auto Scaling User Guide.
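As an illustration, a single entry in the LifecycleHookSpecificationList of a CreateAutoScalingGroup call might look like the following sketch; the hook name, ARNs, metadata, and timeout are hypothetical, and the transition value assumes the standard autoscaling:EC2_INSTANCE_LAUNCHING lifecycle state:

    {
      "LifecycleHookName": "my-launch-hook",
      "LifecycleTransition": "autoscaling:EC2_INSTANCE_LAUNCHING",
      "NotificationTargetARN": "arn:aws:sqs:us-west-2:123456789012:my-hook-queue",
      "RoleARN": "arn:aws:iam::123456789012:role/my-notification-role",
      "NotificationMetadata": "additional information for the notification target",
      "HeartbeatTimeout": 300,
      "DefaultResult": "CONTINUE"
    }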

" + }, + "LifecycleHookSpecifications":{ + "type":"list", + "member":{"shape":"LifecycleHookSpecification"} }, "LifecycleHooks":{ "type":"list", @@ -2456,6 +2534,30 @@ "type":"list", "member":{"shape":"MetricCollectionType"} }, + "MetricDimension":{ + "type":"structure", + "required":[ + "Name", + "Value" + ], + "members":{ + "Name":{ + "shape":"MetricDimensionName", + "documentation":"

The name of the dimension.

" + }, + "Value":{ + "shape":"MetricDimensionValue", + "documentation":"

The value of the dimension.

" + } + }, + "documentation":"

Describes the dimension of a metric.

" + }, + "MetricDimensionName":{"type":"string"}, + "MetricDimensionValue":{"type":"string"}, + "MetricDimensions":{ + "type":"list", + "member":{"shape":"MetricDimension"} + }, "MetricGranularityType":{ "type":"structure", "members":{ @@ -2470,7 +2572,29 @@ "type":"list", "member":{"shape":"MetricGranularityType"} }, + "MetricName":{"type":"string"}, + "MetricNamespace":{"type":"string"}, "MetricScale":{"type":"double"}, + "MetricStatistic":{ + "type":"string", + "enum":[ + "Average", + "Minimum", + "Maximum", + "SampleCount", + "Sum" + ] + }, + "MetricType":{ + "type":"string", + "enum":[ + "ASGAverageCPUUtilization", + "ASGAverageNetworkIn", + "ASGAverageNetworkOut", + "ALBRequestCountPerTarget" + ] + }, + "MetricUnit":{"type":"string"}, "Metrics":{ "type":"list", "member":{"shape":"XmlStringMaxLen255"} @@ -2531,8 +2655,13 @@ "PolicyARN":{ "shape":"ResourceName", "documentation":"

The Amazon Resource Name (ARN) of the policy.

" + }, + "Alarms":{ + "shape":"Alarms", + "documentation":"

The CloudWatch alarms created for the target tracking policy.

" } - } + }, + "documentation":"

Contains the output of PutScalingPolicy.

" }, "PolicyIncrement":{"type":"integer"}, "PolicyNames":{ @@ -2543,6 +2672,21 @@ "type":"list", "member":{"shape":"XmlStringMaxLen64"} }, + "PredefinedMetricSpecification":{ + "type":"structure", + "required":["PredefinedMetricType"], + "members":{ + "PredefinedMetricType":{ + "shape":"MetricType", + "documentation":"

The metric type.

" + }, + "ResourceLabel":{ + "shape":"XmlStringMaxLen1023", + "documentation":"

Identifies the resource associated with the metric type. The following predefined metrics are available:

For the predefined metric types ASGAverageCPUUtilization, ASGAverageNetworkIn, and ASGAverageNetworkOut, the parameter must not be specified because the resource associated with the metric type is the Auto Scaling group. For the predefined metric type ALBRequestCountPerTarget, the parameter must be specified in the format app/load-balancer-name/load-balancer-id/targetgroup/target-group-name/target-group-id, where app/load-balancer-name/load-balancer-id is the final portion of the load balancer ARN and targetgroup/target-group-name/target-group-id is the final portion of the target group ARN. The target group must be attached to the Auto Scaling group.

" + } + }, + "documentation":"

Configures a predefined metric for a target tracking policy.
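For example, a predefined metric specification for ALBRequestCountPerTarget might look like the following sketch; the load balancer and target group portions of the resource label are hypothetical:

    "PredefinedMetricSpecification": {
      "PredefinedMetricType": "ALBRequestCountPerTarget",
      "ResourceLabel": "app/my-load-balancer/50dc6c495c0c9188/targetgroup/my-targets/73e2d6bc24d8a067"
    }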

" + }, "ProcessNames":{ "type":"list", "member":{"shape":"XmlStringMaxLen255"} @@ -2612,7 +2756,7 @@ }, "HeartbeatTimeout":{ "shape":"HeartbeatTimeout", - "documentation":"

The amount of time, in seconds, that can elapse before the lifecycle hook times out. When the lifecycle hook times out, Auto Scaling performs the default action. You can prevent the lifecycle hook from timing out by calling RecordLifecycleActionHeartbeat. The default is 3600 seconds (1 hour).

" + "documentation":"

The maximum time, in seconds, that can elapse before the lifecycle hook times out. The range is from 30 to 7200 seconds. The default is 3600 seconds (1 hour).

If the lifecycle hook times out, Auto Scaling performs the default action. You can prevent the lifecycle hook from timing out by calling RecordLifecycleActionHeartbeat.

" }, "DefaultResult":{ "shape":"LifecycleActionResult", @@ -2646,8 +2790,7 @@ "type":"structure", "required":[ "AutoScalingGroupName", - "PolicyName", - "AdjustmentType" + "PolicyName" ], "members":{ "AutoScalingGroupName":{ @@ -2660,11 +2803,11 @@ }, "PolicyType":{ "shape":"XmlStringMaxLen64", - "documentation":"

The policy type. Valid values are SimpleScaling and StepScaling. If the policy type is null, the value is treated as SimpleScaling.

" + "documentation":"

The policy type. The valid values are SimpleScaling, StepScaling, and TargetTrackingScaling. If the policy type is null, the value is treated as SimpleScaling.

" }, "AdjustmentType":{ "shape":"XmlStringMaxLen255", - "documentation":"

The adjustment type. Valid values are ChangeInCapacity, ExactCapacity, and PercentChangeInCapacity.

For more information, see Dynamic Scaling in the Auto Scaling User Guide.

" + "documentation":"

The adjustment type. The valid values are ChangeInCapacity, ExactCapacity, and PercentChangeInCapacity.

This parameter is supported if the policy type is SimpleScaling or StepScaling.

For more information, see Dynamic Scaling in the Auto Scaling User Guide.

" }, "MinAdjustmentStep":{ "shape":"MinAdjustmentStep", @@ -2672,7 +2815,7 @@ }, "MinAdjustmentMagnitude":{ "shape":"MinAdjustmentMagnitude", - "documentation":"

The minimum number of instances to scale. If the value of AdjustmentType is PercentChangeInCapacity, the scaling policy changes the DesiredCapacity of the Auto Scaling group by at least this many instances. Otherwise, the error is ValidationError.

" + "documentation":"

The minimum number of instances to scale. If the value of AdjustmentType is PercentChangeInCapacity, the scaling policy changes the DesiredCapacity of the Auto Scaling group by at least this many instances. Otherwise, the error is ValidationError.

This parameter is supported if the policy type is SimpleScaling or StepScaling.

" }, "ScalingAdjustment":{ "shape":"PolicyIncrement", @@ -2680,11 +2823,11 @@ }, "Cooldown":{ "shape":"Cooldown", - "documentation":"

The amount of time, in seconds, after a scaling activity completes and before the next scaling activity can start. If this parameter is not specified, the default cooldown period for the group applies.

This parameter is not supported unless the policy type is SimpleScaling.

For more information, see Auto Scaling Cooldowns in the Auto Scaling User Guide.

" + "documentation":"

The amount of time, in seconds, after a scaling activity completes and before the next scaling activity can start. If this parameter is not specified, the default cooldown period for the group applies.

This parameter is supported if the policy type is SimpleScaling.

For more information, see Auto Scaling Cooldowns in the Auto Scaling User Guide.

" }, "MetricAggregationType":{ "shape":"XmlStringMaxLen32", - "documentation":"

The aggregation type for the CloudWatch metrics. Valid values are Minimum, Maximum, and Average. If the aggregation type is null, the value is treated as Average.

This parameter is not supported if the policy type is SimpleScaling.

" + "documentation":"

The aggregation type for the CloudWatch metrics. The valid values are Minimum, Maximum, and Average. If the aggregation type is null, the value is treated as Average.

This parameter is supported if the policy type is StepScaling.

" }, "StepAdjustments":{ "shape":"StepAdjustments", @@ -2692,7 +2835,11 @@ }, "EstimatedInstanceWarmup":{ "shape":"EstimatedInstanceWarmup", - "documentation":"

The estimated time, in seconds, until a newly launched instance can contribute to the CloudWatch metrics. The default is to use the value specified for the default cooldown period for the group.

This parameter is not supported if the policy type is SimpleScaling.

" + "documentation":"

The estimated time, in seconds, until a newly launched instance can contribute to the CloudWatch metrics. The default is to use the value specified for the default cooldown period for the group.

This parameter is supported if the policy type is StepScaling or TargetTrackingScaling.

" + }, + "TargetTrackingConfiguration":{ + "shape":"TargetTrackingConfiguration", + "documentation":"

A target tracking policy.

This parameter is required if the policy type is TargetTrackingScaling and not supported otherwise.

" } } }, @@ -2900,6 +3047,10 @@ "Alarms":{ "shape":"Alarms", "documentation":"

The CloudWatch alarms related to the policy.

" + }, + "TargetTrackingConfiguration":{ + "shape":"TargetTrackingConfiguration", + "documentation":"

A target tracking policy.

" } }, "documentation":"

Describes a scaling policy.

" @@ -3196,6 +3347,29 @@ "type":"list", "member":{"shape":"XmlStringMaxLen511"} }, + "TargetTrackingConfiguration":{ + "type":"structure", + "required":["TargetValue"], + "members":{ + "PredefinedMetricSpecification":{ + "shape":"PredefinedMetricSpecification", + "documentation":"

A predefined metric. You can specify either a predefined metric or a customized metric.

" + }, + "CustomizedMetricSpecification":{ + "shape":"CustomizedMetricSpecification", + "documentation":"

A customized metric.

" + }, + "TargetValue":{ + "shape":"MetricScale", + "documentation":"

The target value for the metric.

" + }, + "DisableScaleIn":{ + "shape":"DisableScaleIn", + "documentation":"

Indicates whether scale in by the target tracking policy is disabled. If the value is true, scale in is disabled and the target tracking policy won't remove instances from the Auto Scaling group. Otherwise, scale in is enabled and the target tracking policy can remove instances from the Auto Scaling group. The default value is false.

" + } + }, + "documentation":"

Represents a target tracking policy configuration.
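Putting the pieces together, a PutScalingPolicy request that creates a target tracking policy might carry a configuration like the following sketch; the group name, policy name, and target value are hypothetical:

    {
      "AutoScalingGroupName": "my-asg",
      "PolicyName": "cpu-50-target-tracking",
      "PolicyType": "TargetTrackingScaling",
      "TargetTrackingConfiguration": {
        "PredefinedMetricSpecification": { "PredefinedMetricType": "ASGAverageCPUUtilization" },
        "TargetValue": 50.0,
        "DisableScaleIn": false
      }
    }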

" + }, "TerminateInstanceInAutoScalingGroupType":{ "type":"structure", "required":[ diff --git a/services/batch/src/main/resources/codegen-resources/examples-1.json b/services/batch/src/main/resources/codegen-resources/examples-1.json index ddaaf42d13d5..68001e3c6bb3 100644 --- a/services/batch/src/main/resources/codegen-resources/examples-1.json +++ b/services/batch/src/main/resources/codegen-resources/examples-1.json @@ -124,7 +124,7 @@ } ], "jobQueueName": "LowPriority", - "priority": 10, + "priority": 1, "state": "ENABLED" }, "output": { @@ -154,7 +154,7 @@ } ], "jobQueueName": "HighPriority", - "priority": 1, + "priority": 10, "state": "ENABLED" }, "output": { diff --git a/services/batch/src/main/resources/codegen-resources/service-2.json b/services/batch/src/main/resources/codegen-resources/service-2.json index 48ac970bc744..6b08149c1f26 100644 --- a/services/batch/src/main/resources/codegen-resources/service-2.json +++ b/services/batch/src/main/resources/codegen-resources/service-2.json @@ -23,7 +23,7 @@ {"shape":"ClientException"}, {"shape":"ServerException"} ], - "documentation":"

Cancels jobs in an AWS Batch job queue. Jobs that are in the SUBMITTED, PENDING, or RUNNABLE state are cancelled. Jobs that have progressed to STARTING or RUNNING are not cancelled (but the API operation still succeeds, even if no jobs are cancelled); these jobs must be terminated with the TerminateJob operation.

" + "documentation":"

Cancels a job in an AWS Batch job queue. Jobs that are in the SUBMITTED, PENDING, or RUNNABLE state are cancelled. Jobs that have progressed to STARTING or RUNNING are not cancelled (but the API operation still succeeds, even if no job is cancelled); these jobs must be terminated with the TerminateJob operation.

" }, "CreateComputeEnvironment":{ "name":"CreateComputeEnvironment", @@ -37,7 +37,7 @@ {"shape":"ClientException"}, {"shape":"ServerException"} ], - "documentation":"

Creates an AWS Batch compute environment. You can create MANAGED or UNMANAGED compute environments.

In a managed compute environment, AWS Batch manages the compute resources within the environment, based on the compute resources that you specify. Instances launched into a managed compute environment use the latest Amazon ECS-optimized AMI. You can choose to use Amazon EC2 On-Demand instances in your managed compute environment, or you can use Amazon EC2 Spot instances that only launch when the Spot bid price is below a specified percentage of the On-Demand price.

In an unmanaged compute environment, you can manage your own compute resources. This provides more compute resource configuration options, such as using a custom AMI, but you must ensure that your AMI meets the Amazon ECS container instance AMI specification. For more information, see Container Instance AMIs in the Amazon EC2 Container Service Developer Guide. After you have created your unmanaged compute environment, you can use the DescribeComputeEnvironments operation to find the Amazon ECS cluster that is associated with it and then manually launch your container instances into that Amazon ECS cluster. For more information, see Launching an Amazon ECS Container Instance in the Amazon EC2 Container Service Developer Guide.

" + "documentation":"

Creates an AWS Batch compute environment. You can create MANAGED or UNMANAGED compute environments.

In a managed compute environment, AWS Batch manages the compute resources within the environment, based on the compute resources that you specify. Instances launched into a managed compute environment use a recent, approved version of the Amazon ECS-optimized AMI. You can choose to use Amazon EC2 On-Demand instances in your managed compute environment, or you can use Amazon EC2 Spot instances that only launch when the Spot bid price is below a specified percentage of the On-Demand price.

In an unmanaged compute environment, you can manage your own compute resources. This provides more compute resource configuration options, such as using a custom AMI, but you must ensure that your AMI meets the Amazon ECS container instance AMI specification. For more information, see Container Instance AMIs in the Amazon EC2 Container Service Developer Guide. After you have created your unmanaged compute environment, you can use the DescribeComputeEnvironments operation to find the Amazon ECS cluster that is associated with it and then manually launch your container instances into that Amazon ECS cluster. For more information, see Launching an Amazon ECS Container Instance in the Amazon EC2 Container Service Developer Guide.

" }, "CreateJobQueue":{ "name":"CreateJobQueue", @@ -79,7 +79,7 @@ {"shape":"ClientException"}, {"shape":"ServerException"} ], - "documentation":"

Deletes the specified job queue. You must first disable submissions for a queue with the UpdateJobQueue operation and terminate any jobs that have not completed with the TerminateJob.

It is not necessary to disassociate compute environments from a queue before submitting a DeleteJobQueue request.

" + "documentation":"

Deletes the specified job queue. You must first disable submissions for a queue with the UpdateJobQueue operation. All jobs in the queue are terminated when you delete a job queue.

It is not necessary to disassociate compute environments from a queue before submitting a DeleteJobQueue request.

" }, "DeregisterJobDefinition":{ "name":"DeregisterJobDefinition", @@ -163,7 +163,7 @@ {"shape":"ClientException"}, {"shape":"ServerException"} ], - "documentation":"

Returns a list of task jobs for a specified job queue. You can filter the results by job status with the jobStatus parameter.

" + "documentation":"

Returns a list of jobs for a specified job queue. You can filter the results by job status with the jobStatus parameter. If you do not specify a status, only RUNNING jobs are returned.

" }, "RegisterJobDefinition":{ "name":"RegisterJobDefinition", @@ -205,7 +205,7 @@ {"shape":"ClientException"}, {"shape":"ServerException"} ], - "documentation":"

Terminates jobs in a job queue. Jobs that are in the STARTING or RUNNING state are terminated, which causes them to transition to FAILED. Jobs that have not progressed to the STARTING state are cancelled.

" + "documentation":"

Terminates a job in a job queue. Jobs that are in the STARTING or RUNNING state are terminated, which causes them to transition to FAILED. Jobs that have not progressed to the STARTING state are cancelled.

" }, "UpdateComputeEnvironment":{ "name":"UpdateComputeEnvironment", @@ -246,7 +246,7 @@ }, "taskArn":{ "shape":"String", - "documentation":"

The Amazon Resource Name (ARN) of the Amazon ECS task that is associated with the job attempt.

" + "documentation":"

The Amazon Resource Name (ARN) of the Amazon ECS task that is associated with the job attempt. Each container attempt receives a task ARN when it reaches the STARTING status.

" }, "exitCode":{ "shape":"Integer", @@ -256,7 +256,10 @@ "shape":"String", "documentation":"

A short (255 max characters) human-readable string to provide additional details about a running or stopped container.

" }, - "logStreamName":{"shape":"String"} + "logStreamName":{ + "shape":"String", + "documentation":"

The name of the CloudWatch Logs log stream associated with the container. The log group for AWS Batch jobs is /aws/batch/job. Each container attempt receives a log stream name when it reaches the RUNNING status.

" + } }, "documentation":"

An object representing the details of a container that is part of a job attempt.

" }, @@ -328,7 +331,7 @@ "members":{ "jobId":{ "shape":"String", - "documentation":"

A list of up to 100 job IDs to cancel.

" + "documentation":"

The AWS Batch job ID of the job to cancel.

" }, "reason":{ "shape":"String", @@ -453,7 +456,7 @@ }, "instanceTypes":{ "shape":"StringList", - "documentation":"

The instances types that may launched.

" + "documentation":"

The instance types that may be launched. You can specify instance families to launch any instance type within those families (for example, c4 or p3), or you can specify specific sizes within a family (such as c4.8xlarge). You can also choose optimal to pick instance types on the fly (from the latest C, M, and R instance families) that match the demand of your job queues.

" }, "imageId":{ "shape":"String", @@ -473,7 +476,7 @@ }, "instanceRole":{ "shape":"String", - "documentation":"

The Amazon ECS instance role applied to Amazon EC2 instances in a compute environment.

" + "documentation":"

The Amazon ECS instance profile applied to Amazon EC2 instances in a compute environment. You can specify the short name or full Amazon Resource Name (ARN) of an instance profile. For example, ecsInstanceRole or arn:aws:iam::<aws_account_id>:instance-profile/ecsInstanceRole. For more information, see Amazon ECS Instance Role in the AWS Batch User Guide.

" }, "tags":{ "shape":"TagsMap", @@ -537,7 +540,7 @@ }, "environment":{ "shape":"EnvironmentVariables", - "documentation":"

The environment variables to pass to a container.

" + "documentation":"

The environment variables to pass to a container.

Environment variables must not start with AWS_BATCH; this naming convention is reserved for variables that are set by the AWS Batch service.

" }, "mountPoints":{ "shape":"MountPoints", @@ -573,9 +576,12 @@ }, "taskArn":{ "shape":"String", - "documentation":"

The Amazon Resource Name (ARN) of the Amazon ECS task that is associated with the container job.

" + "documentation":"

The Amazon Resource Name (ARN) of the Amazon ECS task that is associated with the container job. Each container attempt receives a task ARN when it reaches the STARTING status.

" }, - "logStreamName":{"shape":"String"} + "logStreamName":{ + "shape":"String", + "documentation":"

The name of the CloudWatch Logs log stream associated with the container. The log group for AWS Batch jobs is /aws/batch/job. Each container attempt receives a log stream name when it reaches the RUNNING status.

" + } }, "documentation":"

An object representing the details of a container that is part of a job.

" }, @@ -596,7 +602,7 @@ }, "environment":{ "shape":"EnvironmentVariables", - "documentation":"

The environment variables to send to the container. You can add new environment variables, which are added to the container at launch, or you can override the existing environment variables from the Docker image or the job definition.

" + "documentation":"

The environment variables to send to the container. You can add new environment variables, which are added to the container at launch, or you can override the existing environment variables from the Docker image or the job definition.

Environment variables must not start with AWS_BATCH; this naming convention is reserved for variables that are set by the AWS Batch service.

" } }, "documentation":"

The overrides that should be sent to a container.

" @@ -615,11 +621,11 @@ }, "vcpus":{ "shape":"Integer", - "documentation":"

The number of vCPUs reserved for the container. This parameter maps to CpuShares in the Create a container section of the Docker Remote API and the --cpu-shares option to docker run. Each vCPU is equivalent to 1,024 CPU shares.

" + "documentation":"

The number of vCPUs reserved for the container. This parameter maps to CpuShares in the Create a container section of the Docker Remote API and the --cpu-shares option to docker run. Each vCPU is equivalent to 1,024 CPU shares. You must specify at least 1 vCPU.

" }, "memory":{ "shape":"Integer", - "documentation":"

The hard limit (in MiB) of memory to present to the container. If your container attempts to exceed the memory specified here, the container is killed. This parameter maps to Memory in the Create a container section of the Docker Remote API and the --memory option to docker run.

" + "documentation":"

The hard limit (in MiB) of memory to present to the container. If your container attempts to exceed the memory specified here, the container is killed. This parameter maps to Memory in the Create a container section of the Docker Remote API and the --memory option to docker run. You must specify at least 4 MiB of memory for a job.

" }, "command":{ "shape":"StringList", @@ -635,7 +641,7 @@ }, "environment":{ "shape":"EnvironmentVariables", - "documentation":"

The environment variables to pass to a container. This parameter maps to Env in the Create a container section of the Docker Remote API and the --env option to docker run.

We do not recommend using plain text environment variables for sensitive information, such as credential data.

" + "documentation":"

The environment variables to pass to a container. This parameter maps to Env in the Create a container section of the Docker Remote API and the --env option to docker run.

We do not recommend using plain text environment variables for sensitive information, such as credential data.

Environment variables must not start with AWS_BATCH; this naming convention is reserved for variables that are set by the AWS Batch service.
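A minimal containerProperties fragment that respects these limits (at least 1 vCPU, at least 4 MiB of memory, no AWS_BATCH-prefixed variables) might look like the following sketch; the image, command, and variable names are hypothetical:

    "containerProperties": {
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-batch-image:latest",
      "vcpus": 1,
      "memory": 128,
      "command": ["echo", "hello"],
      "environment": [
        { "name": "MY_INPUT_BUCKET", "value": "my-input-bucket" }
      ]
    }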

" }, "mountPoints":{ "shape":"MountPoints", @@ -670,7 +676,7 @@ "members":{ "computeEnvironmentName":{ "shape":"String", - "documentation":"

The name for your compute environment. Up to 128 letters (uppercase and lowercase), numbers, and underscores are allowed.

" + "documentation":"

The name for your compute environment. Up to 128 letters (uppercase and lowercase), numbers, hyphens, and underscores are allowed.

" }, "type":{ "shape":"CEType", @@ -686,7 +692,7 @@ }, "serviceRole":{ "shape":"String", - "documentation":"

The full Amazon Resource Name (ARN) of the IAM role that allows AWS Batch to make calls to other AWS services on your behalf.

" + "documentation":"

The full Amazon Resource Name (ARN) of the IAM role that allows AWS Batch to make calls to other AWS services on your behalf.

If your specified role has a path other than /, then you must either specify the full role ARN (this is recommended) or prefix the role name with the path.

Depending on how you created your AWS Batch service role, its ARN may contain the service-role path prefix. When you only specify the name of the service role, AWS Batch assumes that your ARN does not use the service-role path prefix. Because of this, we recommend that you specify the full ARN of your service role when you create compute environments.
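For instance, if your service role's ARN contains the service-role path prefix, the full-ARN form recommended here would look something like the following sketch (the account ID and role name are hypothetical):

    "serviceRole": "arn:aws:iam::123456789012:role/service-role/AWSBatchServiceRole"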

" } } }, @@ -721,7 +727,7 @@ }, "priority":{ "shape":"Integer", - "documentation":"

The priority of the job queue. Job queues with a higher priority (or a lower integer value for the priority parameter) are evaluated first when associated with same compute environment. Priority is determined in ascending order, for example, a job queue with a priority value of 1 is given scheduling preference over a job queue with a priority value of 10.

" + "documentation":"

The priority of the job queue. Job queues with a higher priority (or a higher integer value for the priority parameter) are evaluated first when associated with the same compute environment. Priority is determined in descending order. For example, a job queue with a priority value of 10 is given scheduling preference over a job queue with a priority value of 1.
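For example, with the two queues from the examples-1.json change above, the HighPriority queue (priority 10) is preferred over the LowPriority queue (priority 1) when both are associated with the same compute environment:

    { "jobQueueName": "HighPriority", "priority": 10, "state": "ENABLED" }
    { "jobQueueName": "LowPriority", "priority": 1, "state": "ENABLED" }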

" }, "computeEnvironmentOrder":{ "shape":"ComputeEnvironmentOrders", @@ -1184,7 +1190,7 @@ }, "jobStatus":{ "shape":"JobStatus", - "documentation":"

The job status with which to filter jobs in the specified queue.

" + "documentation":"

The job status with which to filter jobs in the specified queue. If you do not specify a status, only RUNNING jobs are returned.

" }, "maxResults":{ "shape":"Integer", @@ -1247,7 +1253,7 @@ "members":{ "jobDefinitionName":{ "shape":"String", - "documentation":"

The name of the job definition to register.

" + "documentation":"

The name of the job definition to register. Up to 128 letters (uppercase and lowercase), numbers, hyphens, and underscores are allowed.

" }, "type":{ "shape":"JobDefinitionType", @@ -1277,7 +1283,7 @@ "members":{ "jobDefinitionName":{ "shape":"String", - "documentation":"

The name of the job definition.

" + "documentation":"

The name of the job definition.

" }, "jobDefinitionArn":{ "shape":"String", @@ -1324,7 +1330,7 @@ "members":{ "jobName":{ "shape":"String", - "documentation":"

The name of the job. A name must be 1 to 128 characters in length.

Pattern: ^[a-zA-Z0-9_]+$

" + "documentation":"

The name of the job. The first character must be alphanumeric, and up to 128 letters (uppercase and lowercase), numbers, hyphens, and underscores are allowed.

" }, "jobQueue":{ "shape":"String", @@ -1332,7 +1338,7 @@ }, "dependsOn":{ "shape":"JobDependencyList", - "documentation":"

A list of job IDs on which this job depends. A job can depend upon a maximum of 100 jobs.

" + "documentation":"

A list of job IDs on which this job depends. A job can depend upon a maximum of 20 jobs.

" }, "jobDefinition":{ "shape":"String", @@ -1383,7 +1389,7 @@ "members":{ "jobId":{ "shape":"String", - "documentation":"

Job IDs to be terminated. Up to 100 jobs can be specified.

" + "documentation":"

The AWS Batch job ID of the job to terminate.

" }, "reason":{ "shape":"String", @@ -1441,7 +1447,7 @@ }, "serviceRole":{ "shape":"String", - "documentation":"

The name or full Amazon Resource Name (ARN) of the IAM role that allows AWS Batch to make calls to ECS, Auto Scaling, and EC2 on your behalf.

" + "documentation":"

The full Amazon Resource Name (ARN) of the IAM role that allows AWS Batch to make calls to other AWS services on your behalf.

If your specified role has a path other than /, then you must either specify the full role ARN (this is recommended) or prefix the role name with the path.

Depending on how you created your AWS Batch service role, its ARN may contain the service-role path prefix. When you only specify the name of the service role, AWS Batch assumes that your ARN does not use the service-role path prefix. Because of this, we recommend that you specify the full ARN of your service role when you create compute environments.

" } } }, @@ -1472,7 +1478,7 @@ }, "priority":{ "shape":"Integer", - "documentation":"

The priority of the job queue. Job queues with a higher priority (or a lower integer value for the priority parameter) are evaluated first when associated with same compute environment. Priority is determined in ascending order, for example, a job queue with a priority value of 1 is given scheduling preference over a job queue with a priority value of 10.

" + "documentation":"

The priority of the job queue. Job queues with a higher priority (or a higher integer value for the priority parameter) are evaluated first when associated with the same compute environment. Priority is determined in descending order. For example, a job queue with a priority value of 10 is given scheduling preference over a job queue with a priority value of 1.

" }, "computeEnvironmentOrder":{ "shape":"ComputeEnvironmentOrders", diff --git a/services/budgets/src/main/resources/codegen-resources/service-2.json b/services/budgets/src/main/resources/codegen-resources/service-2.json index 9dfcd5eb4e09..ccdb6a6c3a9e 100644 --- a/services/budgets/src/main/resources/codegen-resources/service-2.json +++ b/services/budgets/src/main/resources/codegen-resources/service-2.json @@ -199,7 +199,8 @@ "errors":[ {"shape":"InternalErrorException"}, {"shape":"InvalidParameterException"}, - {"shape":"NotFoundException"} + {"shape":"NotFoundException"}, + {"shape":"DuplicateRecordException"} ], "documentation":"Update the information about a notification already created" }, @@ -214,7 +215,8 @@ "errors":[ {"shape":"InternalErrorException"}, {"shape":"InvalidParameterException"}, - {"shape":"NotFoundException"} + {"shape":"NotFoundException"}, + {"shape":"DuplicateRecordException"} ], "documentation":"Update a subscriber" } @@ -250,16 +252,17 @@ }, "BudgetName":{ "type":"string", - "documentation":"A string represents the budget name. No \":\" character is allowed.", + "documentation":"A string represents the budget name. No \":\" and \"\\\" character is allowed.", "max":100, - "pattern":"[^:]+" + "pattern":"[^:\\\\]+" }, "BudgetType":{ "type":"string", - "documentation":"The type of a budget. Can be COST or USAGE.", + "documentation":"The type of a budget. It should be COST, USAGE, or RI_UTILIZATION.", "enum":[ "USAGE", - "COST" + "COST", + "RI_UTILIZATION" ] }, "Budgets":{ @@ -274,7 +277,7 @@ "ActualSpend":{"shape":"Spend"}, "ForecastedSpend":{"shape":"Spend"} }, - "documentation":"A structure holds the actual and forecasted spend for a budget." + "documentation":"A structure that holds the actual and forecasted spend for a budget." }, "ComparisonOperator":{ "type":"string", @@ -289,7 +292,7 @@ "type":"map", "key":{"shape":"GenericString"}, "value":{"shape":"DimensionValues"}, - "documentation":"A map represents the cost filters applied to the budget." + "documentation":"A map that represents the cost filters applied to the budget." }, "CostTypes":{ "type":"structure", @@ -577,7 +580,7 @@ }, "MaxResults":{ "type":"integer", - "documentation":"An integer to represent how many entries should a pagianted response contains. Maxium is set to 100.", + "documentation":"An integer to represent how many entries a paginated response contains. Maximum is set to 100.", "box":true, "max":100, "min":1 @@ -600,14 +603,15 @@ "members":{ "NotificationType":{"shape":"NotificationType"}, "ComparisonOperator":{"shape":"ComparisonOperator"}, - "Threshold":{"shape":"NotificationThreshold"} + "Threshold":{"shape":"NotificationThreshold"}, + "ThresholdType":{"shape":"ThresholdType"} }, "documentation":"Notification model. Each budget may contain multiple notifications with different settings." }, "NotificationThreshold":{ "type":"double", - "documentation":"The threshold of the a notification. It should be a number between 0 and 100.", - "max":300, + "documentation":"The threshold of a notification. It should be a number between 0 and 1,000,000,000.", + "max":1000000000, "min":0.1 }, "NotificationType":{ @@ -654,9 +658,9 @@ ], "members":{ "Amount":{"shape":"NumericValue"}, - "Unit":{"shape":"GenericString"} + "Unit":{"shape":"UnitValue"} }, - "documentation":"A structure represent either a cost spend or usage spend. Contains an amount and a unit." + "documentation":"A structure that represents either a cost spend or usage spend. Contains an amount and a unit." 
}, "Subscriber":{ "type":"structure", @@ -685,6 +689,14 @@ "EMAIL" ] }, + "ThresholdType":{ + "type":"string", + "documentation":"The type of threshold for a notification. It can be PERCENTAGE or ABSOLUTE_VALUE.", + "enum":[ + "PERCENTAGE", + "ABSOLUTE_VALUE" + ] + }, "TimePeriod":{ "type":"structure", "required":[ @@ -695,17 +707,23 @@ "Start":{"shape":"GenericTimestamp"}, "End":{"shape":"GenericTimestamp"} }, - "documentation":"A time period indicated the start date and end date of a budget." + "documentation":"A time period indicating the start date and end date of a budget." }, "TimeUnit":{ "type":"string", - "documentation":"The time unit of the budget. e.g. weekly, monthly, etc.", + "documentation":"The time unit of the budget. e.g. MONTHLY, QUARTERLY, etc.", "enum":[ + "DAILY", "MONTHLY", "QUARTERLY", "ANNUALLY" ] }, + "UnitValue":{ + "type":"string", + "documentation":"A string to represent budget spend unit. It should be not null and not empty.", + "min":1 + }, "UpdateBudgetRequest":{ "type":"structure", "required":[ diff --git a/services/clouddirectory/src/main/resources/codegen-resources/service-2.json b/services/clouddirectory/src/main/resources/codegen-resources/service-2.json index 0b1c1a2c2f9b..358646984839 100644 --- a/services/clouddirectory/src/main/resources/codegen-resources/service-2.json +++ b/services/clouddirectory/src/main/resources/codegen-resources/service-2.json @@ -141,6 +141,7 @@ {"shape":"ValidationException"}, {"shape":"LimitExceededException"}, {"shape":"AccessDeniedException"}, + {"shape":"DirectoryNotEnabledException"}, {"shape":"ResourceNotFoundException"}, {"shape":"InvalidAttachmentException"}, {"shape":"ValidationException"}, @@ -517,6 +518,7 @@ {"shape":"ValidationException"}, {"shape":"LimitExceededException"}, {"shape":"AccessDeniedException"}, + {"shape":"DirectoryNotEnabledException"}, {"shape":"ResourceNotFoundException"}, {"shape":"FacetValidationException"} ], @@ -810,6 +812,7 @@ {"shape":"ValidationException"}, {"shape":"LimitExceededException"}, {"shape":"AccessDeniedException"}, + {"shape":"DirectoryNotEnabledException"}, {"shape":"ResourceNotFoundException"}, {"shape":"InvalidNextTokenException"}, {"shape":"FacetValidationException"} @@ -970,6 +973,7 @@ {"shape":"ValidationException"}, {"shape":"LimitExceededException"}, {"shape":"AccessDeniedException"}, + {"shape":"DirectoryNotEnabledException"}, {"shape":"ResourceNotFoundException"}, {"shape":"InvalidNextTokenException"}, {"shape":"FacetValidationException"} @@ -1657,7 +1661,7 @@ "documentation":"

The name of the link.

" } }, - "documentation":"

Represents the output of an AttachObject operation.

" + "documentation":"

Represents the output of an AttachObject operation.

" }, "BatchAttachObjectResponse":{ "type":"structure", @@ -1667,7 +1671,137 @@ "documentation":"

The ObjectIdentifier of the object that has been attached.

" } }, - "documentation":"

Represents the output batch AttachObject response operation.

" + "documentation":"

Represents the output of a batch AttachObject response operation.

" + }, + "BatchAttachPolicy":{ + "type":"structure", + "required":[ + "PolicyReference", + "ObjectReference" + ], + "members":{ + "PolicyReference":{ + "shape":"ObjectReference", + "documentation":"

The reference that is associated with the policy object.

" + }, + "ObjectReference":{ + "shape":"ObjectReference", + "documentation":"

The reference that identifies the object to which the policy will be attached.

" + } + }, + "documentation":"

Attaches a policy object to a regular object inside a BatchWrite operation. For more information, see AttachPolicy and BatchWriteRequest$Operations.

" + }, + "BatchAttachPolicyResponse":{ + "type":"structure", + "members":{ + }, + "documentation":"

Represents the output of an AttachPolicy response operation.

" + }, + "BatchAttachToIndex":{ + "type":"structure", + "required":[ + "IndexReference", + "TargetReference" + ], + "members":{ + "IndexReference":{ + "shape":"ObjectReference", + "documentation":"

A reference to the index that you are attaching the object to.

" + }, + "TargetReference":{ + "shape":"ObjectReference", + "documentation":"

A reference to the object that you are attaching to the index.

" + } + }, + "documentation":"

Attaches the specified object to the specified index inside a BatchWrite operation. For more information, see AttachToIndex and BatchWriteRequest$Operations.
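Assuming the BatchWrite request wraps this shape in an AttachToIndex member (mirroring the shape names used here) and that object references use the usual Selector field, a single operation entry might look like the following sketch with hypothetical selectors:

    {
      "AttachToIndex": {
        "IndexReference": { "Selector": "/indexes/my-index" },
        "TargetReference": { "Selector": "/users/alice" }
      }
    }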

" + }, + "BatchAttachToIndexResponse":{ + "type":"structure", + "members":{ + "AttachedObjectIdentifier":{ + "shape":"ObjectIdentifier", + "documentation":"

The ObjectIdentifier of the object that was attached to the index.

" + } + }, + "documentation":"

Represents the output of an AttachToIndex response operation.

" + }, + "BatchAttachTypedLink":{ + "type":"structure", + "required":[ + "SourceObjectReference", + "TargetObjectReference", + "TypedLinkFacet", + "Attributes" + ], + "members":{ + "SourceObjectReference":{ + "shape":"ObjectReference", + "documentation":"

Identifies the source object that the typed link will attach to.

" + }, + "TargetObjectReference":{ + "shape":"ObjectReference", + "documentation":"

Identifies the target object that the typed link will attach to.

" + }, + "TypedLinkFacet":{ + "shape":"TypedLinkSchemaAndFacetName", + "documentation":"

Identifies the typed link facet that is associated with the typed link.

" + }, + "Attributes":{ + "shape":"AttributeNameAndValueList", + "documentation":"

A set of attributes that are associated with the typed link.

" + } + }, + "documentation":"

Attaches a typed link to a specified source and target object inside a BatchWrite operation. For more information, see AttachTypedLink and BatchWriteRequest$Operations.

" + }, + "BatchAttachTypedLinkResponse":{ + "type":"structure", + "members":{ + "TypedLinkSpecifier":{ + "shape":"TypedLinkSpecifier", + "documentation":"

Returns a typed link specifier as output.

" + } + }, + "documentation":"

Represents the output of an AttachTypedLink response operation.

" + }, + "BatchCreateIndex":{ + "type":"structure", + "required":[ + "OrderedIndexedAttributeList", + "IsUnique" + ], + "members":{ + "OrderedIndexedAttributeList":{ + "shape":"AttributeKeyList", + "documentation":"

Specifies the attributes that should be indexed on. Currently only a single attribute is supported.

" + }, + "IsUnique":{ + "shape":"Bool", + "documentation":"

Indicates whether the attribute that is being indexed has unique values or not.

" + }, + "ParentReference":{ + "shape":"ObjectReference", + "documentation":"

A reference to the parent object that contains the index object.

" + }, + "LinkName":{ + "shape":"LinkName", + "documentation":"

The name of the link between the parent object and the index object.

" + }, + "BatchReferenceName":{ + "shape":"BatchReferenceName", + "documentation":"

The batch reference name. See Batches for more information.

" + } + }, + "documentation":"

Creates an index object inside a BatchWrite operation. For more information, see CreateIndex and BatchWriteRequest$Operations.

" + }, + "BatchCreateIndexResponse":{ + "type":"structure", + "members":{ + "ObjectIdentifier":{ + "shape":"ObjectIdentifier", + "documentation":"

The ObjectIdentifier of the index created by this operation.

" + } + }, + "documentation":"

Represents the output of a CreateIndex response operation.

" }, "BatchCreateObject":{ "type":"structure", @@ -1700,7 +1834,7 @@ "documentation":"

The batch reference name. See Batches for more information.

" } }, - "documentation":"

Represents the output of a CreateObject operation.

" + "documentation":"

Represents the output of a CreateObject operation.

" }, "BatchCreateObjectResponse":{ "type":"structure", @@ -1710,7 +1844,7 @@ "documentation":"

The ID that is associated with the object.

" } }, - "documentation":"

Represents the output of a CreateObject response operation.

" + "documentation":"

Represents the output of a CreateObject response operation.

" }, "BatchDeleteObject":{ "type":"structure", @@ -1721,13 +1855,41 @@ "documentation":"

The reference that identifies the object.

" } }, - "documentation":"

Represents the output of a DeleteObject operation.

" + "documentation":"

Represents the output of a DeleteObject operation.

" }, "BatchDeleteObjectResponse":{ "type":"structure", "members":{ }, - "documentation":"

Represents the output of a DeleteObject response operation.

" + "documentation":"

Represents the output of a DeleteObject response operation.

" + }, + "BatchDetachFromIndex":{ + "type":"structure", + "required":[ + "IndexReference", + "TargetReference" + ], + "members":{ + "IndexReference":{ + "shape":"ObjectReference", + "documentation":"

A reference to the index object.

" + }, + "TargetReference":{ + "shape":"ObjectReference", + "documentation":"

A reference to the object being detached from the index.

" + } + }, + "documentation":"

Detaches the specified object from the specified index inside a BatchWrite operation. For more information, see DetachFromIndex and BatchWriteRequest$Operations.

" + }, + "BatchDetachFromIndexResponse":{ + "type":"structure", + "members":{ + "DetachedObjectIdentifier":{ + "shape":"ObjectIdentifier", + "documentation":"

The ObjectIdentifier of the object that was detached from the index.

" + } + }, + "documentation":"

Represents the output of a DetachFromIndex response operation.

" }, "BatchDetachObject":{ "type":"structure", @@ -1750,7 +1912,7 @@ "documentation":"

The batch reference name. See Batches for more information.

" } }, - "documentation":"

Represents the output of a DetachObject operation.

" + "documentation":"

Represents the output of a DetachObject operation.

" }, "BatchDetachObjectResponse":{ "type":"structure", @@ -1760,7 +1922,184 @@ "documentation":"

The ObjectIdentifier of the detached object.

" } }, - "documentation":"

Represents the output of a DetachObject response operation.

" + "documentation":"

Represents the output of a DetachObject response operation.

" + }, + "BatchDetachPolicy":{ + "type":"structure", + "required":[ + "PolicyReference", + "ObjectReference" + ], + "members":{ + "PolicyReference":{ + "shape":"ObjectReference", + "documentation":"

Reference that identifies the policy object.

" + }, + "ObjectReference":{ + "shape":"ObjectReference", + "documentation":"

Reference that identifies the object whose policy object will be detached.

" + } + }, + "documentation":"

Detaches the specified policy from the specified directory inside a BatchWrite operation. For more information, see DetachPolicy and BatchWriteRequest$Operations.

" + }, + "BatchDetachPolicyResponse":{ + "type":"structure", + "members":{ + }, + "documentation":"

Represents the output of a DetachPolicy response operation.

" + }, + "BatchDetachTypedLink":{ + "type":"structure", + "required":["TypedLinkSpecifier"], + "members":{ + "TypedLinkSpecifier":{ + "shape":"TypedLinkSpecifier", + "documentation":"

Used to accept a typed link specifier as input.

" + } + }, + "documentation":"

Detaches a typed link from a specified source and target object inside a BatchWrite operation. For more information, see DetachTypedLink and BatchWriteRequest$Operations.

" + }, + "BatchDetachTypedLinkResponse":{ + "type":"structure", + "members":{ + }, + "documentation":"

Represents the output of a DetachTypedLink response operation.

" + }, + "BatchGetObjectInformation":{ + "type":"structure", + "required":["ObjectReference"], + "members":{ + "ObjectReference":{ + "shape":"ObjectReference", + "documentation":"

A reference to the object.

" + } + }, + "documentation":"

Retrieves metadata about an object inside a BatchRead operation. For more information, see GetObjectInformation and BatchReadRequest$Operations.

" + }, + "BatchGetObjectInformationResponse":{ + "type":"structure", + "members":{ + "SchemaFacets":{ + "shape":"SchemaFacetList", + "documentation":"

The facets attached to the specified object.

" + }, + "ObjectIdentifier":{ + "shape":"ObjectIdentifier", + "documentation":"

The ObjectIdentifier of the specified object.

" + } + }, + "documentation":"

Represents the output of a GetObjectInformation response operation.

" + }, + "BatchListAttachedIndices":{ + "type":"structure", + "required":["TargetReference"], + "members":{ + "TargetReference":{ + "shape":"ObjectReference", + "documentation":"

A reference to the object that has indices attached.

" + }, + "NextToken":{ + "shape":"NextToken", + "documentation":"

The pagination token.

" + }, + "MaxResults":{ + "shape":"NumberResults", + "documentation":"

The maximum number of results to retrieve.

" + } + }, + "documentation":"

Lists indices attached to an object inside a BatchRead operation. For more information, see ListAttachedIndices and BatchReadRequest$Operations.

" + }, + "BatchListAttachedIndicesResponse":{ + "type":"structure", + "members":{ + "IndexAttachments":{ + "shape":"IndexAttachmentList", + "documentation":"

The indices attached to the specified object.

" + }, + "NextToken":{ + "shape":"NextToken", + "documentation":"

The pagination token.

" + } + }, + "documentation":"

Represents the output of a ListAttachedIndices response operation.

" + }, + "BatchListIncomingTypedLinks":{ + "type":"structure", + "required":["ObjectReference"], + "members":{ + "ObjectReference":{ + "shape":"ObjectReference", + "documentation":"

The reference that identifies the object whose attributes will be listed.

" + }, + "FilterAttributeRanges":{ + "shape":"TypedLinkAttributeRangeList", + "documentation":"

Provides range filters for multiple attributes. When providing ranges to typed link selection, any inexact ranges must be specified at the end. Any attributes that do not have a range specified are presumed to match the entire range.

" + }, + "FilterTypedLink":{ + "shape":"TypedLinkSchemaAndFacetName", + "documentation":"

Filters are interpreted in the order of the attributes on the typed link facet, not the order in which they are supplied to any API calls.

" + }, + "NextToken":{ + "shape":"NextToken", + "documentation":"

The pagination token.

" + }, + "MaxResults":{ + "shape":"NumberResults", + "documentation":"

The maximum number of results to retrieve.

" + } + }, + "documentation":"

Returns a paginated list of all the incoming TypedLinkSpecifier information for an object inside a BatchRead operation. For more information, see ListIncomingTypedLinks and BatchReadRequest$Operations.

" + }, + "BatchListIncomingTypedLinksResponse":{ + "type":"structure", + "members":{ + "LinkSpecifiers":{ + "shape":"TypedLinkSpecifierList", + "documentation":"

Returns one or more typed link specifiers as output.

" + }, + "NextToken":{ + "shape":"NextToken", + "documentation":"

The pagination token.

" + } + }, + "documentation":"

Represents the output of a ListIncomingTypedLinks response operation.

" + }, + "BatchListIndex":{ + "type":"structure", + "required":["IndexReference"], + "members":{ + "RangesOnIndexedValues":{ + "shape":"ObjectAttributeRangeList", + "documentation":"

Specifies the ranges of indexed values that you want to query.

" + }, + "IndexReference":{ + "shape":"ObjectReference", + "documentation":"

The reference to the index to list.

" + }, + "MaxResults":{ + "shape":"NumberResults", + "documentation":"

The maximum number of results to retrieve.

" + }, + "NextToken":{ + "shape":"NextToken", + "documentation":"

The pagination token.

" + } + }, + "documentation":"

Lists objects attached to the specified index inside a BatchRead operation. For more information, see ListIndex and BatchReadRequest$Operations.

" + }, + "BatchListIndexResponse":{ + "type":"structure", + "members":{ + "IndexAttachments":{ + "shape":"IndexAttachmentList", + "documentation":"

The objects and indexed values attached to the index.

" + }, + "NextToken":{ + "shape":"NextToken", + "documentation":"

The pagination token.

" + } + }, + "documentation":"

Represents the output of a ListIndex response operation.

" }, "BatchListObjectAttributes":{ "type":"structure", @@ -1783,7 +2122,7 @@ "documentation":"

Used to filter the list of object attributes that are associated with a certain facet.

" } }, - "documentation":"

Represents the output of a ListObjectAttributes operation.

" + "documentation":"

Represents the output of a ListObjectAttributes operation.

" }, "BatchListObjectAttributesResponse":{ "type":"structure", @@ -1797,7 +2136,7 @@ "documentation":"

The pagination token.

" } }, - "documentation":"

Represents the output of a ListObjectAttributes response operation.

" + "documentation":"

Represents the output of a ListObjectAttributes response operation.

" }, "BatchListObjectChildren":{ "type":"structure", @@ -1816,7 +2155,7 @@ "documentation":"

Maximum number of items to be retrieved in a single call. This is an approximate number.

" } }, - "documentation":"

Represents the output of a ListObjectChildren operation.

" + "documentation":"

Represents the output of a ListObjectChildren operation.

" }, "BatchListObjectChildrenResponse":{ "type":"structure", @@ -1830,7 +2169,180 @@ "documentation":"

The pagination token.

" } }, - "documentation":"

Represents the output of a ListObjectChildren response operation.

" + "documentation":"

Represents the output of a ListObjectChildren response operation.

" + }, + "BatchListObjectParentPaths":{ + "type":"structure", + "required":["ObjectReference"], + "members":{ + "ObjectReference":{ + "shape":"ObjectReference", + "documentation":"

The reference that identifies the object whose attributes will be listed.

" + }, + "NextToken":{ + "shape":"NextToken", + "documentation":"

The pagination token.

" + }, + "MaxResults":{ + "shape":"NumberResults", + "documentation":"

The maximum number of results to retrieve.

" + } + }, + "documentation":"

Retrieves all available parent paths for any object type such as node, leaf node, policy node, and index node objects inside a BatchRead operation. For more information, see ListObjectParentPaths and BatchReadRequest$Operations.

" + }, + "BatchListObjectParentPathsResponse":{ + "type":"structure", + "members":{ + "PathToObjectIdentifiersList":{ + "shape":"PathToObjectIdentifiersList", + "documentation":"

Returns the path to the ObjectIdentifiers that are associated with the directory.

" + }, + "NextToken":{ + "shape":"NextToken", + "documentation":"

The pagination token.

" + } + }, + "documentation":"

Represents the output of a ListObjectParentPaths response operation.

" + }, + "BatchListObjectPolicies":{ + "type":"structure", + "required":["ObjectReference"], + "members":{ + "ObjectReference":{ + "shape":"ObjectReference", + "documentation":"

The reference that identifies the object whose attributes will be listed.

" + }, + "NextToken":{ + "shape":"NextToken", + "documentation":"

The pagination token.

" + }, + "MaxResults":{ + "shape":"NumberResults", + "documentation":"

The maximum number of results to retrieve.

" + } + }, + "documentation":"

Returns policies attached to an object in a paginated fashion inside a BatchRead operation. For more information, see ListObjectPolicies and BatchReadRequest$Operations.

" + }, + "BatchListObjectPoliciesResponse":{ + "type":"structure", + "members":{ + "AttachedPolicyIds":{ + "shape":"ObjectIdentifierList", + "documentation":"

A list of policy ObjectIdentifiers that are attached to the object.

" + }, + "NextToken":{ + "shape":"NextToken", + "documentation":"

The pagination token.

" + } + }, + "documentation":"

Represents the output of a ListObjectPolicies response operation.

" + }, + "BatchListOutgoingTypedLinks":{ + "type":"structure", + "required":["ObjectReference"], + "members":{ + "ObjectReference":{ + "shape":"ObjectReference", + "documentation":"

The reference that identifies the object whose attributes will be listed.

" + }, + "FilterAttributeRanges":{ + "shape":"TypedLinkAttributeRangeList", + "documentation":"

Provides range filters for multiple attributes. When providing ranges to typed link selection, any inexact ranges must be specified at the end. Any attributes that do not have a range specified are presumed to match the entire range.

" + }, + "FilterTypedLink":{ + "shape":"TypedLinkSchemaAndFacetName", + "documentation":"

Filters are interpreted in the order of the attributes defined on the typed link facet, not the order in which they are supplied to any API calls.

" + }, + "NextToken":{ + "shape":"NextToken", + "documentation":"

The pagination token.

" + }, + "MaxResults":{ + "shape":"NumberResults", + "documentation":"

The maximum number of results to retrieve.

" + } + }, + "documentation":"

Returns a paginated list of all the outgoing TypedLinkSpecifier information for an object inside a BatchRead operation. For more information, see ListOutgoingTypedLinks and BatchReadRequest$Operations.

" + }, + "BatchListOutgoingTypedLinksResponse":{ + "type":"structure", + "members":{ + "TypedLinkSpecifiers":{ + "shape":"TypedLinkSpecifierList", + "documentation":"

Returns a typed link specifier as output.

" + }, + "NextToken":{ + "shape":"NextToken", + "documentation":"

The pagination token.

" + } + }, + "documentation":"

Represents the output of a ListOutgoingTypedLinks response operation.

" + }, + "BatchListPolicyAttachments":{ + "type":"structure", + "required":["PolicyReference"], + "members":{ + "PolicyReference":{ + "shape":"ObjectReference", + "documentation":"

The reference that identifies the policy object.

" + }, + "NextToken":{ + "shape":"NextToken", + "documentation":"

The pagination token.

" + }, + "MaxResults":{ + "shape":"NumberResults", + "documentation":"

The maximum number of results to retrieve.

" + } + }, + "documentation":"

Returns all of the ObjectIdentifiers to which a given policy is attached inside a BatchRead operation. For more information, see ListPolicyAttachments and BatchReadRequest$Operations.

" + }, + "BatchListPolicyAttachmentsResponse":{ + "type":"structure", + "members":{ + "ObjectIdentifiers":{ + "shape":"ObjectIdentifierList", + "documentation":"

A list of ObjectIdentifiers to which the policy is attached.

" + }, + "NextToken":{ + "shape":"NextToken", + "documentation":"

The pagination token.

" + } + }, + "documentation":"

Represents the output of a ListPolicyAttachments response operation.

" + }, + "BatchLookupPolicy":{ + "type":"structure", + "required":["ObjectReference"], + "members":{ + "ObjectReference":{ + "shape":"ObjectReference", + "documentation":"

The reference that identifies the object whose policies will be looked up.

" + }, + "NextToken":{ + "shape":"NextToken", + "documentation":"

The pagination token.

" + }, + "MaxResults":{ + "shape":"NumberResults", + "documentation":"

The maximum number of results to retrieve.

" + } + }, + "documentation":"

Lists all policies from the root of the Directory to the object specified inside a BatchRead operation. For more information, see LookupPolicy and BatchReadRequest$Operations.

" + }, + "BatchLookupPolicyResponse":{ + "type":"structure", + "members":{ + "PolicyToPathList":{ + "shape":"PolicyToPathList", + "documentation":"

Provides a list of paths to policies. Policies contain PolicyId, ObjectIdentifier, and PolicyType. For more information, see Policies.

" + }, + "NextToken":{ + "shape":"NextToken", + "documentation":"

The pagination token.

" + } + }, + "documentation":"

Represents the output of a LookupPolicy response operation.

" }, "BatchOperationIndex":{"type":"integer"}, "BatchReadException":{ @@ -1855,7 +2367,14 @@ "ResourceNotFoundException", "InvalidNextTokenException", "AccessDeniedException", - "NotNodeException" + "NotNodeException", + "FacetValidationException", + "CannotListParentOfRootException", + "NotIndexException", + "NotPolicyException", + "DirectoryNotEnabledException", + "LimitExceededException", + "InternalServiceException" ] }, "BatchReadOperation":{ @@ -1868,6 +2387,42 @@ "ListObjectChildren":{ "shape":"BatchListObjectChildren", "documentation":"

Returns a paginated list of child objects that are associated with a given object.

" + }, + "ListAttachedIndices":{ + "shape":"BatchListAttachedIndices", + "documentation":"

Lists indices attached to an object.

" + }, + "ListObjectParentPaths":{ + "shape":"BatchListObjectParentPaths", + "documentation":"

Retrieves all available parent paths for any object type such as node, leaf node, policy node, and index node objects. For more information about objects, see Directory Structure.

" + }, + "GetObjectInformation":{ + "shape":"BatchGetObjectInformation", + "documentation":"

Retrieves metadata about an object.

" + }, + "ListObjectPolicies":{ + "shape":"BatchListObjectPolicies", + "documentation":"

Returns policies attached to an object in a paginated fashion.

" + }, + "ListPolicyAttachments":{ + "shape":"BatchListPolicyAttachments", + "documentation":"

Returns all of the ObjectIdentifiers to which a given policy is attached.

" + }, + "LookupPolicy":{ + "shape":"BatchLookupPolicy", + "documentation":"

Lists all policies from the root of the Directory to the object specified. If there are no policies present, an empty list is returned. If policies are present, and if some objects don't have the policies attached, it returns the ObjectIdentifier for such objects. If policies are present, it returns ObjectIdentifier, policyId, and policyType. Paths that don't lead to the root from the target object are ignored. For more information, see Policies.

" + }, + "ListIndex":{ + "shape":"BatchListIndex", + "documentation":"

Lists objects attached to the specified index.

" + }, + "ListOutgoingTypedLinks":{ + "shape":"BatchListOutgoingTypedLinks", + "documentation":"

Returns a paginated list of all the outgoing TypedLinkSpecifier information for an object. It also supports filtering by typed link facet and identity attributes. For more information, see Typed link.

" + }, + "ListIncomingTypedLinks":{ + "shape":"BatchListIncomingTypedLinks", + "documentation":"

Returns a paginated list of all the incoming TypedLinkSpecifier information for an object. It also supports filtering by typed link facet and identity attributes. For more information, see Typed link.

" } }, "documentation":"

Represents the output of a BatchRead operation.

" @@ -1938,6 +2493,42 @@ "ListObjectChildren":{ "shape":"BatchListObjectChildrenResponse", "documentation":"

Returns a paginated list of child objects that are associated with a given object.

" + }, + "GetObjectInformation":{ + "shape":"BatchGetObjectInformationResponse", + "documentation":"

Retrieves metadata about an object.

" + }, + "ListAttachedIndices":{ + "shape":"BatchListAttachedIndicesResponse", + "documentation":"

Lists indices attached to an object.

" + }, + "ListObjectParentPaths":{ + "shape":"BatchListObjectParentPathsResponse", + "documentation":"

Retrieves all available parent paths for any object type such as node, leaf node, policy node, and index node objects. For more information about objects, see Directory Structure.

" + }, + "ListObjectPolicies":{ + "shape":"BatchListObjectPoliciesResponse", + "documentation":"

Returns policies attached to an object in a paginated fashion.

" + }, + "ListPolicyAttachments":{ + "shape":"BatchListPolicyAttachmentsResponse", + "documentation":"

Returns all of the ObjectIdentifiers to which a given policy is attached.

" + }, + "LookupPolicy":{ + "shape":"BatchLookupPolicyResponse", + "documentation":"

Lists all policies from the root of the Directory to the object specified. If there are no policies present, an empty list is returned. If policies are present, and if some objects don't have the policies attached, it returns the ObjectIdentifier for such objects. If policies are present, it returns ObjectIdentifier, policyId, and policyType. Paths that don't lead to the root from the target object are ignored. For more information, see Policies.

" + }, + "ListIndex":{ + "shape":"BatchListIndexResponse", + "documentation":"

Lists objects attached to the specified index.

" + }, + "ListOutgoingTypedLinks":{ + "shape":"BatchListOutgoingTypedLinksResponse", + "documentation":"

Returns a paginated list of all the outgoing TypedLinkSpecifier information for an object. It also supports filtering by typed link facet and identity attributes. For more information, see Typed link.

" + }, + "ListIncomingTypedLinks":{ + "shape":"BatchListIncomingTypedLinksResponse", + "documentation":"

Returns a paginated list of all the incoming TypedLinkSpecifier information for an object. It also supports filtering by typed link facet and identity attributes. For more information, see Typed link.

" } }, "documentation":"

Represents the output of a BatchRead success response operation.

" @@ -2016,7 +2607,15 @@ "FacetValidationException", "ObjectNotDetachedException", "ResourceNotFoundException", - "AccessDeniedException" + "AccessDeniedException", + "InvalidAttachmentException", + "NotIndexException", + "IndexedAttributeMissingException", + "ObjectAlreadyDetachedException", + "NotPolicyException", + "DirectoryNotEnabledException", + "LimitExceededException", + "UnsupportedIndexTypeException" ] }, "BatchWriteOperation":{ @@ -2049,6 +2648,34 @@ "RemoveFacetFromObject":{ "shape":"BatchRemoveFacetFromObject", "documentation":"

A batch operation that removes a facet from an object.

" + }, + "AttachPolicy":{ + "shape":"BatchAttachPolicy", + "documentation":"

Attaches a policy object to a regular object. An object can have a limited number of attached policies.

" + }, + "DetachPolicy":{ + "shape":"BatchDetachPolicy", + "documentation":"

Detaches a policy from a Directory.

" + }, + "CreateIndex":{ + "shape":"BatchCreateIndex", + "documentation":"

Creates an index object. See Indexing for more information.

" + }, + "AttachToIndex":{ + "shape":"BatchAttachToIndex", + "documentation":"

Attaches the specified object to the specified index.

" + }, + "DetachFromIndex":{ + "shape":"BatchDetachFromIndex", + "documentation":"

Detaches the specified object from the specified index.

" + }, + "AttachTypedLink":{ + "shape":"BatchAttachTypedLink", + "documentation":"

Attaches a typed link to a specified source and target object. For more information, see Typed link.

" + }, + "DetachTypedLink":{ + "shape":"BatchDetachTypedLink", + "documentation":"

Detaches a typed link from a specified source and target object. For more information, see Typed link.

" } }, "documentation":"

Represents the output of a BatchWrite operation.

" @@ -2087,6 +2714,34 @@ "RemoveFacetFromObject":{ "shape":"BatchRemoveFacetFromObjectResponse", "documentation":"

The result of a batch remove facet from object operation.

" + }, + "AttachPolicy":{ + "shape":"BatchAttachPolicyResponse", + "documentation":"

Attaches a policy object to a regular object. An object can have a limited number of attached policies.

" + }, + "DetachPolicy":{ + "shape":"BatchDetachPolicyResponse", + "documentation":"

Detaches a policy from a Directory.

" + }, + "CreateIndex":{ + "shape":"BatchCreateIndexResponse", + "documentation":"

Creates an index object. See Indexing for more information.

" + }, + "AttachToIndex":{ + "shape":"BatchAttachToIndexResponse", + "documentation":"

Attaches the specified object to the specified index.

" + }, + "DetachFromIndex":{ + "shape":"BatchDetachFromIndexResponse", + "documentation":"

Detaches the specified object from the specified index.

" + }, + "AttachTypedLink":{ + "shape":"BatchAttachTypedLinkResponse", + "documentation":"

Attaches a typed link to a specified source and target object. For more information, see Typed link.

" + }, + "DetachTypedLink":{ + "shape":"BatchDetachTypedLinkResponse", + "documentation":"

Detaches a typed link from a specified source and target object. For more information, see Typed link.

" } }, "documentation":"

Represents the output of a BatchWrite response operation.

" @@ -3169,7 +3824,7 @@ }, "TargetReference":{ "shape":"ObjectReference", - "documentation":"

A reference to the object to that has indices attached.

" + "documentation":"

A reference to the object that has indices attached.

" }, "NextToken":{ "shape":"NextToken", @@ -4515,7 +5170,7 @@ }, "IdentityAttributeOrder":{ "shape":"AttributeNameList", - "documentation":"

The set of attributes that distinguish links made from this facet from each other, in the order of significance. Listing typed links can filter on the values of these attributes. See ListOutgoingTypedLinks and ListIncomingTypeLinks for details.

" + "documentation":"

The set of attributes that distinguish links made from this facet from each other, in the order of significance. Listing typed links can filter on the values of these attributes. See ListOutgoingTypedLinks and ListIncomingTypedLinks for details.

" } }, "documentation":"

Defines the typed links structure and its attributes. To create a typed link facet, use the CreateTypedLinkFacet API.

" diff --git a/services/cloudformation/src/main/resources/codegen-resources/service-2.json b/services/cloudformation/src/main/resources/codegen-resources/service-2.json index 7a7c1fd86492..d6387bab0c60 100644 --- a/services/cloudformation/src/main/resources/codegen-resources/service-2.json +++ b/services/cloudformation/src/main/resources/codegen-resources/service-2.json @@ -75,6 +75,45 @@ ], "documentation":"

Creates a stack as specified in the template. After the call completes successfully, the stack creation starts. You can check the status of the stack via the DescribeStacks API.

" }, + "CreateStackInstances":{ + "name":"CreateStackInstances", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"CreateStackInstancesInput"}, + "output":{ + "shape":"CreateStackInstancesOutput", + "resultWrapper":"CreateStackInstancesResult" + }, + "errors":[ + {"shape":"StackSetNotFoundException"}, + {"shape":"OperationInProgressException"}, + {"shape":"OperationIdAlreadyExistsException"}, + {"shape":"StaleRequestException"}, + {"shape":"InvalidOperationException"}, + {"shape":"LimitExceededException"} + ], + "documentation":"

Creates stack instances for the specified accounts, within the specified regions. A stack instance refers to a stack in a specific account and region. Accounts and Regions are required parameters—you must specify at least one account and one region.

" + }, + "CreateStackSet":{ + "name":"CreateStackSet", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"CreateStackSetInput"}, + "output":{ + "shape":"CreateStackSetOutput", + "resultWrapper":"CreateStackSetResult" + }, + "errors":[ + {"shape":"NameAlreadyExistsException"}, + {"shape":"CreatedButModifiedException"}, + {"shape":"LimitExceededException"} + ], + "documentation":"

Creates a stack set.

" + }, "DeleteChangeSet":{ "name":"DeleteChangeSet", "http":{ @@ -103,6 +142,43 @@ ], "documentation":"

Deletes a specified stack. Once the call completes successfully, stack deletion starts. Deleted stacks do not show up in the DescribeStacks API if the deletion has been completed successfully.

" }, + "DeleteStackInstances":{ + "name":"DeleteStackInstances", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"DeleteStackInstancesInput"}, + "output":{ + "shape":"DeleteStackInstancesOutput", + "resultWrapper":"DeleteStackInstancesResult" + }, + "errors":[ + {"shape":"StackSetNotFoundException"}, + {"shape":"OperationInProgressException"}, + {"shape":"OperationIdAlreadyExistsException"}, + {"shape":"StaleRequestException"}, + {"shape":"InvalidOperationException"} + ], + "documentation":"

Deletes stack instances for the specified accounts, in the specified regions.

" + }, + "DeleteStackSet":{ + "name":"DeleteStackSet", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"DeleteStackSetInput"}, + "output":{ + "shape":"DeleteStackSetOutput", + "resultWrapper":"DeleteStackSetResult" + }, + "errors":[ + {"shape":"StackSetNotEmptyException"}, + {"shape":"OperationInProgressException"} + ], + "documentation":"

Deletes a stack set. Before you can delete a stack set, all of its member stack instances must be deleted. For more information about how to do this, see DeleteStackInstances.

" + }, "DescribeAccountLimits":{ "name":"DescribeAccountLimits", "http":{ @@ -145,6 +221,23 @@ }, "documentation":"

Returns all stack related events for a specified stack in reverse chronological order. For more information about a stack's event history, go to Stacks in the AWS CloudFormation User Guide.

You can list events for stacks that have failed to create or have been deleted by specifying the unique stack identifier (stack ID).

" }, + "DescribeStackInstance":{ + "name":"DescribeStackInstance", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"DescribeStackInstanceInput"}, + "output":{ + "shape":"DescribeStackInstanceOutput", + "resultWrapper":"DescribeStackInstanceResult" + }, + "errors":[ + {"shape":"StackSetNotFoundException"}, + {"shape":"StackInstanceNotFoundException"} + ], + "documentation":"

Returns the stack instance that's associated with the specified stack set, AWS account, and region.

For a list of stack instances that are associated with a specific stack set, use ListStackInstances.

" + }, "DescribeStackResource":{ "name":"DescribeStackResource", "http":{ @@ -171,6 +264,39 @@ }, "documentation":"

Returns AWS resource descriptions for running and deleted stacks. If StackName is specified, all the associated resources that are part of the stack are returned. If PhysicalResourceId is specified, the associated resources of the stack that the resource belongs to are returned.

Only the first 100 resources will be returned. If your stack has more resources than this, you should use ListStackResources instead.

For deleted stacks, DescribeStackResources returns resource information for up to 90 days after the stack has been deleted.

You must specify either StackName or PhysicalResourceId, but not both. In addition, you can specify LogicalResourceId to filter the returned result. For more information about resources, the LogicalResourceId and PhysicalResourceId, go to the AWS CloudFormation User Guide.

A ValidationError is returned if you specify both StackName and PhysicalResourceId in the same request.

" }, + "DescribeStackSet":{ + "name":"DescribeStackSet", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"DescribeStackSetInput"}, + "output":{ + "shape":"DescribeStackSetOutput", + "resultWrapper":"DescribeStackSetResult" + }, + "errors":[ + {"shape":"StackSetNotFoundException"} + ], + "documentation":"

Returns the description of the specified stack set.

" + }, + "DescribeStackSetOperation":{ + "name":"DescribeStackSetOperation", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"DescribeStackSetOperationInput"}, + "output":{ + "shape":"DescribeStackSetOperationOutput", + "resultWrapper":"DescribeStackSetOperationResult" + }, + "errors":[ + {"shape":"StackSetNotFoundException"}, + {"shape":"OperationNotFoundException"} + ], + "documentation":"

Returns the description of the specified stack set operation.

" + }, "DescribeStacks":{ "name":"DescribeStacks", "http":{ @@ -256,7 +382,10 @@ "shape":"GetTemplateSummaryOutput", "resultWrapper":"GetTemplateSummaryResult" }, - "documentation":"

Returns information about a new or existing template. The GetTemplateSummary action is useful for viewing parameter information, such as default parameter values and parameter types, before you create or update a stack.

You can use the GetTemplateSummary action when you submit a template, or you can get template information for a running or deleted stack.

For deleted stacks, GetTemplateSummary returns the template information for up to 90 days after the stack has been deleted. If the template does not exist, a ValidationError is returned.

" + "errors":[ + {"shape":"StackSetNotFoundException"} + ], + "documentation":"

Returns information about a new or existing template. The GetTemplateSummary action is useful for viewing parameter information, such as default parameter values and parameter types, before you create or update a stack or stack set.

You can use the GetTemplateSummary action when you submit a template, or you can get template information for a stack set, or a running or deleted stack.

For deleted stacks, GetTemplateSummary returns the template information for up to 90 days after the stack has been deleted. If the template does not exist, a ValidationError is returned.

" }, "ListChangeSets":{ "name":"ListChangeSets", @@ -297,6 +426,22 @@ }, "documentation":"

Lists all stacks that are importing an exported output value. To modify or remove an exported output value, first use this action to see which stacks are using it. To see the exported output values in your account, see ListExports.

For more information about importing an exported output value, see the Fn::ImportValue function.

" }, + "ListStackInstances":{ + "name":"ListStackInstances", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"ListStackInstancesInput"}, + "output":{ + "shape":"ListStackInstancesOutput", + "resultWrapper":"ListStackInstancesResult" + }, + "errors":[ + {"shape":"StackSetNotFoundException"} + ], + "documentation":"

Returns summary information about stack instances that are associated with the specified stack set. You can filter for stack instances that are associated with a specific AWS account name or region.

" + }, "ListStackResources":{ "name":"ListStackResources", "http":{ @@ -310,6 +455,52 @@ }, "documentation":"

Returns descriptions of all resources of the specified stack.

For deleted stacks, ListStackResources returns resource information for up to 90 days after the stack has been deleted.

" }, + "ListStackSetOperationResults":{ + "name":"ListStackSetOperationResults", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"ListStackSetOperationResultsInput"}, + "output":{ + "shape":"ListStackSetOperationResultsOutput", + "resultWrapper":"ListStackSetOperationResultsResult" + }, + "errors":[ + {"shape":"StackSetNotFoundException"}, + {"shape":"OperationNotFoundException"} + ], + "documentation":"

Returns summary information about the results of a stack set operation.

" + }, + "ListStackSetOperations":{ + "name":"ListStackSetOperations", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"ListStackSetOperationsInput"}, + "output":{ + "shape":"ListStackSetOperationsOutput", + "resultWrapper":"ListStackSetOperationsResult" + }, + "errors":[ + {"shape":"StackSetNotFoundException"} + ], + "documentation":"

Returns summary information about operations performed on a stack set.

" + }, + "ListStackSets":{ + "name":"ListStackSets", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"ListStackSetsInput"}, + "output":{ + "shape":"ListStackSetsOutput", + "resultWrapper":"ListStackSetsResult" + }, + "documentation":"

Returns summary information about stack sets that are associated with the user.

" + }, "ListStacks":{ "name":"ListStacks", "http":{ @@ -341,6 +532,24 @@ "input":{"shape":"SignalResourceInput"}, "documentation":"

Sends a signal to the specified resource with a success or failure status. You can use the SignalResource API in conjunction with a creation policy or update policy. AWS CloudFormation doesn't proceed with a stack creation or update until resources receive the required number of signals or the timeout period is exceeded. The SignalResource API is useful in cases where you want to send signals from anywhere other than an Amazon EC2 instance.

" }, + "StopStackSetOperation":{ + "name":"StopStackSetOperation", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"StopStackSetOperationInput"}, + "output":{ + "shape":"StopStackSetOperationOutput", + "resultWrapper":"StopStackSetOperationResult" + }, + "errors":[ + {"shape":"StackSetNotFoundException"}, + {"shape":"OperationNotFoundException"}, + {"shape":"InvalidOperationException"} + ], + "documentation":"

Stops an in-progress operation on a stack set and its associated stack instances.

" + }, "UpdateStack":{ "name":"UpdateStack", "http":{ @@ -358,6 +567,39 @@ ], "documentation":"

Updates a stack as specified in the template. After the call completes successfully, the stack update starts. You can check the status of the stack via the DescribeStacks action.

To get a copy of the template for an existing stack, you can use the GetTemplate action.

For more information about creating an update template, updating a stack, and monitoring the progress of the update, see Updating a Stack.

" }, + "UpdateStackSet":{ + "name":"UpdateStackSet", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"UpdateStackSetInput"}, + "output":{ + "shape":"UpdateStackSetOutput", + "resultWrapper":"UpdateStackSetResult" + }, + "errors":[ + {"shape":"StackSetNotFoundException"}, + {"shape":"OperationInProgressException"}, + {"shape":"OperationIdAlreadyExistsException"}, + {"shape":"StaleRequestException"}, + {"shape":"InvalidOperationException"} + ], + "documentation":"

Updates the stack set and all associated stack instances.

Even if the stack set operation created by updating the stack set fails (completely or partially, below or above a specified failure tolerance), the stack set is updated with your changes. Subsequent CreateStackInstances calls on the specified stack set use the updated stack set.

" + }, + "UpdateTerminationProtection":{ + "name":"UpdateTerminationProtection", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"UpdateTerminationProtectionInput"}, + "output":{ + "shape":"UpdateTerminationProtectionOutput", + "resultWrapper":"UpdateTerminationProtectionResult" + }, + "documentation":"

Updates termination protection for the specified stack. If a user attempts to delete a stack with termination protection enabled, the operation fails and the stack remains unchanged. For more information, see Protecting a Stack From Being Deleted in the AWS CloudFormation User Guide.

For nested stacks, termination protection is set on the root stack and cannot be changed directly on the nested stack.

" + }, "ValidateTemplate":{ "name":"ValidateTemplate", "http":{ @@ -373,6 +615,33 @@ } }, "shapes":{ + "Account":{ + "type":"string", + "pattern":"[0-9]{12}" + }, + "AccountGateResult":{ + "type":"structure", + "members":{ + "Status":{ + "shape":"AccountGateStatus", + "documentation":"

The status of the account gate function.

" + }, + "StatusReason":{ + "shape":"AccountGateStatusReason", + "documentation":"

The reason for the account gate status assigned to this account and region for the stack set operation.

" + } + }, + "documentation":"

Structure that contains the results of the account gate function which AWS CloudFormation invokes, if present, before proceeding with a stack set operation in an account and region.

For each account and region, AWS CloudFormation lets you specify a Lambda function that encapsulates any requirements that must be met before CloudFormation can proceed with a stack set operation in that account and region. CloudFormation invokes the function each time a stack set operation is requested for that account and region; if the function returns FAILED, CloudFormation cancels the operation in that account and region, and sets the stack set operation result status for that account and region to FAILED.

For more information, see Configuring a target account gate.

" + }, + "AccountGateStatus":{ + "type":"string", + "enum":[ + "SUCCEEDED", + "FAILED", + "SKIPPED" + ] + }, + "AccountGateStatusReason":{"type":"string"}, "AccountLimit":{ "type":"structure", "members":{ @@ -391,6 +660,10 @@ "type":"list", "member":{"shape":"AccountLimit"} }, + "AccountList":{ + "type":"list", + "member":{"shape":"Account"} + }, "AllowedValue":{"type":"string"}, "AllowedValues":{ "type":"list", @@ -400,7 +673,7 @@ "type":"structure", "members":{ }, - "documentation":"

Resource with the name requested already exists.

", + "documentation":"

The resource with the name requested already exists.

", "error":{ "code":"AlreadyExistsException", "httpStatusCode":400, @@ -408,6 +681,7 @@ }, "exception":true }, + "Arn":{"type":"string"}, "CancelUpdateStackInput":{ "type":"structure", "required":["StackName"], @@ -573,7 +847,7 @@ "type":"string", "max":128, "min":1, - "pattern":"[a-zA-Z][-a-zA-Z0-9]*" + "pattern":"[a-zA-Z0-9][-a-zA-Z0-9]*" }, "ClientToken":{ "type":"string", @@ -594,7 +868,7 @@ }, "ResourcesToSkip":{ "shape":"ResourcesToSkip", - "documentation":"

A list of the logical IDs of the resources that AWS CloudFormation skips during the continue update rollback operation. You can specify only resources that are in the UPDATE_FAILED state because a rollback failed. You can't specify resources that are in the UPDATE_FAILED state for other reasons, for example, because an update was canceled. To check why a resource update failed, use the DescribeStackResources action, and view the resource status reason.

Specify this property to skip rolling back resources that AWS CloudFormation can't successfully roll back. We recommend that you troubleshoot resources before skipping them. AWS CloudFormation sets the status of the specified resources to UPDATE_COMPLETE and continues to roll back the stack. After the rollback is complete, the state of the skipped resources will be inconsistent with the state of the resources in the stack template. Before performing another stack update, you must update the stack or resources to be consistent with each other. If you don't, subsequent stack updates might fail, and the stack will become unrecoverable.

Specify the minimum number of resources required to successfully roll back your stack. For example, a failed resource update might cause dependent resources to fail. In this case, it might not be necessary to skip the dependent resources.

To specify resources in a nested stack, use the following format: NestedStackName.ResourceLogicalID. If the ResourceLogicalID is a stack resource (Type: AWS::CloudFormation::Stack), it must be in one of the following states: DELETE_IN_PROGRESS, DELETE_COMPLETE, or DELETE_FAILED.

" + "documentation":"

A list of the logical IDs of the resources that AWS CloudFormation skips during the continue update rollback operation. You can specify only resources that are in the UPDATE_FAILED state because a rollback failed. You can't specify resources that are in the UPDATE_FAILED state for other reasons, for example, because an update was cancelled. To check why a resource update failed, use the DescribeStackResources action, and view the resource status reason.

Specify this property to skip rolling back resources that AWS CloudFormation can't successfully roll back. We recommend that you troubleshoot resources before skipping them. AWS CloudFormation sets the status of the specified resources to UPDATE_COMPLETE and continues to roll back the stack. After the rollback is complete, the state of the skipped resources will be inconsistent with the state of the resources in the stack template. Before performing another stack update, you must update the stack or resources to be consistent with each other. If you don't, subsequent stack updates might fail, and the stack will become unrecoverable.

Specify the minimum number of resources required to successfully roll back your stack. For example, a failed resource update might cause dependent resources to fail. In this case, it might not be necessary to skip the dependent resources.

To skip resources that are part of nested stacks, use the following format: NestedStackName.ResourceLogicalID. If you want to specify the logical ID of a stack resource (Type: AWS::CloudFormation::Stack) in the ResourcesToSkip list, then its corresponding embedded stack must be in one of the following states: DELETE_IN_PROGRESS, DELETE_COMPLETE, or DELETE_FAILED.

Don't confuse a child stack's name with its corresponding logical ID defined in the parent stack. For an example of a continue update rollback operation with nested stacks, see Using ResourcesToSkip to recover a nested stacks hierarchy.

" }, "ClientRequestToken":{ "shape":"ClientRequestToken", @@ -648,13 +922,17 @@ "shape":"RoleARN", "documentation":"

The Amazon Resource Name (ARN) of an AWS Identity and Access Management (IAM) role that AWS CloudFormation assumes when executing the change set. AWS CloudFormation uses the role's credentials to make calls on your behalf. AWS CloudFormation uses this role for all future operations on the stack. As long as users have permission to operate on the stack, AWS CloudFormation uses this role even if the users don't have permission to pass it. Ensure that the role grants least privilege.

If you don't specify a value, AWS CloudFormation uses the role that was previously associated with the stack. If no role is available, AWS CloudFormation uses a temporary session that is generated from your user credentials.

" }, + "RollbackConfiguration":{ + "shape":"RollbackConfiguration", + "documentation":"

The rollback triggers for AWS CloudFormation to monitor during stack creation and updating operations, and for the specified monitoring period afterwards.

" + }, "NotificationARNs":{ "shape":"NotificationARNs", "documentation":"

The Amazon Resource Names (ARNs) of Amazon Simple Notification Service (Amazon SNS) topics that AWS CloudFormation associates with the stack. To remove all associated notification topics, specify an empty list.

" }, "Tags":{ "shape":"Tags", - "documentation":"

Key-value pairs to associate with this stack. AWS CloudFormation also propagates these tags to resources in the stack. You can specify a maximum of 10 tags.

" + "documentation":"

Key-value pairs to associate with this stack. AWS CloudFormation also propagates these tags to resources in the stack. You can specify a maximum of 50 tags.

" }, "ChangeSetName":{ "shape":"ChangeSetName", @@ -713,6 +991,10 @@ "shape":"DisableRollback", "documentation":"

Set to true to disable rollback of the stack if stack creation failed. You can specify either DisableRollback or OnFailure, but not both.

Default: false

" }, + "RollbackConfiguration":{ + "shape":"RollbackConfiguration", + "documentation":"

The rollback triggers for AWS CloudFormation to monitor during stack creation and updating operations, and for the specified monitoring period afterwards.

" + }, "TimeoutInMinutes":{ "shape":"TimeoutMinutes", "documentation":"

The amount of time that can pass before the stack status becomes CREATE_FAILED; if DisableRollback is not set or is set to false, the stack will be rolled back.

" @@ -747,15 +1029,59 @@ }, "Tags":{ "shape":"Tags", - "documentation":"

Key-value pairs to associate with this stack. AWS CloudFormation also propagates these tags to the resources created in the stack. A maximum number of 10 tags can be specified.

" + "documentation":"

Key-value pairs to associate with this stack. AWS CloudFormation also propagates these tags to the resources created in the stack. A maximum number of 50 tags can be specified.

" }, "ClientRequestToken":{ "shape":"ClientRequestToken", - "documentation":"

A unique identifier for this CreateStack request. Specify this token if you plan to retry requests so that AWS CloudFormation knows that you're not attempting to create a stack with the same name. You might retry CreateStack requests to ensure that AWS CloudFormation successfully received them.

" + "documentation":"

A unique identifier for this CreateStack request. Specify this token if you plan to retry requests so that AWS CloudFormation knows that you're not attempting to create a stack with the same name. You might retry CreateStack requests to ensure that AWS CloudFormation successfully received them.

All events triggered by a given stack operation are assigned the same client request token, which you can use to track operations. For example, if you execute a CreateStack operation with the token token1, then all the StackEvents generated by that operation will have ClientRequestToken set as token1.

In the console, stack operations display the client request token on the Events tab. Stack operations that are initiated from the console use the token format Console-StackOperation-ID, which helps you easily identify the stack operation. For example, if you create a stack using the console, each stack event would be assigned the same token in the following format: Console-CreateStack-7f59c3cf-00d2-40c7-b2ff-e75db0987002.

" + }, + "EnableTerminationProtection":{ + "shape":"EnableTerminationProtection", + "documentation":"

Whether to enable termination protection on the specified stack. If a user attempts to delete a stack with termination protection enabled, the operation fails and the stack remains unchanged. For more information, see Protecting a Stack From Being Deleted in the AWS CloudFormation User Guide. Termination protection is disabled on stacks by default.

For nested stacks, termination protection is set on the root stack and cannot be changed directly on the nested stack.

" } }, "documentation":"

The input for CreateStack action.

" }, + "CreateStackInstancesInput":{ + "type":"structure", + "required":[ + "StackSetName", + "Accounts", + "Regions" + ], + "members":{ + "StackSetName":{ + "shape":"StackSetName", + "documentation":"

The name or unique ID of the stack set that you want to create stack instances from.

" + }, + "Accounts":{ + "shape":"AccountList", + "documentation":"

The names of one or more AWS accounts in which you want to create stack instances in the specified region(s).

" + }, + "Regions":{ + "shape":"RegionList", + "documentation":"

The names of one or more regions where you want to create stack instances using the specified AWS account(s).

" + }, + "OperationPreferences":{ + "shape":"StackSetOperationPreferences", + "documentation":"

Preferences for how AWS CloudFormation performs this stack set operation.

" + }, + "OperationId":{ + "shape":"ClientRequestToken", + "documentation":"

The unique identifier for this stack set operation.

The operation ID also functions as an idempotency token, to ensure that AWS CloudFormation performs the stack set operation only once, even if you retry the request multiple times. You might retry stack set operation requests to ensure that AWS CloudFormation successfully received them.

If you don't specify an operation ID, the SDK generates one automatically.

Repeating this stack set operation with a new operation ID retries all stack instances whose status is OUTDATED.

", + "idempotencyToken":true + } + } + }, + "CreateStackInstancesOutput":{ + "type":"structure", + "members":{ + "OperationId":{ + "shape":"ClientRequestToken", + "documentation":"

The unique identifier for this stack set operation.

" + } + } + }, "CreateStackOutput":{ "type":"structure", "members":{ @@ -766,6 +1092,66 @@ }, "documentation":"

The output for a CreateStack action.

" }, + "CreateStackSetInput":{ + "type":"structure", + "required":["StackSetName"], + "members":{ + "StackSetName":{ + "shape":"StackSetName", + "documentation":"

The name to associate with the stack set. The name must be unique in the region where you create your stack set.

A stack set name can contain only alphanumeric characters (case-sensitive) and hyphens. It must start with an alphabetic character and can't be longer than 128 characters.

" + }, + "Description":{ + "shape":"Description", + "documentation":"

A description of the stack set. You can use the description to identify the stack set's purpose or other important information.

" + }, + "TemplateBody":{ + "shape":"TemplateBody", + "documentation":"

The structure that contains the template body, with a minimum length of 1 byte and a maximum length of 51,200 bytes. For more information, see Template Anatomy in the AWS CloudFormation User Guide.

Conditional: You must specify either the TemplateBody or the TemplateURL parameter, but not both.

" + }, + "TemplateURL":{ + "shape":"TemplateURL", + "documentation":"

The location of the file that contains the template body. The URL must point to a template (maximum size: 460,800 bytes) that's located in an Amazon S3 bucket. For more information, see Template Anatomy in the AWS CloudFormation User Guide.

Conditional: You must specify either the TemplateBody or the TemplateURL parameter, but not both.

" + }, + "Parameters":{ + "shape":"Parameters", + "documentation":"

The input parameters for the stack set template.

" + }, + "Capabilities":{ + "shape":"Capabilities", + "documentation":"

A list of values that you must specify before AWS CloudFormation can create certain stack sets. Some stack set templates might include resources that can affect permissions in your AWS account—for example, by creating new AWS Identity and Access Management (IAM) users. For those stack sets, you must explicitly acknowledge their capabilities by specifying this parameter.

The only valid values are CAPABILITY_IAM and CAPABILITY_NAMED_IAM. The following resources require you to specify this parameter:

If your stack template contains these resources, we recommend that you review all permissions that are associated with them and edit their permissions if necessary.

If you have IAM resources, you can specify either capability. If you have IAM resources with custom names, you must specify CAPABILITY_NAMED_IAM. If you don't specify this parameter, this action returns an InsufficientCapabilities error.

For more information, see Acknowledging IAM Resources in AWS CloudFormation Templates.

" + }, + "Tags":{ + "shape":"Tags", + "documentation":"

The key-value pairs to associate with this stack set and the stacks created from it. AWS CloudFormation also propagates these tags to supported resources that are created in the stacks. A maximum number of 50 tags can be specified.

If you specify tags as part of a CreateStackSet action, AWS CloudFormation checks to see if you have the required IAM permission to tag resources. If you don't, the entire CreateStackSet action fails with an access denied error, and the stack set is not created.

" + }, + "ClientRequestToken":{ + "shape":"ClientRequestToken", + "documentation":"

A unique identifier for this CreateStackSet request. Specify this token if you plan to retry requests so that AWS CloudFormation knows that you're not attempting to create another stack set with the same name. You might retry CreateStackSet requests to ensure that AWS CloudFormation successfully received them.

If you don't specify an operation ID, the SDK generates one automatically.

", + "idempotencyToken":true + } + } + }, + "CreateStackSetOutput":{ + "type":"structure", + "members":{ + "StackSetId":{ + "shape":"StackSetId", + "documentation":"

The ID of the stack set that you're creating.

" + } + } + }, + "CreatedButModifiedException":{ + "type":"structure", + "members":{ + }, + "documentation":"

The specified resource exists, but has been changed.

", + "error":{ + "code":"CreatedButModifiedException", + "httpStatusCode":409, + "senderFault":true + }, + "exception":true + }, "CreationTime":{"type":"timestamp"}, "DeleteChangeSetInput":{ "type":"structure", @@ -806,11 +1192,71 @@ }, "ClientRequestToken":{ "shape":"ClientRequestToken", - "documentation":"

A unique identifier for this DeleteStack request. Specify this token if you plan to retry requests so that AWS CloudFormation knows that you're not attempting to delete a stack with the same name. You might retry DeleteStack requests to ensure that AWS CloudFormation successfully received them.

" + "documentation":"

A unique identifier for this DeleteStack request. Specify this token if you plan to retry requests so that AWS CloudFormation knows that you're not attempting to delete a stack with the same name. You might retry DeleteStack requests to ensure that AWS CloudFormation successfully received them.

All events triggered by a given stack operation are assigned the same client request token, which you can use to track operations. For example, if you execute a CreateStack operation with the token token1, then all the StackEvents generated by that operation will have ClientRequestToken set as token1.

In the console, stack operations display the client request token on the Events tab. Stack operations that are initiated from the console use the token format Console-StackOperation-ID, which helps you easily identify the stack operation. For example, if you create a stack using the console, each stack event would be assigned the same token in the following format: Console-CreateStack-7f59c3cf-00d2-40c7-b2ff-e75db0987002.

" } }, "documentation":"

The input for DeleteStack action.

" }, + "DeleteStackInstancesInput":{ + "type":"structure", + "required":[ + "StackSetName", + "Accounts", + "Regions", + "RetainStacks" + ], + "members":{ + "StackSetName":{ + "shape":"StackSetName", + "documentation":"

The name or unique ID of the stack set that you want to delete stack instances for.

" + }, + "Accounts":{ + "shape":"AccountList", + "documentation":"

The names of the AWS accounts that you want to delete stack instances for.

" + }, + "Regions":{ + "shape":"RegionList", + "documentation":"

The regions where you want to delete stack set instances.

" + }, + "OperationPreferences":{ + "shape":"StackSetOperationPreferences", + "documentation":"

Preferences for how AWS CloudFormation performs this stack set operation.

" + }, + "RetainStacks":{ + "shape":"RetainStacks", + "documentation":"

Removes the stack instances from the specified stack set, but doesn't delete the stacks. You can't reassociate a retained stack or add an existing, saved stack to a new stack set.

For more information, see Stack set operation options.

" + }, + "OperationId":{ + "shape":"ClientRequestToken", + "documentation":"

The unique identifier for this stack set operation.

If you don't specify an operation ID, the SDK generates one automatically.

The operation ID also functions as an idempotency token, to ensure that AWS CloudFormation performs the stack set operation only once, even if you retry the request multiple times. You can retry stack set operation requests to ensure that AWS CloudFormation successfully received them.

Repeating this stack set operation with a new operation ID retries all stack instances whose status is OUTDATED.

", + "idempotencyToken":true + } + } + }, + "DeleteStackInstancesOutput":{ + "type":"structure", + "members":{ + "OperationId":{ + "shape":"ClientRequestToken", + "documentation":"

The unique identifier for this stack set operation.

" + } + } + }, + "DeleteStackSetInput":{ + "type":"structure", + "required":["StackSetName"], + "members":{ + "StackSetName":{ + "shape":"StackSetName", + "documentation":"

The name or unique ID of the stack set that you're deleting. You can obtain this value by running ListStackSets.

" + } + } + }, + "DeleteStackSetOutput":{ + "type":"structure", + "members":{ + } + }, "DeletionTime":{"type":"timestamp"}, "DescribeAccountLimitsInput":{ "type":"structure", @@ -902,6 +1348,10 @@ "shape":"NotificationARNs", "documentation":"

The ARNs of the Amazon Simple Notification Service (Amazon SNS) topics that will be associated with the stack if you execute the change set.

" }, + "RollbackConfiguration":{ + "shape":"RollbackConfiguration", + "documentation":"

The rollback triggers for AWS CloudFormation to monitor during stack creation and updating operations, and for the specified monitoring period afterwards.

" + }, "Capabilities":{ "shape":"Capabilities", "documentation":"

If you execute the change set, the list of capabilities that were explicitly acknowledged when the change set was created.

" @@ -949,6 +1399,37 @@ }, "documentation":"

The output for a DescribeStackEvents action.

" }, + "DescribeStackInstanceInput":{ + "type":"structure", + "required":[ + "StackSetName", + "StackInstanceAccount", + "StackInstanceRegion" + ], + "members":{ + "StackSetName":{ + "shape":"StackSetName", + "documentation":"

The name or the unique stack ID of the stack set that you want to get stack instance information for.

" + }, + "StackInstanceAccount":{ + "shape":"Account", + "documentation":"

The ID of an AWS account that's associated with this stack instance.

" + }, + "StackInstanceRegion":{ + "shape":"Region", + "documentation":"

The name of a region that's associated with this stack instance.

" + } + } + }, + "DescribeStackInstanceOutput":{ + "type":"structure", + "members":{ + "StackInstance":{ + "shape":"StackInstance", + "documentation":"

The stack instance that matches the specified request parameters.

" + } + } + }, "DescribeStackResourceInput":{ "type":"structure", "required":[ @@ -1005,19 +1486,64 @@ }, "documentation":"

The output for a DescribeStackResources action.

" }, - "DescribeStacksInput":{ + "DescribeStackSetInput":{ "type":"structure", + "required":["StackSetName"], "members":{ - "StackName":{ - "shape":"StackName", - "documentation":"

The name or the unique stack ID that is associated with the stack, which are not always interchangeable:

Default: There is no default value.

" - }, - "NextToken":{ - "shape":"NextToken", - "documentation":"

A string that identifies the next page of stacks that you want to retrieve.

" + "StackSetName":{ + "shape":"StackSetName", + "documentation":"

The name or unique ID of the stack set whose description you want.

" } - }, - "documentation":"

The input for DescribeStacks action.

" + } + }, + "DescribeStackSetOperationInput":{ + "type":"structure", + "required":[ + "StackSetName", + "OperationId" + ], + "members":{ + "StackSetName":{ + "shape":"StackSetName", + "documentation":"

The name or the unique stack ID of the stack set for the stack operation.

" + }, + "OperationId":{ + "shape":"ClientRequestToken", + "documentation":"

The unique ID of the stack set operation.

" + } + } + }, + "DescribeStackSetOperationOutput":{ + "type":"structure", + "members":{ + "StackSetOperation":{ + "shape":"StackSetOperation", + "documentation":"

The specified stack set operation.

" + } + } + }, + "DescribeStackSetOutput":{ + "type":"structure", + "members":{ + "StackSet":{ + "shape":"StackSet", + "documentation":"

The specified stack set.

" + } + } + }, + "DescribeStacksInput":{ + "type":"structure", + "members":{ + "StackName":{ + "shape":"StackName", + "documentation":"

The name or the unique stack ID that is associated with the stack, which are not always interchangeable:

Default: There is no default value.

" + }, + "NextToken":{ + "shape":"NextToken", + "documentation":"

A string that identifies the next page of stacks that you want to retrieve.

" + } + }, + "documentation":"

The input for DescribeStacks action.

" }, "DescribeStacksOutput":{ "type":"structure", @@ -1039,6 +1565,7 @@ "min":1 }, "DisableRollback":{"type":"boolean"}, + "EnableTerminationProtection":{"type":"boolean"}, "EstimateTemplateCostInput":{ "type":"structure", "members":{ @@ -1135,6 +1662,15 @@ "type":"list", "member":{"shape":"Export"} }, + "FailureToleranceCount":{ + "type":"integer", + "min":0 + }, + "FailureTolerancePercentage":{ + "type":"integer", + "max":100, + "min":0 + }, "GetStackPolicyInput":{ "type":"structure", "required":["StackName"], @@ -1193,15 +1729,19 @@ "members":{ "TemplateBody":{ "shape":"TemplateBody", - "documentation":"

Structure containing the template body with a minimum length of 1 byte and a maximum length of 51,200 bytes. For more information about templates, see Template Anatomy in the AWS CloudFormation User Guide.

Conditional: You must specify only one of the following parameters: StackName, TemplateBody, or TemplateURL.

" + "documentation":"

Structure containing the template body with a minimum length of 1 byte and a maximum length of 51,200 bytes. For more information about templates, see Template Anatomy in the AWS CloudFormation User Guide.

Conditional: You must specify only one of the following parameters: StackName, StackSetName, TemplateBody, or TemplateURL.

" }, "TemplateURL":{ "shape":"TemplateURL", - "documentation":"

Location of file containing the template body. The URL must point to a template (max size: 460,800 bytes) that is located in an Amazon S3 bucket. For more information about templates, see Template Anatomy in the AWS CloudFormation User Guide.

Conditional: You must specify only one of the following parameters: StackName, TemplateBody, or TemplateURL.

" + "documentation":"

Location of file containing the template body. The URL must point to a template (max size: 460,800 bytes) that is located in an Amazon S3 bucket. For more information about templates, see Template Anatomy in the AWS CloudFormation User Guide.

Conditional: You must specify only one of the following parameters: StackName, StackSetName, TemplateBody, or TemplateURL.

" }, "StackName":{ "shape":"StackNameOrId", - "documentation":"

The name or the stack ID that is associated with the stack, which are not always interchangeable. For running stacks, you can specify either the stack's name or its unique stack ID. For deleted stack, you must specify the unique stack ID.

Conditional: You must specify only one of the following parameters: StackName, TemplateBody, or TemplateURL.

" + "documentation":"

The name or the stack ID that is associated with the stack, which are not always interchangeable. For running stacks, you can specify either the stack's name or its unique stack ID. For deleted stack, you must specify the unique stack ID.

Conditional: You must specify only one of the following parameters: StackName, StackSetName, TemplateBody, or TemplateURL.

" + }, + "StackSetName":{ + "shape":"StackSetNameOrId", + "documentation":"

The name or unique ID of the stack set from which the stack was created.

Conditional: You must specify only one of the following parameters: StackName, StackSetName, TemplateBody, or TemplateURL.

" } }, "documentation":"

The input for the GetTemplateSummary action.

" @@ -1252,7 +1792,7 @@ "type":"structure", "members":{ }, - "documentation":"

The template contains resources with capabilities that were not specified in the Capabilities parameter.

", + "documentation":"

The template contains resources with capabilities that weren't specified in the Capabilities parameter.

", "error":{ "code":"InsufficientCapabilitiesException", "httpStatusCode":400, @@ -1264,7 +1804,7 @@ "type":"structure", "members":{ }, - "documentation":"

The specified change set cannot be used to update the stack. For example, the change set status might be CREATE_IN_PROGRESS or the stack status might be UPDATE_IN_PROGRESS.

", + "documentation":"

The specified change set can't be used to update the stack. For example, the change set status might be CREATE_IN_PROGRESS, or the stack status might be UPDATE_IN_PROGRESS.

", "error":{ "code":"InvalidChangeSetStatus", "httpStatusCode":400, @@ -1272,12 +1812,24 @@ }, "exception":true }, + "InvalidOperationException":{ + "type":"structure", + "members":{ + }, + "documentation":"

The specified operation isn't valid.

", + "error":{ + "code":"InvalidOperationException", + "httpStatusCode":400, + "senderFault":true + }, + "exception":true + }, "LastUpdatedTime":{"type":"timestamp"}, "LimitExceededException":{ "type":"structure", "members":{ }, - "documentation":"

Quota for the resource has already been reached.

", + "documentation":"

The quota for the resource has already been reached.

For information on stack set limitations, see Limitations of StackSets.

", "error":{ "code":"LimitExceededException", "httpStatusCode":400, @@ -1365,6 +1917,45 @@ } } }, + "ListStackInstancesInput":{ + "type":"structure", + "required":["StackSetName"], + "members":{ + "StackSetName":{ + "shape":"StackSetName", + "documentation":"

The name or unique ID of the stack set that you want to list stack instances for.

" + }, + "NextToken":{ + "shape":"NextToken", + "documentation":"

If the previous request didn't return all of the remaining results, the response's NextToken parameter value is set to a token. To retrieve the next set of results, call ListStackInstances again and assign that token to the request object's NextToken parameter. If there are no remaining results, the previous response object's NextToken parameter is set to null.

" + }, + "MaxResults":{ + "shape":"MaxResults", + "documentation":"

The maximum number of results to be returned with a single call. If the number of available results exceeds this maximum, the response includes a NextToken value that you can assign to the NextToken request parameter to get the next set of results.

" + }, + "StackInstanceAccount":{ + "shape":"Account", + "documentation":"

The name of the AWS account that you want to list stack instances for.

" + }, + "StackInstanceRegion":{ + "shape":"Region", + "documentation":"

The name of the region where you want to list stack instances.

" + } + } + }, + "ListStackInstancesOutput":{ + "type":"structure", + "members":{ + "Summaries":{ + "shape":"StackInstanceSummaries", + "documentation":"

A list of StackInstanceSummary structures that contain information about the specified stack instances.

" + }, + "NextToken":{ + "shape":"NextToken", + "documentation":"

If the request doesn't return all of the remaining results, NextToken is set to a token. To retrieve the next set of results, call ListStackInstances again and assign that token to the request object's NextToken parameter. If the request returns all results, NextToken is set to null.
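A sketch of the token-based pagination loop these NextToken fields describe, again assuming the standard v2 client and model names generated from the shapes above (the stack set name is a placeholder):

    import software.amazon.awssdk.services.cloudformation.CloudFormationClient;
    import software.amazon.awssdk.services.cloudformation.model.ListStackInstancesRequest;
    import software.amazon.awssdk.services.cloudformation.model.ListStackInstancesResponse;

    public class ListStackInstancesExample {
        public static void main(String[] args) {
            try (CloudFormationClient cfn = CloudFormationClient.create()) {
                String nextToken = null;
                do {
                    ListStackInstancesResponse page = cfn.listStackInstances(
                            ListStackInstancesRequest.builder()
                                    .stackSetName("my-stack-set")  // hypothetical stack set name
                                    .maxResults(50)
                                    .nextToken(nextToken)          // null on the first call
                                    .build());
                    page.summaries().forEach(s ->
                            System.out.println(s.account() + "/" + s.region() + " -> " + s.status()));
                    nextToken = page.nextToken();                  // null once all results are returned
                } while (nextToken != null);
            }
        }
    }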

" + } + } + }, "ListStackResourcesInput":{ "type":"structure", "required":["StackName"], @@ -1394,6 +1985,105 @@ }, "documentation":"

The output for a ListStackResources action.

" }, + "ListStackSetOperationResultsInput":{ + "type":"structure", + "required":[ + "StackSetName", + "OperationId" + ], + "members":{ + "StackSetName":{ + "shape":"StackSetName", + "documentation":"

The name or unique ID of the stack set that you want to get operation results for.

" + }, + "OperationId":{ + "shape":"ClientRequestToken", + "documentation":"

The ID of the stack set operation.

" + }, + "NextToken":{ + "shape":"NextToken", + "documentation":"

If the previous request didn't return all of the remaining results, the response object's NextToken parameter value is set to a token. To retrieve the next set of results, call ListStackSetOperationResults again and assign that token to the request object's NextToken parameter. If there are no remaining results, the previous response object's NextToken parameter is set to null.

" + }, + "MaxResults":{ + "shape":"MaxResults", + "documentation":"

The maximum number of results to be returned with a single call. If the number of available results exceeds this maximum, the response includes a NextToken value that you can assign to the NextToken request parameter to get the next set of results.

" + } + } + }, + "ListStackSetOperationResultsOutput":{ + "type":"structure", + "members":{ + "Summaries":{ + "shape":"StackSetOperationResultSummaries", + "documentation":"

A list of StackSetOperationResultSummary structures that contain information about the specified operation results, for accounts and regions that are included in the operation.

" + }, + "NextToken":{ + "shape":"NextToken", + "documentation":"

If the request doesn't return all results, NextToken is set to a token. To retrieve the next set of results, call ListStackSetOperationResults again and assign that token to the request object's NextToken parameter. If there are no remaining results, NextToken is set to null.

" + } + } + }, + "ListStackSetOperationsInput":{ + "type":"structure", + "required":["StackSetName"], + "members":{ + "StackSetName":{ + "shape":"StackSetName", + "documentation":"

The name or unique ID of the stack set that you want to get operation summaries for.

" + }, + "NextToken":{ + "shape":"NextToken", + "documentation":"

If the previous paginated request didn't return all of the remaining results, the response object's NextToken parameter value is set to a token. To retrieve the next set of results, call ListStackSetOperations again and assign that token to the request object's NextToken parameter. If there are no remaining results, the previous response object's NextToken parameter is set to null.

" + }, + "MaxResults":{ + "shape":"MaxResults", + "documentation":"

The maximum number of results to be returned with a single call. If the number of available results exceeds this maximum, the response includes a NextToken value that you can assign to the NextToken request parameter to get the next set of results.

" + } + } + }, + "ListStackSetOperationsOutput":{ + "type":"structure", + "members":{ + "Summaries":{ + "shape":"StackSetOperationSummaries", + "documentation":"

A list of StackSetOperationSummary structures that contain summary information about operations for the specified stack set.

" + }, + "NextToken":{ + "shape":"NextToken", + "documentation":"

If the request doesn't return all results, NextToken is set to a token. To retrieve the next set of results, call ListStackSetOperations again and assign that token to the request object's NextToken parameter. If there are no remaining results, NextToken is set to null.

" + } + } + }, + "ListStackSetsInput":{ + "type":"structure", + "members":{ + "NextToken":{ + "shape":"NextToken", + "documentation":"

If the previous paginated request didn't return all of the remaining results, the response object's NextToken parameter value is set to a token. To retrieve the next set of results, call ListStackSets again and assign that token to the request object's NextToken parameter. If there are no remaining results, the previous response object's NextToken parameter is set to null.

" + }, + "MaxResults":{ + "shape":"MaxResults", + "documentation":"

The maximum number of results to be returned with a single call. If the number of available results exceeds this maximum, the response includes a NextToken value that you can assign to the NextToken request parameter to get the next set of results.

" + }, + "Status":{ + "shape":"StackSetStatus", + "documentation":"

The status of the stack sets that you want to get summary information about.
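For instance, to list only the stack sets that are still ACTIVE (a sketch under the same naming assumptions as the earlier examples):

    import software.amazon.awssdk.services.cloudformation.CloudFormationClient;
    import software.amazon.awssdk.services.cloudformation.model.ListStackSetsRequest;
    import software.amazon.awssdk.services.cloudformation.model.StackSetStatus;

    public class ListActiveStackSetsExample {
        public static void main(String[] args) {
            try (CloudFormationClient cfn = CloudFormationClient.create()) {
                cfn.listStackSets(ListStackSetsRequest.builder()
                                .status(StackSetStatus.ACTIVE)   // omit to return both ACTIVE and DELETED stack sets
                                .build())
                        .summaries()
                        .forEach(s -> System.out.println(s.stackSetName() + " (" + s.stackSetId() + ")"));
            }
        }
    }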

" + } + } + }, + "ListStackSetsOutput":{ + "type":"structure", + "members":{ + "Summaries":{ + "shape":"StackSetSummaries", + "documentation":"

A list of StackSetSummary structures that contain information about the user's stack sets.

" + }, + "NextToken":{ + "shape":"NextToken", + "documentation":"

If the request doesn't return all of the remaining results, NextToken is set to a token. To retrieve the next set of results, call ListStackSets again and assign that token to the request object's NextToken parameter. If the request returns all results, NextToken is set to null.

" + } + } + }, "ListStacksInput":{ "type":"structure", "members":{ @@ -1423,7 +2113,38 @@ "documentation":"

The output for ListStacks action.

" }, "LogicalResourceId":{"type":"string"}, + "MaxConcurrentCount":{ + "type":"integer", + "min":1 + }, + "MaxConcurrentPercentage":{ + "type":"integer", + "max":100, + "min":1 + }, + "MaxResults":{ + "type":"integer", + "max":100, + "min":1 + }, "Metadata":{"type":"string"}, + "MonitoringTimeInMinutes":{ + "type":"integer", + "max":180, + "min":0 + }, + "NameAlreadyExistsException":{ + "type":"structure", + "members":{ + }, + "documentation":"

The specified name is already in use.

", + "error":{ + "code":"NameAlreadyExistsException", + "httpStatusCode":409, + "senderFault":true + }, + "exception":true + }, "NextToken":{ "type":"string", "max":1024, @@ -1444,6 +2165,42 @@ "DELETE" ] }, + "OperationIdAlreadyExistsException":{ + "type":"structure", + "members":{ + }, + "documentation":"

The specified operation ID already exists.

", + "error":{ + "code":"OperationIdAlreadyExistsException", + "httpStatusCode":409, + "senderFault":true + }, + "exception":true + }, + "OperationInProgressException":{ + "type":"structure", + "members":{ + }, + "documentation":"

Another operation is currently in progress for this stack set. Only one operation can be performed for a stack set at a given time.

", + "error":{ + "code":"OperationInProgressException", + "httpStatusCode":409, + "senderFault":true + }, + "exception":true + }, + "OperationNotFoundException":{ + "type":"structure", + "members":{ + }, + "documentation":"

The specified ID refers to an operation that doesn't exist.

", + "error":{ + "code":"OperationNotFoundException", + "httpStatusCode":404, + "senderFault":true + }, + "exception":true + }, "Output":{ "type":"structure", "members":{ @@ -1458,6 +2215,10 @@ "Description":{ "shape":"Description", "documentation":"

User defined description associated with the output.

" + }, + "ExportName":{ + "shape":"ExportName", + "documentation":"

The name of the export associated with the output.

" } }, "documentation":"

The Output data type.

" @@ -1539,6 +2300,12 @@ }, "PhysicalResourceId":{"type":"string"}, "PropertyName":{"type":"string"}, + "Reason":{"type":"string"}, + "Region":{"type":"string"}, + "RegionList":{ + "type":"list", + "member":{"shape":"Region"} + }, "Replacement":{ "type":"string", "enum":[ @@ -1694,11 +2461,50 @@ "type":"list", "member":{"shape":"LogicalResourceId"} }, + "RetainStacks":{"type":"boolean"}, + "RetainStacksNullable":{"type":"boolean"}, "RoleARN":{ "type":"string", "max":2048, "min":20 }, + "RollbackConfiguration":{ + "type":"structure", + "members":{ + "RollbackTriggers":{ + "shape":"RollbackTriggers", + "documentation":"

The triggers to monitor during stack creation or update actions.

By default, AWS CloudFormation saves the rollback triggers specified for a stack and applies them to any subsequent update operations for the stack, unless you specify otherwise. If you do specify rollback triggers for this parameter, those triggers replace any list of triggers previously specified for the stack. This means:

If a specified CloudWatch alarm is missing, the entire stack operation fails and is rolled back.

" + }, + "MonitoringTimeInMinutes":{ + "shape":"MonitoringTimeInMinutes", + "documentation":"

The amount of time, in minutes, during which CloudFormation should monitor all the rollback triggers after the stack creation or update operation deploys all necessary resources. If any of the alarms goes to ALARM state during the stack operation or this monitoring period, CloudFormation rolls back the entire stack operation. Then, for update operations, if the monitoring period expires without any alarms going to ALARM state, CloudFormation proceeds to dispose of old resources as usual.

If you specify a monitoring period but do not specify any rollback triggers, CloudFormation still waits the specified period of time before cleaning up old resources for update operations. You can use this monitoring period to perform any manual stack validation desired, and manually cancel the stack creation or update (using CancelUpdateStack, for example) as necessary.

If you specify 0 for this parameter, CloudFormation still monitors the specified rollback triggers during stack creation and update operations. Then, for update operations, it begins disposing of old resources immediately once the operation completes.

" + } + }, + "documentation":"

Structure containing the rollback triggers for AWS CloudFormation to monitor during stack creation and updating operations, and for the specified monitoring period afterwards.

Rollback triggers enable you to have AWS CloudFormation monitor the state of your application during stack creation and updating, and to roll back that operation if the application breaches the threshold of any of the alarms you've specified. For each rollback trigger you create, you specify the CloudWatch alarm that CloudFormation should monitor. CloudFormation monitors the specified alarms during the stack create or update operation, and for the specified amount of time after all resources have been deployed. If any of the alarms goes to ALARM state during the stack operation or the monitoring period, CloudFormation rolls back the entire stack operation. If the monitoring period expires without any alarms going to ALARM state, CloudFormation proceeds to dispose of old resources as usual.

By default, CloudFormation only rolls back stack operations if an alarm goes to ALARM state, not INSUFFICIENT_DATA state. To have CloudFormation roll back the stack operation if an alarm goes to INSUFFICIENT_DATA state as well, edit the CloudWatch alarm to treat missing data as breaching. For more information, see Configuring How CloudWatch Alarms Treat Missing Data.

AWS CloudFormation does not monitor rollback triggers when it rolls back a stack during an update operation.
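A sketch of wiring a rollback trigger and monitoring period into a stack update, assuming the v2 model classes generated from the RollbackConfiguration and RollbackTrigger shapes above (the alarm ARN and stack name are placeholders):

    import software.amazon.awssdk.services.cloudformation.CloudFormationClient;
    import software.amazon.awssdk.services.cloudformation.model.RollbackConfiguration;
    import software.amazon.awssdk.services.cloudformation.model.RollbackTrigger;
    import software.amazon.awssdk.services.cloudformation.model.UpdateStackRequest;

    public class RollbackTriggerExample {
        public static void main(String[] args) {
            RollbackConfiguration rollback = RollbackConfiguration.builder()
                    .rollbackTriggers(RollbackTrigger.builder()
                            .arn("arn:aws:cloudwatch:us-east-1:111122223333:alarm:my-alarm") // placeholder alarm ARN
                            .type("AWS::CloudWatch::Alarm")   // currently the only supported trigger type
                            .build())
                    .monitoringTimeInMinutes(15)              // watch the alarm for 15 minutes after deployment
                    .build();

            try (CloudFormationClient cfn = CloudFormationClient.create()) {
                cfn.updateStack(UpdateStackRequest.builder()
                        .stackName("my-stack")                // placeholder stack name
                        .usePreviousTemplate(true)
                        .rollbackConfiguration(rollback)
                        .build());
            }
        }
    }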

" + }, + "RollbackTrigger":{ + "type":"structure", + "required":[ + "Arn", + "Type" + ], + "members":{ + "Arn":{ + "shape":"Arn", + "documentation":"

The Amazon Resource Name (ARN) of the rollback trigger.

" + }, + "Type":{ + "shape":"Type", + "documentation":"

The resource type of the rollback trigger. Currently, AWS::CloudWatch::Alarm is the only supported resource type.

" + } + }, + "documentation":"

A rollback trigger AWS CloudFormation monitors during creation and updating of stacks. If any of the alarms you specify goes to ALERT state during the stack operation or within the specified monitoring period afterwards, CloudFormation rolls back the entire stack operation.

" + }, + "RollbackTriggers":{ + "type":"list", + "member":{"shape":"RollbackTrigger"}, + "max":5 + }, "Scope":{ "type":"list", "member":{"shape":"ResourceAttribute"} @@ -1782,10 +2588,18 @@ "shape":"CreationTime", "documentation":"

The time at which the stack was created.

" }, + "DeletionTime":{ + "shape":"DeletionTime", + "documentation":"

The time the stack was deleted.

" + }, "LastUpdatedTime":{ "shape":"LastUpdatedTime", "documentation":"

The time the stack was last updated. This field will only be returned if the stack has been updated at least once.

" }, + "RollbackConfiguration":{ + "shape":"RollbackConfiguration", + "documentation":"

The rollback triggers for AWS CloudFormation to monitor during stack creation and updating operations, and for the specified monitoring period afterwards.

" + }, "StackStatus":{ "shape":"StackStatus", "documentation":"

Current status of the stack.

" @@ -1821,6 +2635,18 @@ "Tags":{ "shape":"Tags", "documentation":"

A list of Tags that specify information about the stack.

" + }, + "EnableTerminationProtection":{ + "shape":"EnableTerminationProtection", + "documentation":"

Whether termination protection is enabled for the stack.

For nested stacks, termination protection is set on the root stack and cannot be changed directly on the nested stack. For more information, see Protecting a Stack From Being Deleted in the AWS CloudFormation User Guide.

" + }, + "ParentId":{ + "shape":"StackId", + "documentation":"

For nested stacks--stacks created as resources for another stack--the stack ID of the direct parent of this stack. For the first level of nested stacks, the root stack is also the parent stack.

For more information, see Working with Nested Stacks in the AWS CloudFormation User Guide.

" + }, + "RootId":{ + "shape":"StackId", + "documentation":"

For nested stacks--stacks created as resources for another stack--the stack ID of the top-level stack to which the nested stack ultimately belongs.

For more information, see Working with Nested Stacks in the AWS CloudFormation User Guide.

" } }, "documentation":"

The Stack data type.

" @@ -1876,7 +2702,7 @@ }, "ClientRequestToken":{ "shape":"ClientRequestToken", - "documentation":"

The token passed to the operation that generated this event.

For example, if you execute a CreateStack operation with the token token1, then all the StackEvents generated by that operation will have ClientRequestToken set as token1.

" + "documentation":"

The token passed to the operation that generated this event.

All events triggered by a given stack operation are assigned the same client request token, which you can use to track operations. For example, if you execute a CreateStack operation with the token token1, then all the StackEvents generated by that operation will have ClientRequestToken set as token1.

In the console, stack operations display the client request token on the Events tab. Stack operations that are initiated from the console use the token format Console-StackOperation-ID, which helps you easily identify the stack operation . For example, if you create a stack using the console, each stack event would be assigned the same token in the following format: Console-CreateStack-7f59c3cf-00d2-40c7-b2ff-e75db0987002.

" } }, "documentation":"

The StackEvent data type.

" @@ -1886,6 +2712,90 @@ "member":{"shape":"StackEvent"} }, "StackId":{"type":"string"}, + "StackInstance":{ + "type":"structure", + "members":{ + "StackSetId":{ + "shape":"StackSetId", + "documentation":"

The name or unique ID of the stack set that the stack instance is associated with.

" + }, + "Region":{ + "shape":"Region", + "documentation":"

The name of the AWS region that the stack instance is associated with.

" + }, + "Account":{ + "shape":"Account", + "documentation":"

The name of the AWS account that the stack instance is associated with.

" + }, + "StackId":{ + "shape":"StackId", + "documentation":"

The ID of the stack instance.

" + }, + "Status":{ + "shape":"StackInstanceStatus", + "documentation":"

The status of the stack instance, in terms of its synchronization with its associated stack set.

" + }, + "StatusReason":{ + "shape":"Reason", + "documentation":"

The explanation for the specific status code that is assigned to this stack instance.

" + } + }, + "documentation":"

An AWS CloudFormation stack, in a specific account and region, that's part of a stack set operation. A stack instance is a reference to an attempted or actual stack in a given account within a given region. A stack instance can exist without a stack—for example, if the stack couldn't be created for some reason. A stack instance is associated with only one stack set. Each stack instance contains the ID of its associated stack set, as well as the ID of the actual stack and the stack status.

" + }, + "StackInstanceNotFoundException":{ + "type":"structure", + "members":{ + }, + "documentation":"

The specified stack instance doesn't exist.

", + "error":{ + "code":"StackInstanceNotFoundException", + "httpStatusCode":404, + "senderFault":true + }, + "exception":true + }, + "StackInstanceStatus":{ + "type":"string", + "enum":[ + "CURRENT", + "OUTDATED", + "INOPERABLE" + ] + }, + "StackInstanceSummaries":{ + "type":"list", + "member":{"shape":"StackInstanceSummary"} + }, + "StackInstanceSummary":{ + "type":"structure", + "members":{ + "StackSetId":{ + "shape":"StackSetId", + "documentation":"

The name or unique ID of the stack set that the stack instance is associated with.

" + }, + "Region":{ + "shape":"Region", + "documentation":"

The name of the AWS region that the stack instance is associated with.

" + }, + "Account":{ + "shape":"Account", + "documentation":"

The name of the AWS account that the stack instance is associated with.

" + }, + "StackId":{ + "shape":"StackId", + "documentation":"

The ID of the stack instance.

" + }, + "Status":{ + "shape":"StackInstanceStatus", + "documentation":"

The status of the stack instance, in terms of its synchronization with its associated stack set.

" + }, + "StatusReason":{ + "shape":"Reason", + "documentation":"

The explanation for the specific status code assigned to this stack instance.

" + } + }, + "documentation":"

The structure that contains summary information about a stack instance.

" + }, "StackName":{"type":"string"}, "StackNameOrId":{ "type":"string", @@ -2056,6 +2966,260 @@ "type":"list", "member":{"shape":"StackResource"} }, + "StackSet":{ + "type":"structure", + "members":{ + "StackSetName":{ + "shape":"StackSetName", + "documentation":"

The name that's associated with the stack set.

" + }, + "StackSetId":{ + "shape":"StackSetId", + "documentation":"

The ID of the stack set.

" + }, + "Description":{ + "shape":"Description", + "documentation":"

A description of the stack set that you specify when the stack set is created or updated.

" + }, + "Status":{ + "shape":"StackSetStatus", + "documentation":"

The status of the stack set.

" + }, + "TemplateBody":{ + "shape":"TemplateBody", + "documentation":"

The structure that contains the body of the template that was used to create or update the stack set.

" + }, + "Parameters":{ + "shape":"Parameters", + "documentation":"

A list of input parameters for a stack set.

" + }, + "Capabilities":{ + "shape":"Capabilities", + "documentation":"

The capabilities that are allowed in the stack set. Some stack set templates might include resources that can affect permissions in your AWS account—for example, by creating new AWS Identity and Access Management (IAM) users. For more information, see Acknowledging IAM Resources in AWS CloudFormation Templates.

" + }, + "Tags":{ + "shape":"Tags", + "documentation":"

A list of tags that specify information about the stack set. A maximum number of 50 tags can be specified.

" + } + }, + "documentation":"

A structure that contains information about a stack set. A stack set enables you to provision stacks into AWS accounts and across regions by using a single CloudFormation template. In the stack set, you specify the template to use, as well as any parameters and capabilities that the template requires.

" + }, + "StackSetId":{"type":"string"}, + "StackSetName":{"type":"string"}, + "StackSetNameOrId":{ + "type":"string", + "min":1, + "pattern":"[a-zA-Z][-a-zA-Z0-9]*" + }, + "StackSetNotEmptyException":{ + "type":"structure", + "members":{ + }, + "documentation":"

You can't yet delete this stack set, because it still contains one or more stack instances. Delete all stack instances from the stack set before deleting the stack set.

", + "error":{ + "code":"StackSetNotEmptyException", + "httpStatusCode":409, + "senderFault":true + }, + "exception":true + }, + "StackSetNotFoundException":{ + "type":"structure", + "members":{ + }, + "documentation":"

The specified stack set doesn't exist.

", + "error":{ + "code":"StackSetNotFoundException", + "httpStatusCode":404, + "senderFault":true + }, + "exception":true + }, + "StackSetOperation":{ + "type":"structure", + "members":{ + "OperationId":{ + "shape":"ClientRequestToken", + "documentation":"

The unique ID of a stack set operation.

" + }, + "StackSetId":{ + "shape":"StackSetId", + "documentation":"

The ID of the stack set.

" + }, + "Action":{ + "shape":"StackSetOperationAction", + "documentation":"

The type of stack set operation: CREATE, UPDATE, or DELETE. Create and delete operations affect only the specified stack set instances that are associated with the specified stack set. Update operations affect both the stack set itself and all associated stack set instances.

" + }, + "Status":{ + "shape":"StackSetOperationStatus", + "documentation":"

The status of the operation.

" + }, + "OperationPreferences":{ + "shape":"StackSetOperationPreferences", + "documentation":"

The preferences for how AWS CloudFormation performs this stack set operation.

" + }, + "RetainStacks":{ + "shape":"RetainStacksNullable", + "documentation":"

For stack set operations of action type DELETE, specifies whether to remove the stack instances from the specified stack set without deleting the stacks. You can't reassociate a retained stack, or add an existing, saved stack to a new stack set.

" + }, + "CreationTimestamp":{ + "shape":"Timestamp", + "documentation":"

The time at which the operation was initiated. Note that the creation times for the stack set operation might differ from the creation time of the individual stacks themselves. This is because AWS CloudFormation needs to perform preparatory work for the operation, such as dispatching the work to the requested regions, before actually creating the first stacks.

" + }, + "EndTimestamp":{ + "shape":"Timestamp", + "documentation":"

The time at which the stack set operation ended, across all accounts and regions specified. Note that this doesn't necessarily mean that the stack set operation was successful, or even attempted, in each account or region.

" + } + }, + "documentation":"

The structure that contains information about a stack set operation.

" + }, + "StackSetOperationAction":{ + "type":"string", + "enum":[ + "CREATE", + "UPDATE", + "DELETE" + ] + }, + "StackSetOperationPreferences":{ + "type":"structure", + "members":{ + "RegionOrder":{ + "shape":"RegionList", + "documentation":"

The order of the regions where you want to perform the stack operation.

" + }, + "FailureToleranceCount":{ + "shape":"FailureToleranceCount", + "documentation":"

The number of accounts, per region, for which this operation can fail before AWS CloudFormation stops the operation in that region. If the operation is stopped in a region, AWS CloudFormation doesn't attempt the operation in any subsequent regions.

Conditional: You must specify either FailureToleranceCount or FailureTolerancePercentage (but not both).

" + }, + "FailureTolerancePercentage":{ + "shape":"FailureTolerancePercentage", + "documentation":"

The percentage of accounts, per region, for which this stack operation can fail before AWS CloudFormation stops the operation in that region. If the operation is stopped in a region, AWS CloudFormation doesn't attempt the operation in any subsequent regions.

When calculating the number of accounts based on the specified percentage, AWS CloudFormation rounds down to the next whole number.

Conditional: You must specify either FailureToleranceCount or FailureTolerancePercentage, but not both.

" + }, + "MaxConcurrentCount":{ + "shape":"MaxConcurrentCount", + "documentation":"

The maximum number of accounts in which to perform this operation at one time. This is dependent on the value of FailureToleranceCount—MaxConcurrentCount is at most one more than the FailureToleranceCount.

Note that this setting lets you specify the maximum for operations. For large deployments, under certain circumstances the actual number of accounts acted upon concurrently may be lower due to service throttling.

Conditional: You must specify either MaxConcurrentCount or MaxConcurrentPercentage, but not both.

" + }, + "MaxConcurrentPercentage":{ + "shape":"MaxConcurrentPercentage", + "documentation":"

The maximum percentage of accounts in which to perform this operation at one time.

When calculating the number of accounts based on the specified percentage, AWS CloudFormation rounds down to the next whole number. This is true except in cases where rounding down would result in zero. In this case, CloudFormation sets the number to one instead.

Note that this setting lets you specify the maximum for operations. For large deployments, under certain circumstances the actual number of accounts acted upon concurrently may be lower due to service throttling.

Conditional: You must specify either MaxConcurrentCount or MaxConcurrentPercentage, but not both.

" + } + }, + "documentation":"

The user-specified preferences for how AWS CloudFormation performs a stack set operation.

For more information on maximum concurrent accounts and failure tolerance, see Stack set operation options.
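A sketch of how these preferences might be expressed with the generated builders (names assumed from the shapes above); here the operation touches at most two accounts at a time per region and tolerates one failed account per region:

    import software.amazon.awssdk.services.cloudformation.model.StackSetOperationPreferences;

    public class OperationPreferencesExample {
        static StackSetOperationPreferences prefs() {
            return StackSetOperationPreferences.builder()
                    .regionOrder("us-east-1", "eu-west-1")  // deploy to us-east-1 before eu-west-1
                    .failureToleranceCount(1)               // or failureTolerancePercentage, but not both
                    .maxConcurrentCount(2)                  // at most FailureToleranceCount + 1; or maxConcurrentPercentage
                    .build();
        }
    }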

" + }, + "StackSetOperationResultStatus":{ + "type":"string", + "enum":[ + "PENDING", + "RUNNING", + "SUCCEEDED", + "FAILED", + "CANCELLED" + ] + }, + "StackSetOperationResultSummaries":{ + "type":"list", + "member":{"shape":"StackSetOperationResultSummary"} + }, + "StackSetOperationResultSummary":{ + "type":"structure", + "members":{ + "Account":{ + "shape":"Account", + "documentation":"

The name of the AWS account for this operation result.

" + }, + "Region":{ + "shape":"Region", + "documentation":"

The name of the AWS region for this operation result.

" + }, + "Status":{ + "shape":"StackSetOperationResultStatus", + "documentation":"

The result status of the stack set operation for the given account in the given region.

" + }, + "StatusReason":{ + "shape":"Reason", + "documentation":"

The reason for the assigned result status.

" + }, + "AccountGateResult":{ + "shape":"AccountGateResult", + "documentation":"

The results of the account gate function AWS CloudFormation invokes, if present, before proceeding with stack set operations in an account.

" + } + }, + "documentation":"

The structure that contains information about a specified operation's results for a given account in a given region.

" + }, + "StackSetOperationStatus":{ + "type":"string", + "enum":[ + "RUNNING", + "SUCCEEDED", + "FAILED", + "STOPPING", + "STOPPED" + ] + }, + "StackSetOperationSummaries":{ + "type":"list", + "member":{"shape":"StackSetOperationSummary"} + }, + "StackSetOperationSummary":{ + "type":"structure", + "members":{ + "OperationId":{ + "shape":"ClientRequestToken", + "documentation":"

The unique ID of the stack set operation.

" + }, + "Action":{ + "shape":"StackSetOperationAction", + "documentation":"

The type of operation: CREATE, UPDATE, or DELETE. Create and delete operations affect only the specified stack instances that are associated with the specified stack set. Update operations affect both the stack set itself and all associated stack set instances.

" + }, + "Status":{ + "shape":"StackSetOperationStatus", + "documentation":"

The overall status of the operation.

" + }, + "CreationTimestamp":{ + "shape":"Timestamp", + "documentation":"

The time at which the operation was initiated. Note that the creation times for the stack set operation might differ from the creation time of the individual stacks themselves. This is because AWS CloudFormation needs to perform preparatory work for the operation, such as dispatching the work to the requested regions, before actually creating the first stacks.

" + }, + "EndTimestamp":{ + "shape":"Timestamp", + "documentation":"

The time at which the stack set operation ended, across all accounts and regions specified. Note that this doesn't necessarily mean that the stack set operation was successful, or even attempted, in each account or region.

" + } + }, + "documentation":"

The structure that contains summary information about the specified operation.

" + }, + "StackSetStatus":{ + "type":"string", + "enum":[ + "ACTIVE", + "DELETED" + ] + }, + "StackSetSummaries":{ + "type":"list", + "member":{"shape":"StackSetSummary"} + }, + "StackSetSummary":{ + "type":"structure", + "members":{ + "StackSetName":{ + "shape":"StackSetName", + "documentation":"

The name of the stack set.

" + }, + "StackSetId":{ + "shape":"StackSetId", + "documentation":"

The ID of the stack set.

" + }, + "Description":{ + "shape":"Description", + "documentation":"

A description of the stack set that you specify when the stack set is created or updated.

" + }, + "Status":{ + "shape":"StackSetStatus", + "documentation":"

The status of the stack set.

" + } + }, + "documentation":"

The structure that contains summary information about the specified stack set.

" + }, "StackStatus":{ "type":"string", "enum":[ @@ -2126,6 +3290,14 @@ "StackStatusReason":{ "shape":"StackStatusReason", "documentation":"

Success/Failure message associated with the stack status.

" + }, + "ParentId":{ + "shape":"StackId", + "documentation":"

For nested stacks--stacks created as resources for another stack--the stack ID of the direct parent of this stack. For the first level of nested stacks, the root stack is also the parent stack.

For more information, see Working with Nested Stacks in the AWS CloudFormation User Guide.

" + }, + "RootId":{ + "shape":"StackId", + "documentation":"

For nested stacks--stacks created as resources for another stack--the stack ID of the top-level stack to which the nested stack ultimately belongs.

For more information, see Working with Nested Stacks in the AWS CloudFormation User Guide.

" } }, "documentation":"

The StackSummary Data Type

" @@ -2138,8 +3310,46 @@ "type":"list", "member":{"shape":"TemplateStage"} }, + "StaleRequestException":{ + "type":"structure", + "members":{ + }, + "documentation":"

Another operation has been performed on this stack set since the specified operation was performed.

", + "error":{ + "code":"StaleRequestException", + "httpStatusCode":409, + "senderFault":true + }, + "exception":true + }, + "StopStackSetOperationInput":{ + "type":"structure", + "required":[ + "StackSetName", + "OperationId" + ], + "members":{ + "StackSetName":{ + "shape":"StackSetName", + "documentation":"

The name or unique ID of the stack set that you want to stop the operation for.

" + }, + "OperationId":{ + "shape":"ClientRequestToken", + "documentation":"

The ID of the stack operation.
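A sketch of stopping a running stack set operation (the same caveat about generated names applies; the stack set name and operation ID are placeholders):

    import software.amazon.awssdk.services.cloudformation.CloudFormationClient;
    import software.amazon.awssdk.services.cloudformation.model.StopStackSetOperationRequest;

    public class StopOperationExample {
        public static void main(String[] args) {
            try (CloudFormationClient cfn = CloudFormationClient.create()) {
                cfn.stopStackSetOperation(StopStackSetOperationRequest.builder()
                        .stackSetName("my-stack-set")                        // placeholder stack set name
                        .operationId("0a1b2c3d-4e5f-6a7b-8c9d-0e1f2a3b4c5d") // placeholder operation ID
                        .build());
            }
        }
    }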

" + } + } + }, + "StopStackSetOperationOutput":{ + "type":"structure", + "members":{ + } + }, "Tag":{ "type":"structure", + "required":[ + "Key", + "Value" + ], "members":{ "Key":{ "shape":"TagKey", @@ -2152,11 +3362,20 @@ }, "documentation":"

The Tag type enables you to specify a key-value pair that can be used to store information about an AWS CloudFormation stack.

" }, - "TagKey":{"type":"string"}, - "TagValue":{"type":"string"}, + "TagKey":{ + "type":"string", + "max":128, + "min":1 + }, + "TagValue":{ + "type":"string", + "max":256, + "min":1 + }, "Tags":{ "type":"list", - "member":{"shape":"Tag"} + "member":{"shape":"Tag"}, + "max":50 }, "TemplateBody":{ "type":"string", @@ -2223,6 +3442,7 @@ "type":"list", "member":{"shape":"TransformName"} }, + "Type":{"type":"string"}, "UpdateStackInput":{ "type":"structure", "required":["StackName"], @@ -2267,6 +3487,10 @@ "shape":"RoleARN", "documentation":"

The Amazon Resource Name (ARN) of an AWS Identity and Access Management (IAM) role that AWS CloudFormation assumes to update the stack. AWS CloudFormation uses the role's credentials to make calls on your behalf. AWS CloudFormation always uses this role for all future operations on the stack. As long as users have permission to operate on the stack, AWS CloudFormation uses this role even if the users don't have permission to pass it. Ensure that the role grants least privilege.

If you don't specify a value, AWS CloudFormation uses the role that was previously associated with the stack. If no role is available, AWS CloudFormation uses a temporary session that is generated from your user credentials.

" }, + "RollbackConfiguration":{ + "shape":"RollbackConfiguration", + "documentation":"

The rollback triggers for AWS CloudFormation to monitor during stack creation and updating operations, and for the specified monitoring period afterwards.

" + }, "StackPolicyBody":{ "shape":"StackPolicyBody", "documentation":"

Structure containing a new stack policy body. You can specify either the StackPolicyBody or the StackPolicyURL parameter, but not both.

You might update the stack policy, for example, in order to protect a new resource that you created during a stack update. If you do not specify a stack policy, the current policy that is associated with the stack is unchanged.

" @@ -2281,11 +3505,11 @@ }, "Tags":{ "shape":"Tags", - "documentation":"

Key-value pairs to associate with this stack. AWS CloudFormation also propagates these tags to supported resources in the stack. You can specify a maximum number of 10 tags.

If you don't specify this parameter, AWS CloudFormation doesn't modify the stack's tags. If you specify an empty value, AWS CloudFormation removes all associated tags.

" + "documentation":"

Key-value pairs to associate with this stack. AWS CloudFormation also propagates these tags to supported resources in the stack. You can specify a maximum number of 50 tags.

If you don't specify this parameter, AWS CloudFormation doesn't modify the stack's tags. If you specify an empty value, AWS CloudFormation removes all associated tags.

" }, "ClientRequestToken":{ "shape":"ClientRequestToken", - "documentation":"

A unique identifier for this UpdateStack request. Specify this token if you plan to retry requests so that AWS CloudFormation knows that you're not attempting to update a stack with the same name. You might retry UpdateStack requests to ensure that AWS CloudFormation successfully received them.

" + "documentation":"

A unique identifier for this UpdateStack request. Specify this token if you plan to retry requests so that AWS CloudFormation knows that you're not attempting to update a stack with the same name. You might retry UpdateStack requests to ensure that AWS CloudFormation successfully received them.

All events triggered by a given stack operation are assigned the same client request token, which you can use to track operations. For example, if you execute a CreateStack operation with the token token1, then all the StackEvents generated by that operation will have ClientRequestToken set as token1.

In the console, stack operations display the client request token on the Events tab. Stack operations that are initiated from the console use the token format Console-StackOperation-ID, which helps you easily identify the stack operation. For example, if you create a stack using the console, each stack event would be assigned the same token in the following format: Console-CreateStack-7f59c3cf-00d2-40c7-b2ff-e75db0987002.

" } }, "documentation":"

The input for an UpdateStack action.

" @@ -2300,6 +3524,88 @@ }, "documentation":"

The output for an UpdateStack action.

" }, + "UpdateStackSetInput":{ + "type":"structure", + "required":["StackSetName"], + "members":{ + "StackSetName":{ + "shape":"StackSetName", + "documentation":"

The name or unique ID of the stack set that you want to update.

" + }, + "Description":{ + "shape":"Description", + "documentation":"

A brief description of updates that you are making.

" + }, + "TemplateBody":{ + "shape":"TemplateBody", + "documentation":"

The structure that contains the template body, with a minimum length of 1 byte and a maximum length of 51,200 bytes. For more information, see Template Anatomy in the AWS CloudFormation User Guide.

Conditional: You must specify only one of the following parameters: TemplateBody or TemplateURL—or set UsePreviousTemplate to true.

" + }, + "TemplateURL":{ + "shape":"TemplateURL", + "documentation":"

The location of the file that contains the template body. The URL must point to a template (maximum size: 460,800 bytes) that is located in an Amazon S3 bucket. For more information, see Template Anatomy in the AWS CloudFormation User Guide.

Conditional: You must specify only one of the following parameters: TemplateBody or TemplateURL—or set UsePreviousTemplate to true.

" + }, + "UsePreviousTemplate":{ + "shape":"UsePreviousTemplate", + "documentation":"

Use the existing template that's associated with the stack set that you're updating.

Conditional: You must specify only one of the following parameters: TemplateBody or TemplateURL—or set UsePreviousTemplate to true.

" + }, + "Parameters":{ + "shape":"Parameters", + "documentation":"

A list of input parameters for the stack set template.

" + }, + "Capabilities":{ + "shape":"Capabilities", + "documentation":"

A list of values that you must specify before AWS CloudFormation can create certain stack sets. Some stack set templates might include resources that can affect permissions in your AWS account—for example, by creating new AWS Identity and Access Management (IAM) users. For those stack sets, you must explicitly acknowledge their capabilities by specifying this parameter.

The only valid values are CAPABILITY_IAM and CAPABILITY_NAMED_IAM. The following resources require you to specify this parameter:

If your stack template contains these resources, we recommend that you review all permissions that are associated with them and edit their permissions if necessary.

If you have IAM resources, you can specify either capability. If you have IAM resources with custom names, you must specify CAPABILITY_NAMED_IAM. If you don't specify this parameter, this action returns an InsufficientCapabilities error.

For more information, see Acknowledging IAM Resources in AWS CloudFormation Templates.

" + }, + "Tags":{ + "shape":"Tags", + "documentation":"

The key-value pairs to associate with this stack set and the stacks created from it. AWS CloudFormation also propagates these tags to supported resources that are created in the stacks. You can specify a maximum number of 50 tags.

If you specify tags for this parameter, those tags replace any list of tags that are currently associated with this stack set. This means:

If you specify new tags as part of an UpdateStackSet action, AWS CloudFormation checks to see if you have the required IAM permission to tag resources. If you omit tags that are currently associated with the stack set from the list of tags you specify, AWS CloudFormation assumes that you want to remove those tags from the stack set, and checks to see if you have permission to untag resources. If you don't have the necessary permission(s), the entire UpdateStackSet action fails with an access denied error, and the stack set is not updated.

" + }, + "OperationPreferences":{ + "shape":"StackSetOperationPreferences", + "documentation":"

Preferences for how AWS CloudFormation performs this stack set operation.

" + }, + "OperationId":{ + "shape":"ClientRequestToken", + "documentation":"

The unique ID for this stack set operation.

The operation ID also functions as an idempotency token, to ensure that AWS CloudFormation performs the stack set operation only once, even if you retry the request multiple times. You might retry stack set operation requests to ensure that AWS CloudFormation successfully received them.

If you don't specify an operation ID, AWS CloudFormation generates one automatically.

Repeating this stack set operation with a new operation ID retries all stack instances whose status is OUTDATED.
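A sketch of an idempotent UpdateStackSet call that supplies its own operation ID so the request can be retried safely (client and model names assumed from the shapes above):

    import java.util.UUID;
    import software.amazon.awssdk.services.cloudformation.CloudFormationClient;
    import software.amazon.awssdk.services.cloudformation.model.UpdateStackSetRequest;
    import software.amazon.awssdk.services.cloudformation.model.UpdateStackSetResponse;

    public class UpdateStackSetExample {
        public static void main(String[] args) {
            String operationId = UUID.randomUUID().toString();  // reuse this exact value on retries
            try (CloudFormationClient cfn = CloudFormationClient.create()) {
                UpdateStackSetResponse resp = cfn.updateStackSet(UpdateStackSetRequest.builder()
                        .stackSetName("my-stack-set")           // placeholder stack set name
                        .usePreviousTemplate(true)
                        .operationId(operationId)
                        .build());
                System.out.println("Started operation " + resp.operationId());
            }
        }
    }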

", + "idempotencyToken":true + } + } + }, + "UpdateStackSetOutput":{ + "type":"structure", + "members":{ + "OperationId":{ + "shape":"ClientRequestToken", + "documentation":"

The unique ID for this stack set operation.

" + } + } + }, + "UpdateTerminationProtectionInput":{ + "type":"structure", + "required":[ + "EnableTerminationProtection", + "StackName" + ], + "members":{ + "EnableTerminationProtection":{ + "shape":"EnableTerminationProtection", + "documentation":"

Whether to enable termination protection on the specified stack.

" + }, + "StackName":{ + "shape":"StackNameOrId", + "documentation":"

The name or unique ID of the stack for which you want to set termination protection.

" + } + } + }, + "UpdateTerminationProtectionOutput":{ + "type":"structure", + "members":{ + "StackId":{ + "shape":"StackId", + "documentation":"

The unique ID of the stack.
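A sketch of turning termination protection on for a stack (same naming assumptions; the stack name is a placeholder):

    import software.amazon.awssdk.services.cloudformation.CloudFormationClient;
    import software.amazon.awssdk.services.cloudformation.model.UpdateTerminationProtectionRequest;

    public class TerminationProtectionExample {
        public static void main(String[] args) {
            try (CloudFormationClient cfn = CloudFormationClient.create()) {
                String stackId = cfn.updateTerminationProtection(
                        UpdateTerminationProtectionRequest.builder()
                                .stackName("my-stack")              // placeholder stack name or ID
                                .enableTerminationProtection(true)
                                .build())
                        .stackId();
                System.out.println("Protected stack " + stackId);
            }
        }
    }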

" + } + } + }, "Url":{"type":"string"}, "UsePreviousTemplate":{"type":"boolean"}, "UsePreviousValue":{"type":"boolean"}, diff --git a/services/cloudfront/src/main/resources/codegen-resources/service-2.json b/services/cloudfront/src/main/resources/codegen-resources/service-2.json index 1f4064e63e36..092482e1ad55 100644 --- a/services/cloudfront/src/main/resources/codegen-resources/service-2.json +++ b/services/cloudfront/src/main/resources/codegen-resources/service-2.json @@ -241,6 +241,21 @@ ], "documentation":"

Delete a distribution.

" }, + "DeleteServiceLinkedRole":{ + "name":"DeleteServiceLinkedRole2017_03_25", + "http":{ + "method":"DELETE", + "requestUri":"/2017-03-25/service-linked-role/{RoleName}", + "responseCode":204 + }, + "input":{"shape":"DeleteServiceLinkedRoleRequest"}, + "errors":[ + {"shape":"InvalidArgument"}, + {"shape":"AccessDenied"}, + {"shape":"ResourceInUse"}, + {"shape":"NoSuchResource"} + ] + }, "DeleteStreamingDistribution":{ "name":"DeleteStreamingDistribution2017_03_25", "http":{ @@ -783,7 +798,7 @@ "members":{ "Id":{ "shape":"string", - "documentation":"

The ID for the origin access identity. For example: E74FTE3AJFJ256A.

" + "documentation":"

The ID for the origin access identity, for example, E74FTE3AJFJ256A.

" }, "S3CanonicalUserId":{ "shape":"string", @@ -1302,7 +1317,7 @@ "documentation":"

A complex type that contains zero or more Lambda function associations for a cache behavior.

" } }, - "documentation":"

A complex type that describes the default cache behavior if you do not specify a CacheBehavior element or if files don't match any of the values of PathPattern in CacheBehavior elements. You must create exactly one default cache behavior.

" + "documentation":"

A complex type that describes the default cache behavior if you don't specify a CacheBehavior element or if files don't match any of the values of PathPattern in CacheBehavior elements. You must create exactly one default cache behavior.

" }, "DeleteCloudFrontOriginAccessIdentityRequest":{ "type":"structure", @@ -1342,6 +1357,17 @@ }, "documentation":"

This action deletes a web distribution. To delete a web distribution using the CloudFront API, perform the following steps.

To delete a web distribution using the CloudFront API:

  1. Disable the web distribution

  2. Submit a GET Distribution Config request to get the current configuration and the ETag header for the distribution.

  3. Update the XML document that was returned in the response to your GET Distribution Config request to change the value of Enabled to false.

  4. Submit a PUT Distribution Config request to update the configuration for your distribution. In the request body, include the XML document that you updated in Step 3. Set the value of the HTTP If-Match header to the value of the ETag header that CloudFront returned when you submitted the GET Distribution Config request in Step 2.

  5. Review the response to the PUT Distribution Config request to confirm that the distribution was successfully disabled.

  6. Submit a GET Distribution request to confirm that your changes have propagated. When propagation is complete, the value of Status is Deployed.

  7. Submit a DELETE Distribution request. Set the value of the HTTP If-Match header to the value of the ETag header that CloudFront returned when you submitted the GET Distribution Config request in Step 6.

  8. Review the response to your DELETE Distribution request to confirm that the distribution was successfully deleted.

For information about deleting a distribution using the CloudFront console, see Deleting a Distribution in the Amazon CloudFront Developer Guide.

" }, + "DeleteServiceLinkedRoleRequest":{ + "type":"structure", + "required":["RoleName"], + "members":{ + "RoleName":{ + "shape":"string", + "location":"uri", + "locationName":"RoleName" + } + } + }, "DeleteStreamingDistributionRequest":{ "type":"structure", "required":["Id"], @@ -1396,7 +1422,7 @@ }, "DomainName":{ "shape":"string", - "documentation":"

The domain name corresponding to the distribution. For example: d604721fxaaqy9.cloudfront.net.

" + "documentation":"

The domain name corresponding to the distribution, for example, d111111abcdef8.cloudfront.net.

" }, "ActiveTrustedSigners":{ "shape":"ActiveTrustedSigners", @@ -1438,7 +1464,7 @@ }, "DefaultRootObject":{ "shape":"string", - "documentation":"

The object that you want CloudFront to request from your origin (for example, index.html) when a viewer requests the root URL for your distribution (http://www.example.com) instead of an object in your distribution (http://www.example.com/product-description.html). Specifying a default root object avoids exposing the contents of your distribution.

Specify only the object name, for example, index.html. Do not add a / before the object name.

If you don't want to specify a default root object when you create a distribution, include an empty DefaultRootObject element.

To delete the default root object from an existing distribution, update the distribution configuration and include an empty DefaultRootObject element.

To replace the default root object, update the distribution configuration and specify the new object.

For more information about the default root object, see Creating a Default Root Object in the Amazon CloudFront Developer Guide.

" + "documentation":"

The object that you want CloudFront to request from your origin (for example, index.html) when a viewer requests the root URL for your distribution (http://www.example.com) instead of an object in your distribution (http://www.example.com/product-description.html). Specifying a default root object avoids exposing the contents of your distribution.

Specify only the object name, for example, index.html. Don't add a / before the object name.

If you don't want to specify a default root object when you create a distribution, include an empty DefaultRootObject element.

To delete the default root object from an existing distribution, update the distribution configuration and include an empty DefaultRootObject element.

To replace the default root object, update the distribution configuration and specify the new object.

For more information about the default root object, see Creating a Default Root Object in the Amazon CloudFront Developer Guide.

" }, "Origins":{ "shape":"Origins", @@ -1446,7 +1472,7 @@ }, "DefaultCacheBehavior":{ "shape":"DefaultCacheBehavior", - "documentation":"

A complex type that describes the default cache behavior if you do not specify a CacheBehavior element or if files don't match any of the values of PathPattern in CacheBehavior elements. You must create exactly one default cache behavior.

" + "documentation":"

A complex type that describes the default cache behavior if you don't specify a CacheBehavior element or if files don't match any of the values of PathPattern in CacheBehavior elements. You must create exactly one default cache behavior.

" }, "CacheBehaviors":{ "shape":"CacheBehaviors", @@ -1484,7 +1510,7 @@ }, "IsIPV6Enabled":{ "shape":"boolean", - "documentation":"

If you want CloudFront to respond to IPv6 DNS requests with an IPv6 address for your distribution, specify true. If you specify false, CloudFront responds to IPv6 DNS requests with the DNS response code NOERROR and with no IP addresses. This allows viewers to submit a second request, for an IPv4 address for your distribution.

In general, you should enable IPv6 if you have users on IPv6 networks who want to access your content. However, if you're using signed URLs or signed cookies to restrict access to your content, and if you're using a custom policy that includes the IpAddress parameter to restrict the IP addresses that can access your content, do not enable IPv6. If you want to restrict access to some content by IP address and not restrict access to other content (or restrict access but not by IP address), you can create two distributions. For more information, see Creating a Signed URL Using a Custom Policy in the Amazon CloudFront Developer Guide.

If you're using an Amazon Route 53 alias resource record set to route traffic to your CloudFront distribution, you need to create a second alias resource record set when both of the following are true:

For more information, see Routing Traffic to an Amazon CloudFront Web Distribution by Using Your Domain Name in the Amazon Route 53 Developer Guide.

If you created a CNAME resource record set, either with Amazon Route 53 or with another DNS service, you don't need to make any changes. A CNAME record will route traffic to your distribution regardless of the IP address format of the viewer request.

" + "documentation":"

If you want CloudFront to respond to IPv6 DNS requests with an IPv6 address for your distribution, specify true. If you specify false, CloudFront responds to IPv6 DNS requests with the DNS response code NOERROR and with no IP addresses. This allows viewers to submit a second request, for an IPv4 address for your distribution.

In general, you should enable IPv6 if you have users on IPv6 networks who want to access your content. However, if you're using signed URLs or signed cookies to restrict access to your content, and if you're using a custom policy that includes the IpAddress parameter to restrict the IP addresses that can access your content, don't enable IPv6. If you want to restrict access to some content by IP address and not restrict access to other content (or restrict access but not by IP address), you can create two distributions. For more information, see Creating a Signed URL Using a Custom Policy in the Amazon CloudFront Developer Guide.

If you're using an Amazon Route 53 alias resource record set to route traffic to your CloudFront distribution, you need to create a second alias resource record set when both of the following are true:

For more information, see Routing Traffic to an Amazon CloudFront Web Distribution by Using Your Domain Name in the Amazon Route 53 Developer Guide.

If you created a CNAME resource record set, either with Amazon Route 53 or with another DNS service, you don't need to make any changes. A CNAME record will route traffic to your distribution regardless of the IP address format of the viewer request.

" } }, "documentation":"

A distribution configuration.

" @@ -1592,7 +1618,7 @@ }, "DomainName":{ "shape":"string", - "documentation":"

The domain name that corresponds to the distribution. For example: d604721fxaaqy9.cloudfront.net.

" + "documentation":"

The domain name that corresponds to the distribution, for example, d111111abcdef8.cloudfront.net.

" }, "Aliases":{ "shape":"Aliases", @@ -1604,7 +1630,7 @@ }, "DefaultCacheBehavior":{ "shape":"DefaultCacheBehavior", - "documentation":"

A complex type that describes the default cache behavior if you do not specify a CacheBehavior element or if files don't match any of the values of PathPattern in CacheBehavior elements. You must create exactly one default cache behavior.

" + "documentation":"

A complex type that describes the default cache behavior if you don't specify a CacheBehavior element or if files don't match any of the values of PathPattern in CacheBehavior elements. You must create exactly one default cache behavior.

" }, "CacheBehaviors":{ "shape":"CacheBehaviors", @@ -1673,7 +1699,7 @@ }, "Headers":{ "shape":"Headers", - "documentation":"

A complex type that specifies the Headers, if any, that you want CloudFront to vary upon for this cache behavior.

" + "documentation":"

A complex type that specifies the Headers, if any, that you want CloudFront to base caching on for this cache behavior.

" }, "QueryStringCacheKeys":{ "shape":"QueryStringCacheKeys", @@ -1691,7 +1717,7 @@ "members":{ "RestrictionType":{ "shape":"GeoRestrictionType", - "documentation":"

The method that you want to use to restrict distribution of your content by country:

" + "documentation":"

The method that you want to use to restrict distribution of your content by country:

" }, "Quantity":{ "shape":"integer", @@ -1699,7 +1725,7 @@ }, "Items":{ "shape":"LocationList", - "documentation":"

A complex type that contains a Location element for each country in which you want CloudFront either to distribute your content (whitelist) or not distribute your content (blacklist).

The Location element is a two-letter, uppercase country code for a country that you want to include in your blacklist or whitelist. Include one Location element for each country.

CloudFront and MaxMind both use ISO 3166 country codes. For the current list of countries and the corresponding codes, see ISO 3166-1-alpha-2 code on the International Organization for Standardization website. You can also refer to the country list in the CloudFront console, which includes both country names and codes.

" + "documentation":"

A complex type that contains a Location element for each country in which you want CloudFront either to distribute your content (whitelist) or not distribute your content (blacklist).

The Location element is a two-letter, uppercase country code for a country that you want to include in your blacklist or whitelist. Include one Location element for each country.

CloudFront and MaxMind both use ISO 3166 country codes. For the current list of countries and the corresponding codes, see ISO 3166-1-alpha-2 code on the International Organization for Standardization website. You can also refer to the country list on the CloudFront console, which includes both country names and codes.

" } }, "documentation":"

A complex type that controls the countries in which your content is distributed. CloudFront determines the location of your users using MaxMind GeoIP databases.

" @@ -1938,14 +1964,14 @@ "members":{ "Quantity":{ "shape":"integer", - "documentation":"

The number of different headers that you want CloudFront to forward to the origin for this cache behavior. You can configure each cache behavior in a web distribution to do one of the following:

" + "documentation":"

The number of different headers that you want CloudFront to base caching on for this cache behavior. You can configure each cache behavior in a web distribution to do one of the following:

Regardless of which option you choose, CloudFront forwards headers to your origin based on whether the origin is an S3 bucket or a custom origin. See the following documentation:

" }, "Items":{ "shape":"HeaderList", - "documentation":"

A complex type that contains one Name element for each header that you want CloudFront to forward to the origin and to vary on for this cache behavior. If Quantity is 0, omit Items.

" + "documentation":"

A list that contains one Name element for each header that you want CloudFront to use for caching in this cache behavior. If Quantity is 0, omit Items.

" } }, - "documentation":"

A complex type that specifies the headers that you want CloudFront to forward to the origin for this cache behavior.

For the headers that you specify, CloudFront also caches separate versions of a specified object based on the header values in viewer requests. For example, suppose viewer requests for logo.jpg contain a custom Product header that has a value of either Acme or Apex, and you configure CloudFront to cache your content based on values in the Product header. CloudFront forwards the Product header to the origin and caches the response from the origin once for each header value. For more information about caching based on header values, see How CloudFront Forwards and Caches Headers in the Amazon CloudFront Developer Guide.

" + "documentation":"

A complex type that specifies the request headers, if any, that you want CloudFront to base caching on for this cache behavior.

For the headers that you specify, CloudFront caches separate versions of a specified object based on the header values in viewer requests. For example, suppose viewer requests for logo.jpg contain a custom product header that has a value of either acme or apex, and you configure CloudFront to cache your content based on values in the product header. CloudFront forwards the product header to the origin and caches the response from the origin once for each header value. For more information about caching based on header values, see How CloudFront Forwards and Caches Headers in the Amazon CloudFront Developer Guide.
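As a sketch of the Headers shape described above, the following builds a header list that makes CloudFront cache on a custom product header (AWS SDK for Java 2.x; the builder names are assumed from the Quantity and Items members, and "product" is a hypothetical header).

```java
import software.amazon.awssdk.services.cloudfront.model.Headers;

public final class HeaderCachingExample {
    // Cache a separate version of each object per value of the (hypothetical) "product" header.
    static Headers cacheOnProductHeader() {
        return Headers.builder()
                .quantity(1)        // must equal the number of Items
                .items("product")
                .build();
    }
}
```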

" }, "HttpVersion":{ "type":"string", @@ -1968,7 +1994,7 @@ "members":{ "Message":{"shape":"string"} }, - "documentation":"

The value of Quantity and the size of Items do not match.

", + "documentation":"

The value of Quantity and the size of Items don't match.

", "error":{"httpStatusCode":400}, "exception":true }, @@ -2310,11 +2336,11 @@ "members":{ "LambdaFunctionARN":{ "shape":"string", - "documentation":"

The ARN of the Lambda function.

" + "documentation":"

The ARN of the Lambda function. You must specify the ARN of a function version; you can't specify a Lambda alias or $LATEST.

" }, "EventType":{ "shape":"EventType", - "documentation":"

Specifies the event type that triggers a Lambda function invocation. Valid values are:

" + "documentation":"

Specifies the event type that triggers a Lambda function invocation. You can specify the following values:

" } }, "documentation":"

A complex type that contains a Lambda function association.
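A hedged sketch of such an association with the AWS SDK for Java 2.x follows; the ARN is a placeholder, and the builder and enum names are assumed to mirror the LambdaFunctionARN and EventType members above.

```java
import software.amazon.awssdk.services.cloudfront.model.EventType;
import software.amazon.awssdk.services.cloudfront.model.LambdaFunctionAssociation;

public final class LambdaAssociationExample {
    // Trigger a Lambda function on viewer requests. The ARN must name a function version;
    // aliases and $LATEST are not allowed.
    static LambdaFunctionAssociation viewerRequestTrigger() {
        return LambdaFunctionAssociation.builder()
                .lambdaFunctionARN("arn:aws:lambda:us-east-1:123456789012:function:example-fn:3") // placeholder
                .eventType(EventType.VIEWER_REQUEST)
                .build();
    }
}
```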

" @@ -2543,11 +2569,11 @@ "members":{ "Enabled":{ "shape":"boolean", - "documentation":"

Specifies whether you want CloudFront to save access logs to an Amazon S3 bucket. If you do not want to enable logging when you create a distribution or if you want to disable logging for an existing distribution, specify false for Enabled, and specify empty Bucket and Prefix elements. If you specify false for Enabled but you specify values for Bucket, prefix, and IncludeCookies, the values are automatically deleted.

" + "documentation":"

Specifies whether you want CloudFront to save access logs to an Amazon S3 bucket. If you don't want to enable logging when you create a distribution or if you want to disable logging for an existing distribution, specify false for Enabled, and specify empty Bucket and Prefix elements. If you specify false for Enabled but you specify values for Bucket, prefix, and IncludeCookies, the values are automatically deleted.

" }, "IncludeCookies":{ "shape":"boolean", - "documentation":"

Specifies whether you want CloudFront to include cookies in access logs, specify true for IncludeCookies. If you choose to include cookies in logs, CloudFront logs all cookies regardless of how you configure the cache behaviors for this distribution. If you do not want to include cookies when you create a distribution or if you want to disable include cookies for an existing distribution, specify false for IncludeCookies.

" + "documentation":"

Specifies whether you want CloudFront to include cookies in access logs; to include them, specify true for IncludeCookies. If you choose to include cookies in logs, CloudFront logs all cookies regardless of how you configure the cache behaviors for this distribution. If you don't want to include cookies when you create a distribution, or if you want to disable cookie logging for an existing distribution, specify false for IncludeCookies.

" }, "Bucket":{ "shape":"string", @@ -2555,7 +2581,7 @@ }, "Prefix":{ "shape":"string", - "documentation":"

An optional string that you want CloudFront to prefix to the access log filenames for this distribution, for example, myprefix/. If you want to enable logging, but you do not want to specify a prefix, you still must include an empty Prefix element in the Logging element.

" + "documentation":"

An optional string that you want CloudFront to prefix to the access log filenames for this distribution, for example, myprefix/. If you want to enable logging, but you don't want to specify a prefix, you still must include an empty Prefix element in the Logging element.

" } }, "documentation":"

A complex type that controls whether access logs are written for the distribution.
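A minimal sketch of this logging configuration with the AWS SDK for Java 2.x, assuming the generated LoggingConfig builder mirrors the Enabled, IncludeCookies, Bucket, and Prefix members; the bucket name is a placeholder.

```java
import software.amazon.awssdk.services.cloudfront.model.LoggingConfig;

public final class AccessLogsExample {
    static LoggingConfig enableAccessLogs() {
        // To disable logging instead, set enabled(false) and supply empty Bucket and Prefix values.
        return LoggingConfig.builder()
                .enabled(true)
                .includeCookies(false)
                .bucket("example-logs-bucket.s3.amazonaws.com")  // placeholder bucket
                .prefix("myprefix/")
                .build();
    }
}
```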

" @@ -2583,7 +2609,10 @@ "type":"string", "enum":[ "SSLv3", - "TLSv1" + "TLSv1", + "TLSv1_2016", + "TLSv1.1_2016", + "TLSv1.2_2018" ] }, "MissingBody":{ @@ -2591,7 +2620,7 @@ "members":{ "Message":{"shape":"string"} }, - "documentation":"

This operation requires a body. Ensure that the body is present and the Content-Type header is set.

", + "documentation":"

This operation requires a body. Ensure that the body is present and the Content-Type header is set.

", "error":{"httpStatusCode":400}, "exception":true }, @@ -2661,7 +2690,7 @@ }, "DomainName":{ "shape":"string", - "documentation":"

Amazon S3 origins: The DNS name of the Amazon S3 bucket from which you want CloudFront to get objects for this origin, for example, myawsbucket.s3.amazonaws.com.

Constraints for Amazon S3 origins:

Custom Origins: The DNS domain name for the HTTP server from which you want CloudFront to get objects for this origin, for example, www.example.com.

Constraints for custom origins:

" + "documentation":"

Amazon S3 origins: The DNS name of the Amazon S3 bucket from which you want CloudFront to get objects for this origin, for example, myawsbucket.s3.amazonaws.com.

Constraints for Amazon S3 origins:

Custom Origins: The DNS domain name for the HTTP server from which you want CloudFront to get objects for this origin, for example, www.example.com.

Constraints for custom origins:

" }, "OriginPath":{ "shape":"string", @@ -2820,6 +2849,14 @@ "type":"string", "pattern":"arn:aws:cloudfront::[0-9]+:.*" }, + "ResourceInUse":{ + "type":"structure", + "members":{ + "Message":{"shape":"string"} + }, + "error":{"httpStatusCode":409}, + "exception":true + }, "Restrictions":{ "type":"structure", "required":["GeoRestriction"], @@ -2927,7 +2964,7 @@ }, "DomainName":{ "shape":"string", - "documentation":"

The domain name that corresponds to the streaming distribution. For example: s5c39gqb8ow64r.cloudfront.net.

" + "documentation":"

The domain name that corresponds to the streaming distribution, for example, s5c39gqb8ow64r.cloudfront.net.

" }, "ActiveTrustedSigners":{ "shape":"ActiveTrustedSigners", @@ -3073,7 +3110,7 @@ "members":{ "Id":{ "shape":"string", - "documentation":"

The identifier for the distribution. For example: EDFDVBD632BHDS5.

" + "documentation":"

The identifier for the distribution, for example, EDFDVBD632BHDS5.

" }, "ARN":{ "shape":"string", @@ -3089,7 +3126,7 @@ }, "DomainName":{ "shape":"string", - "documentation":"

The domain name corresponding to the distribution. For example: d604721fxaaqy9.cloudfront.net.

" + "documentation":"

The domain name corresponding to the distribution, for example, d111111abcdef8.cloudfront.net.

" }, "S3Origin":{ "shape":"S3Origin", @@ -3132,7 +3169,7 @@ "members":{ "Enabled":{ "shape":"boolean", - "documentation":"

Specifies whether you want CloudFront to save access logs to an Amazon S3 bucket. If you do not want to enable logging when you create a streaming distribution or if you want to disable logging for an existing streaming distribution, specify false for Enabled, and specify empty Bucket and Prefix elements. If you specify false for Enabled but you specify values for Bucket and Prefix, the values are automatically deleted.

" + "documentation":"

Specifies whether you want CloudFront to save access logs to an Amazon S3 bucket. If you don't want to enable logging when you create a streaming distribution or if you want to disable logging for an existing streaming distribution, specify false for Enabled, and specify empty Bucket and Prefix elements. If you specify false for Enabled but you specify values for Bucket and Prefix, the values are automatically deleted.

" }, "Bucket":{ "shape":"string", @@ -3140,7 +3177,7 @@ }, "Prefix":{ "shape":"string", - "documentation":"

An optional string that you want CloudFront to prefix to the access log filenames for this streaming distribution, for example, myprefix/. If you want to enable logging, but you do not want to specify a prefix, you still must include an empty Prefix element in the Logging element.

" + "documentation":"

An optional string that you want CloudFront to prefix to the access log filenames for this streaming distribution, for example, myprefix/. If you want to enable logging, but you don't want to specify a prefix, you still must include an empty Prefix element in the Logging element.

" } }, "documentation":"

A complex type that controls whether access logs are written for this streaming distribution.

" @@ -3375,7 +3412,7 @@ "members":{ "Message":{"shape":"string"} }, - "documentation":"

One or more of your trusted signers do not exist.

", + "documentation":"

One or more of your trusted signers don't exist.

", "error":{"httpStatusCode":400}, "exception":true }, @@ -3565,29 +3602,38 @@ "ViewerCertificate":{ "type":"structure", "members":{ - "CloudFrontDefaultCertificate":{"shape":"boolean"}, - "IAMCertificateId":{"shape":"string"}, - "ACMCertificateArn":{"shape":"string"}, + "CloudFrontDefaultCertificate":{ + "shape":"boolean", + "documentation":"

For information about how and when to use CloudFrontDefaultCertificate, see ViewerCertificate.

" + }, + "IAMCertificateId":{ + "shape":"string", + "documentation":"

For information about how and when to use IAMCertificateId, see ViewerCertificate.

" + }, + "ACMCertificateArn":{ + "shape":"string", + "documentation":"

For information about how and when to use ACMCertificateArn, see ViewerCertificate.

" + }, "SSLSupportMethod":{ "shape":"SSLSupportMethod", - "documentation":"

If you specify a value for ACMCertificateArn or for IAMCertificateId, you must also specify how you want CloudFront to serve HTTPS requests: using a method that works for all clients or one that works for most clients:

Do not specify a value for SSLSupportMethod if you specified <CloudFrontDefaultCertificate>true<CloudFrontDefaultCertificate>.

For more information, see Using Alternate Domain Names and HTTPS in the Amazon CloudFront Developer Guide.

" + "documentation":"

If you specify a value for ViewerCertificate$ACMCertificateArn or for ViewerCertificate$IAMCertificateId, you must also specify how you want CloudFront to serve HTTPS requests: using a method that works for all clients or one that works for most clients:

Don't specify a value for SSLSupportMethod if you specified <CloudFrontDefaultCertificate>true</CloudFrontDefaultCertificate>.

For more information, see Using Alternate Domain Names and HTTPS in the Amazon CloudFront Developer Guide.

" }, "MinimumProtocolVersion":{ "shape":"MinimumProtocolVersion", - "documentation":"

Specify the minimum version of the SSL/TLS protocol that you want CloudFront to use for HTTPS connections between viewers and CloudFront: SSLv3 or TLSv1. CloudFront serves your objects only to viewers that support SSL/TLS version that you specify and later versions. The TLSv1 protocol is more secure, so we recommend that you specify SSLv3 only if your users are using browsers or devices that don't support TLSv1. Note the following:

" + "documentation":"

Specify the security policy that you want CloudFront to use for HTTPS connections. A security policy determines two settings:

On the CloudFront console, this setting is called Security policy.

We recommend that you specify TLSv1.1_2016 unless your users are using browsers or devices that do not support TLSv1.1 or later.

When both of the following are true, you must specify TLSv1 or later for the security policy:

If you specify true for CloudFrontDefaultCertificate, CloudFront automatically sets the security policy to TLSv1 regardless of the value that you specify for MinimumProtocolVersion.

For information about the relationship between the security policy that you choose and the protocols and ciphers that CloudFront uses to communicate with viewers, see Supported SSL/TLS Protocols and Ciphers for Communication Between Viewers and CloudFront in the Amazon CloudFront Developer Guide.

" }, "Certificate":{ "shape":"string", - "documentation":"

Include one of these values to specify the following:

You must specify one (and only one) of the three values. Do not specify false for CloudFrontDefaultCertificate.

If you want viewers to use HTTP to request your objects: Specify the following value:

<CloudFrontDefaultCertificate>true<CloudFrontDefaultCertificate>

In addition, specify allow-all for ViewerProtocolPolicy for all of your cache behaviors.

If you want viewers to use HTTPS to request your objects: Choose the type of certificate that you want to use based on whether you're using an alternate domain name for your objects or the CloudFront domain name:

", + "documentation":"

This field has been deprecated. Use one of the following fields instead:

", "deprecated":true }, "CertificateSource":{ "shape":"CertificateSource", - "documentation":"

This field is deprecated. You can use one of the following: [ACMCertificateArn, IAMCertificateId, or CloudFrontDefaultCertificate].

", + "documentation":"

This field has been deprecated. Use one of the following fields instead:

", "deprecated":true } }, - "documentation":"

A complex type that specifies the following:

For more information, see Using an HTTPS Connection to Access Your Objects in the Amazon CloudFront Developer Guide.

" + "documentation":"

A complex type that specifies the following:

You must specify only one of the following values:

Don't specify false for CloudFrontDefaultCertificate.

If you want viewers to use HTTP instead of HTTPS to request your objects: Specify the following value:

<CloudFrontDefaultCertificate>true</CloudFrontDefaultCertificate>

In addition, specify allow-all for ViewerProtocolPolicy for all of your cache behaviors.

If you want viewers to use HTTPS to request your objects: Choose the type of certificate that you want to use based on whether you're using an alternate domain name for your objects or the CloudFront domain name:

If you want viewers to use HTTPS, you must also specify one of the following values in your cache behaviors:

You can also optionally require that CloudFront use HTTPS to communicate with your origin by specifying one of the following values for the applicable origins:

For more information, see Using Alternate Domain Names and HTTPS in the Amazon CloudFront Developer Guide.
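To make the certificate options concrete, here is a hedged AWS SDK for Java 2.x sketch that uses an ACM certificate with SNI and the TLSv1.1_2016 security policy recommended above. The ARN is a placeholder, the builder names are assumed from the members in this model, and MinimumProtocolVersion.fromValue is used to avoid guessing the generated enum constant name.

```java
import software.amazon.awssdk.services.cloudfront.model.MinimumProtocolVersion;
import software.amazon.awssdk.services.cloudfront.model.SSLSupportMethod;
import software.amazon.awssdk.services.cloudfront.model.ViewerCertificate;

public final class ViewerCertificateExample {
    // HTTPS for an alternate domain name: ACM certificate, SNI, TLSv1.1_2016 or later.
    static ViewerCertificate acmCertificate() {
        return ViewerCertificate.builder()
                .acmCertificateArn("arn:aws:acm:us-east-1:123456789012:certificate/example-id") // placeholder
                .sslSupportMethod(SSLSupportMethod.SNI_ONLY)
                .minimumProtocolVersion(MinimumProtocolVersion.fromValue("TLSv1.1_2016"))
                .build();
    }
}
```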

" }, "ViewerProtocolPolicy":{ "type":"string", @@ -3603,5 +3649,5 @@ "string":{"type":"string"}, "timestamp":{"type":"timestamp"} }, - "documentation":"Amazon CloudFront

This is the Amazon CloudFront API Reference. This guide is for developers who need detailed information about the CloudFront API actions, data types, and errors. For detailed information about CloudFront features and their associated API calls, see the Amazon CloudFront Developer Guide.

" + "documentation":"Amazon CloudFront

This is the Amazon CloudFront API Reference. This guide is for developers who need detailed information about CloudFront API actions, data types, and errors. For detailed information about CloudFront features, see the Amazon CloudFront Developer Guide.

" } diff --git a/services/cloudhsm/src/main/resources/codegen-resources/service-2.json b/services/cloudhsm/src/main/resources/codegen-resources/service-2.json index b854e69a50e3..139daddb6746 100644 --- a/services/cloudhsm/src/main/resources/codegen-resources/service-2.json +++ b/services/cloudhsm/src/main/resources/codegen-resources/service-2.json @@ -7,6 +7,7 @@ "protocol":"json", "serviceAbbreviation":"CloudHSM", "serviceFullName":"Amazon CloudHSM", + "serviceId":"CloudHSM", "signatureVersion":"v4", "targetPrefix":"CloudHsmFrontendService", "uid":"cloudhsm-2014-05-30" @@ -25,7 +26,7 @@ {"shape":"CloudHsmInternalException"}, {"shape":"InvalidRequestException"} ], - "documentation":"

Adds or overwrites one or more tags for the specified AWS CloudHSM resource.

Each tag consists of a key and a value. Tag keys must be unique to each resource.

" + "documentation":"

This is documentation for AWS CloudHSM Classic. For more information, see AWS CloudHSM Classic FAQs, the AWS CloudHSM Classic User Guide, and the AWS CloudHSM Classic API Reference.

For information about the current version of AWS CloudHSM, see AWS CloudHSM, the AWS CloudHSM User Guide, and the AWS CloudHSM API Reference.

Adds or overwrites one or more tags for the specified AWS CloudHSM resource.

Each tag consists of a key and a value. Tag keys must be unique to each resource.

" }, "CreateHapg":{ "name":"CreateHapg", @@ -40,7 +41,7 @@ {"shape":"CloudHsmInternalException"}, {"shape":"InvalidRequestException"} ], - "documentation":"

Creates a high-availability partition group. A high-availability partition group is a group of partitions that spans multiple physical HSMs.

" + "documentation":"

This is documentation for AWS CloudHSM Classic. For more information, see AWS CloudHSM Classic FAQs, the AWS CloudHSM Classic User Guide, and the AWS CloudHSM Classic API Reference.

For information about the current version of AWS CloudHSM, see AWS CloudHSM, the AWS CloudHSM User Guide, and the AWS CloudHSM API Reference.

Creates a high-availability partition group. A high-availability partition group is a group of partitions that spans multiple physical HSMs.

" }, "CreateHsm":{ "name":"CreateHsm", @@ -55,7 +56,7 @@ {"shape":"CloudHsmInternalException"}, {"shape":"InvalidRequestException"} ], - "documentation":"

Creates an uninitialized HSM instance.

There is an upfront fee charged for each HSM instance that you create with the CreateHsm operation. If you accidentally provision an HSM and want to request a refund, delete the instance using the DeleteHsm operation, go to the AWS Support Center, create a new case, and select Account and Billing Support.

It can take up to 20 minutes to create and provision an HSM. You can monitor the status of the HSM with the DescribeHsm operation. The HSM is ready to be initialized when the status changes to RUNNING.

" + "documentation":"

This is documentation for AWS CloudHSM Classic. For more information, see AWS CloudHSM Classic FAQs, the AWS CloudHSM Classic User Guide, and the AWS CloudHSM Classic API Reference.

For information about the current version of AWS CloudHSM, see AWS CloudHSM, the AWS CloudHSM User Guide, and the AWS CloudHSM API Reference.

Creates an uninitialized HSM instance.

There is an upfront fee charged for each HSM instance that you create with the CreateHsm operation. If you accidentally provision an HSM and want to request a refund, delete the instance using the DeleteHsm operation, go to the AWS Support Center, create a new case, and select Account and Billing Support.

It can take up to 20 minutes to create and provision an HSM. You can monitor the status of the HSM with the DescribeHsm operation. The HSM is ready to be initialized when the status changes to RUNNING.
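A hedged sketch of calling CreateHsm with the AWS SDK for Java 2.x. The client and member names (CloudHsmClient, subnetId, sshKey, iamRoleArn, subscriptionType) are assumed from this model, and all identifiers are placeholders.

```java
import software.amazon.awssdk.services.cloudhsm.CloudHsmClient;
import software.amazon.awssdk.services.cloudhsm.model.CreateHsmRequest;
import software.amazon.awssdk.services.cloudhsm.model.CreateHsmResponse;
import software.amazon.awssdk.services.cloudhsm.model.SubscriptionType;

public final class CreateHsmExample {
    public static void main(String[] args) {
        try (CloudHsmClient hsm = CloudHsmClient.create()) {
            CreateHsmResponse response = hsm.createHsm(CreateHsmRequest.builder()
                    .subnetId("subnet-0123456789abcdef0")                   // placeholder
                    .sshKey("ssh-rsa AAAA... user@host")                    // placeholder public key
                    .iamRoleArn("arn:aws:iam::123456789012:role/hsm-role")  // placeholder
                    .subscriptionType(SubscriptionType.PRODUCTION)
                    .build());
            // Poll DescribeHsm until the status reaches RUNNING before initializing the HSM.
            System.out.println("New HSM ARN: " + response.hsmArn());
        }
    }
}
```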

" }, "CreateLunaClient":{ "name":"CreateLunaClient", @@ -70,7 +71,7 @@ {"shape":"CloudHsmInternalException"}, {"shape":"InvalidRequestException"} ], - "documentation":"

Creates an HSM client.

" + "documentation":"

This is documentation for AWS CloudHSM Classic. For more information, see AWS CloudHSM Classic FAQs, the AWS CloudHSM Classic User Guide, and the AWS CloudHSM Classic API Reference.

For information about the current version of AWS CloudHSM, see AWS CloudHSM, the AWS CloudHSM User Guide, and the AWS CloudHSM API Reference.

Creates an HSM client.

" }, "DeleteHapg":{ "name":"DeleteHapg", @@ -85,7 +86,7 @@ {"shape":"CloudHsmInternalException"}, {"shape":"InvalidRequestException"} ], - "documentation":"

Deletes a high-availability partition group.

" + "documentation":"

This is documentation for AWS CloudHSM Classic. For more information, see AWS CloudHSM Classic FAQs, the AWS CloudHSM Classic User Guide, and the AWS CloudHSM Classic API Reference.

For information about the current version of AWS CloudHSM, see AWS CloudHSM, the AWS CloudHSM User Guide, and the AWS CloudHSM API Reference.

Deletes a high-availability partition group.

" }, "DeleteHsm":{ "name":"DeleteHsm", @@ -100,7 +101,7 @@ {"shape":"CloudHsmInternalException"}, {"shape":"InvalidRequestException"} ], - "documentation":"

Deletes an HSM. After completion, this operation cannot be undone and your key material cannot be recovered.

" + "documentation":"

This is documentation for AWS CloudHSM Classic. For more information, see AWS CloudHSM Classic FAQs, the AWS CloudHSM Classic User Guide, and the AWS CloudHSM Classic API Reference.

For information about the current version of AWS CloudHSM, see AWS CloudHSM, the AWS CloudHSM User Guide, and the AWS CloudHSM API Reference.

Deletes an HSM. After completion, this operation cannot be undone and your key material cannot be recovered.

" }, "DeleteLunaClient":{ "name":"DeleteLunaClient", @@ -115,7 +116,7 @@ {"shape":"CloudHsmInternalException"}, {"shape":"InvalidRequestException"} ], - "documentation":"

Deletes a client.

" + "documentation":"

This is documentation for AWS CloudHSM Classic. For more information, see AWS CloudHSM Classic FAQs, the AWS CloudHSM Classic User Guide, and the AWS CloudHSM Classic API Reference.

For information about the current version of AWS CloudHSM, see AWS CloudHSM, the AWS CloudHSM User Guide, and the AWS CloudHSM API Reference.

Deletes a client.

" }, "DescribeHapg":{ "name":"DescribeHapg", @@ -130,7 +131,7 @@ {"shape":"CloudHsmInternalException"}, {"shape":"InvalidRequestException"} ], - "documentation":"

Retrieves information about a high-availability partition group.

" + "documentation":"

This is documentation for AWS CloudHSM Classic. For more information, see AWS CloudHSM Classic FAQs, the AWS CloudHSM Classic User Guide, and the AWS CloudHSM Classic API Reference.

For information about the current version of AWS CloudHSM, see AWS CloudHSM, the AWS CloudHSM User Guide, and the AWS CloudHSM API Reference.

Retrieves information about a high-availability partition group.

" }, "DescribeHsm":{ "name":"DescribeHsm", @@ -145,7 +146,7 @@ {"shape":"CloudHsmInternalException"}, {"shape":"InvalidRequestException"} ], - "documentation":"

Retrieves information about an HSM. You can identify the HSM by its ARN or its serial number.

" + "documentation":"

This is documentation for AWS CloudHSM Classic. For more information, see AWS CloudHSM Classic FAQs, the AWS CloudHSM Classic User Guide, and the AWS CloudHSM Classic API Reference.

For information about the current version of AWS CloudHSM, see AWS CloudHSM, the AWS CloudHSM User Guide, and the AWS CloudHSM API Reference.

Retrieves information about an HSM. You can identify the HSM by its ARN or its serial number.

" }, "DescribeLunaClient":{ "name":"DescribeLunaClient", @@ -160,7 +161,7 @@ {"shape":"CloudHsmInternalException"}, {"shape":"InvalidRequestException"} ], - "documentation":"

Retrieves information about an HSM client.

" + "documentation":"

This is documentation for AWS CloudHSM Classic. For more information, see AWS CloudHSM Classic FAQs, the AWS CloudHSM Classic User Guide, and the AWS CloudHSM Classic API Reference.

For information about the current version of AWS CloudHSM, see AWS CloudHSM, the AWS CloudHSM User Guide, and the AWS CloudHSM API Reference.

Retrieves information about an HSM client.

" }, "GetConfig":{ "name":"GetConfig", @@ -175,7 +176,7 @@ {"shape":"CloudHsmInternalException"}, {"shape":"InvalidRequestException"} ], - "documentation":"

Gets the configuration files necessary to connect to all high availability partition groups the client is associated with.

" + "documentation":"

This is documentation for AWS CloudHSM Classic. For more information, see AWS CloudHSM Classic FAQs, the AWS CloudHSM Classic User Guide, and the AWS CloudHSM Classic API Reference.

For information about the current version of AWS CloudHSM, see AWS CloudHSM, the AWS CloudHSM User Guide, and the AWS CloudHSM API Reference.

Gets the configuration files necessary to connect to all high availability partition groups the client is associated with.

" }, "ListAvailableZones":{ "name":"ListAvailableZones", @@ -190,7 +191,7 @@ {"shape":"CloudHsmInternalException"}, {"shape":"InvalidRequestException"} ], - "documentation":"

Lists the Availability Zones that have available AWS CloudHSM capacity.

" + "documentation":"

This is documentation for AWS CloudHSM Classic. For more information, see AWS CloudHSM Classic FAQs, the AWS CloudHSM Classic User Guide, and the AWS CloudHSM Classic API Reference.

For information about the current version of AWS CloudHSM, see AWS CloudHSM, the AWS CloudHSM User Guide, and the AWS CloudHSM API Reference.

Lists the Availability Zones that have available AWS CloudHSM capacity.

" }, "ListHapgs":{ "name":"ListHapgs", @@ -205,7 +206,7 @@ {"shape":"CloudHsmInternalException"}, {"shape":"InvalidRequestException"} ], - "documentation":"

Lists the high-availability partition groups for the account.

This operation supports pagination with the use of the NextToken member. If more results are available, the NextToken member of the response contains a token that you pass in the next call to ListHapgs to retrieve the next set of items.

" + "documentation":"

This is documentation for AWS CloudHSM Classic. For more information, see AWS CloudHSM Classic FAQs, the AWS CloudHSM Classic User Guide, and the AWS CloudHSM Classic API Reference.

For information about the current version of AWS CloudHSM, see AWS CloudHSM, the AWS CloudHSM User Guide, and the AWS CloudHSM API Reference.

Lists the high-availability partition groups for the account.

This operation supports pagination with the use of the NextToken member. If more results are available, the NextToken member of the response contains a token that you pass in the next call to ListHapgs to retrieve the next set of items.
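The NextToken handling described here is the usual manual-pagination loop; a sketch with the AWS SDK for Java 2.x follows. The response accessor names (hapgList, nextToken) are assumed from the model and should be treated as illustrative.

```java
import software.amazon.awssdk.services.cloudhsm.CloudHsmClient;
import software.amazon.awssdk.services.cloudhsm.model.ListHapgsRequest;
import software.amazon.awssdk.services.cloudhsm.model.ListHapgsResponse;

public final class ListHapgsExample {
    public static void main(String[] args) {
        try (CloudHsmClient hsm = CloudHsmClient.create()) {
            String nextToken = null;   // null on the first call
            do {
                ListHapgsResponse page = hsm.listHapgs(
                        ListHapgsRequest.builder().nextToken(nextToken).build());
                page.hapgList().forEach(System.out::println);  // ARNs of the partition groups
                nextToken = page.nextToken();                  // null when there are no more results
            } while (nextToken != null);
        }
    }
}
```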

" }, "ListHsms":{ "name":"ListHsms", @@ -220,7 +221,7 @@ {"shape":"CloudHsmInternalException"}, {"shape":"InvalidRequestException"} ], - "documentation":"

Retrieves the identifiers of all of the HSMs provisioned for the current customer.

This operation supports pagination with the use of the NextToken member. If more results are available, the NextToken member of the response contains a token that you pass in the next call to ListHsms to retrieve the next set of items.

" + "documentation":"

This is documentation for AWS CloudHSM Classic. For more information, see AWS CloudHSM Classic FAQs, the AWS CloudHSM Classic User Guide, and the AWS CloudHSM Classic API Reference.

For information about the current version of AWS CloudHSM, see AWS CloudHSM, the AWS CloudHSM User Guide, and the AWS CloudHSM API Reference.

Retrieves the identifiers of all of the HSMs provisioned for the current customer.

This operation supports pagination with the use of the NextToken member. If more results are available, the NextToken member of the response contains a token that you pass in the next call to ListHsms to retrieve the next set of items.

" }, "ListLunaClients":{ "name":"ListLunaClients", @@ -235,7 +236,7 @@ {"shape":"CloudHsmInternalException"}, {"shape":"InvalidRequestException"} ], - "documentation":"

Lists all of the clients.

This operation supports pagination with the use of the NextToken member. If more results are available, the NextToken member of the response contains a token that you pass in the next call to ListLunaClients to retrieve the next set of items.

" + "documentation":"

This is documentation for AWS CloudHSM Classic. For more information, see AWS CloudHSM Classic FAQs, the AWS CloudHSM Classic User Guide, and the AWS CloudHSM Classic API Reference.

For information about the current version of AWS CloudHSM, see AWS CloudHSM, the AWS CloudHSM User Guide, and the AWS CloudHSM API Reference.

Lists all of the clients.

This operation supports pagination with the use of the NextToken member. If more results are available, the NextToken member of the response contains a token that you pass in the next call to ListLunaClients to retrieve the next set of items.

" }, "ListTagsForResource":{ "name":"ListTagsForResource", @@ -250,7 +251,7 @@ {"shape":"CloudHsmInternalException"}, {"shape":"InvalidRequestException"} ], - "documentation":"

Returns a list of all tags for the specified AWS CloudHSM resource.

" + "documentation":"

This is documentation for AWS CloudHSM Classic. For more information, see AWS CloudHSM Classic FAQs, the AWS CloudHSM Classic User Guide, and the AWS CloudHSM Classic API Reference.

For information about the current version of AWS CloudHSM, see AWS CloudHSM, the AWS CloudHSM User Guide, and the AWS CloudHSM API Reference.

Returns a list of all tags for the specified AWS CloudHSM resource.

" }, "ModifyHapg":{ "name":"ModifyHapg", @@ -265,7 +266,7 @@ {"shape":"CloudHsmInternalException"}, {"shape":"InvalidRequestException"} ], - "documentation":"

Modifies an existing high-availability partition group.

" + "documentation":"

This is documentation for AWS CloudHSM Classic. For more information, see AWS CloudHSM Classic FAQs, the AWS CloudHSM Classic User Guide, and the AWS CloudHSM Classic API Reference.

For information about the current version of AWS CloudHSM, see AWS CloudHSM, the AWS CloudHSM User Guide, and the AWS CloudHSM API Reference.

Modifies an existing high-availability partition group.

" }, "ModifyHsm":{ "name":"ModifyHsm", @@ -280,7 +281,7 @@ {"shape":"CloudHsmInternalException"}, {"shape":"InvalidRequestException"} ], - "documentation":"

Modifies an HSM.

This operation can result in the HSM being offline for up to 15 minutes while the AWS CloudHSM service is reconfigured. If you are modifying a production HSM, you should ensure that your AWS CloudHSM service is configured for high availability, and consider executing this operation during a maintenance window.

" + "documentation":"

This is documentation for AWS CloudHSM Classic. For more information, see AWS CloudHSM Classic FAQs, the AWS CloudHSM Classic User Guide, and the AWS CloudHSM Classic API Reference.

For information about the current version of AWS CloudHSM, see AWS CloudHSM, the AWS CloudHSM User Guide, and the AWS CloudHSM API Reference.

Modifies an HSM.

This operation can result in the HSM being offline for up to 15 minutes while the AWS CloudHSM service is reconfigured. If you are modifying a production HSM, you should ensure that your AWS CloudHSM service is configured for high availability, and consider executing this operation during a maintenance window.

" }, "ModifyLunaClient":{ "name":"ModifyLunaClient", @@ -293,7 +294,7 @@ "errors":[ {"shape":"CloudHsmServiceException"} ], - "documentation":"

Modifies the certificate used by the client.

This action can potentially start a workflow to install the new certificate on the client's HSMs.

" + "documentation":"

This is documentation for AWS CloudHSM Classic. For more information, see AWS CloudHSM Classic FAQs, the AWS CloudHSM Classic User Guide, and the AWS CloudHSM Classic API Reference.

For information about the current version of AWS CloudHSM, see AWS CloudHSM, the AWS CloudHSM User Guide, and the AWS CloudHSM API Reference.

Modifies the certificate used by the client.

This action can potentially start a workflow to install the new certificate on the client's HSMs.

" }, "RemoveTagsFromResource":{ "name":"RemoveTagsFromResource", @@ -308,7 +309,7 @@ {"shape":"CloudHsmInternalException"}, {"shape":"InvalidRequestException"} ], - "documentation":"

Removes one or more tags from the specified AWS CloudHSM resource.

To remove a tag, specify only the tag key to remove (not the value). To overwrite the value for an existing tag, use AddTagsToResource.

" + "documentation":"

This is documentation for AWS CloudHSM Classic. For more information, see AWS CloudHSM Classic FAQs, the AWS CloudHSM Classic User Guide, and the AWS CloudHSM Classic API Reference.

For information about the current version of AWS CloudHSM, see AWS CloudHSM, the AWS CloudHSM User Guide, and the AWS CloudHSM API Reference.

Removes one or more tags from the specified AWS CloudHSM resource.

To remove a tag, specify only the tag key to remove (not the value). To overwrite the value for an existing tag, use AddTagsToResource.

" } }, "shapes":{ @@ -464,7 +465,7 @@ }, "ExternalId":{ "shape":"ExternalId", - "documentation":"

The external ID from IamRoleArn, if present.

", + "documentation":"

The external ID from IamRoleArn, if present.

", "locationName":"ExternalId" }, "SubscriptionType":{ @@ -482,7 +483,7 @@ "locationName":"SyslogIp" } }, - "documentation":"

Contains the inputs for the CreateHsm operation.

", + "documentation":"

Contains the inputs for the CreateHsm operation.

", "locationName":"CreateHsmRequest" }, "CreateHsmResponse":{ @@ -493,7 +494,7 @@ "documentation":"

The ARN of the HSM.

" } }, - "documentation":"

Contains the output of the CreateHsm operation.

" + "documentation":"

Contains the output of the CreateHsm operation.

" }, "CreateLunaClientRequest":{ "type":"structure", @@ -608,9 +609,18 @@ "shape":"String", "documentation":"

The serial number of the high-availability partition group.

" }, - "HsmsLastActionFailed":{"shape":"HsmList"}, - "HsmsPendingDeletion":{"shape":"HsmList"}, - "HsmsPendingRegistration":{"shape":"HsmList"}, + "HsmsLastActionFailed":{ + "shape":"HsmList", + "documentation":"

" + }, + "HsmsPendingDeletion":{ + "shape":"HsmList", + "documentation":"

" + }, + "HsmsPendingRegistration":{ + "shape":"HsmList", + "documentation":"

" + }, "Label":{ "shape":"Label", "documentation":"

The label for the high-availability partition group.

" @@ -635,14 +645,14 @@ "members":{ "HsmArn":{ "shape":"HsmArn", - "documentation":"

The ARN of the HSM. Either the HsmArn or the SerialNumber parameter must be specified.

" + "documentation":"

The ARN of the HSM. Either the HsmArn or the SerialNumber parameter must be specified.

" }, "HsmSerialNumber":{ "shape":"HsmSerialNumber", - "documentation":"

The serial number of the HSM. Either the HsmArn or the HsmSerialNumber parameter must be specified.

" + "documentation":"

The serial number of the HSM. Either the HsmArn or the HsmSerialNumber parameter must be specified.

" } }, - "documentation":"

Contains the inputs for the DescribeHsm operation.

" + "documentation":"

Contains the inputs for the DescribeHsm operation.

" }, "DescribeHsmResponse":{ "type":"structure", @@ -889,7 +899,7 @@ "members":{ "NextToken":{ "shape":"PaginationToken", - "documentation":"

The NextToken value from a previous call to ListHapgs. Pass null if this is the first call.

" + "documentation":"

The NextToken value from a previous call to ListHapgs. Pass null if this is the first call.

" } } }, @@ -903,7 +913,7 @@ }, "NextToken":{ "shape":"PaginationToken", - "documentation":"

If not null, more results are available. Pass this value to ListHapgs to retrieve the next set of items.

" + "documentation":"

If not null, more results are available. Pass this value to ListHapgs to retrieve the next set of items.

" } } }, @@ -912,7 +922,7 @@ "members":{ "NextToken":{ "shape":"PaginationToken", - "documentation":"

The NextToken value from a previous call to ListHsms. Pass null if this is the first call.

" + "documentation":"

The NextToken value from a previous call to ListHsms. Pass null if this is the first call.

" } } }, @@ -925,17 +935,17 @@ }, "NextToken":{ "shape":"PaginationToken", - "documentation":"

If not null, more results are available. Pass this value to ListHsms to retrieve the next set of items.

" + "documentation":"

If not null, more results are available. Pass this value to ListHsms to retrieve the next set of items.

" } }, - "documentation":"

Contains the output of the ListHsms operation.

" + "documentation":"

Contains the output of the ListHsms operation.

" }, "ListLunaClientsRequest":{ "type":"structure", "members":{ "NextToken":{ "shape":"PaginationToken", - "documentation":"

The NextToken value from a previous call to ListLunaClients. Pass null if this is the first call.

" + "documentation":"

The NextToken value from a previous call to ListLunaClients. Pass null if this is the first call.

" } } }, @@ -949,7 +959,7 @@ }, "NextToken":{ "shape":"PaginationToken", - "documentation":"

If not null, more results are available. Pass this to ListLunaClients to retrieve the next set of items.

" + "documentation":"

If not null, more results are available. Pass this to ListLunaClients to retrieve the next set of items.

" } } }, @@ -1135,7 +1145,7 @@ }, "SubscriptionType":{ "type":"string", - "documentation":"

Specifies the type of subscription for the HSM.

", + "documentation":"

Specifies the type of subscription for the HSM.

", "enum":["PRODUCTION"] }, "Tag":{ @@ -1183,5 +1193,5 @@ "pattern":"vpc-[0-9a-f]{8}" } }, - "documentation":"AWS CloudHSM Service" + "documentation":"AWS CloudHSM Service

This is documentation for AWS CloudHSM Classic. For more information, see AWS CloudHSM Classic FAQs, the AWS CloudHSM Classic User Guide, and the AWS CloudHSM Classic API Reference.

For information about the current version of AWS CloudHSM, see AWS CloudHSM, the AWS CloudHSM User Guide, and the AWS CloudHSM API Reference.

" } diff --git a/services/cloudwatch/src/main/resources/codegen-resources/customization.config b/services/cloudwatch/src/main/resources/codegen-resources/customization.config index ffc5cfd4c7ac..c112983e7bfd 100644 --- a/services/cloudwatch/src/main/resources/codegen-resources/customization.config +++ b/services/cloudwatch/src/main/resources/codegen-resources/customization.config @@ -13,5 +13,9 @@ "DescribeAlarmHistory" : { "methodForms" : [[ ]] } - } + }, + "blacklistedSimpleMethods" : [ + "deleteDashboards", + "putDashboard" + ] } diff --git a/services/cloudwatch/src/main/resources/codegen-resources/service-2.json b/services/cloudwatch/src/main/resources/codegen-resources/service-2.json index 822981dba1fc..8a9bc2915802 100644 --- a/services/cloudwatch/src/main/resources/codegen-resources/service-2.json +++ b/services/cloudwatch/src/main/resources/codegen-resources/service-2.json @@ -23,6 +23,24 @@ ], "documentation":"

Deletes the specified alarms. In the event of an error, no alarms are deleted.

" }, + "DeleteDashboards":{ + "name":"DeleteDashboards", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"DeleteDashboardsInput"}, + "output":{ + "shape":"DeleteDashboardsOutput", + "resultWrapper":"DeleteDashboardsResult" + }, + "errors":[ + {"shape":"InvalidParameterValueException"}, + {"shape":"DashboardNotFoundError"}, + {"shape":"InternalServiceFault"} + ], + "documentation":"

Deletes all dashboards that you specify. You may specify up to 100 dashboards to delete. If there is an error during this call, no dashboards are deleted.
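For example, a minimal AWS SDK for Java 2.x call (the dashboard names are placeholders, and the request builder is assumed to expose the DashboardNames member defined later in this model):

```java
import software.amazon.awssdk.services.cloudwatch.CloudWatchClient;
import software.amazon.awssdk.services.cloudwatch.model.DeleteDashboardsRequest;

public final class DeleteDashboardsExample {
    public static void main(String[] args) {
        try (CloudWatchClient cw = CloudWatchClient.create()) {
            // Up to 100 names per call; if the call fails, no dashboards are deleted.
            cw.deleteDashboards(DeleteDashboardsRequest.builder()
                    .dashboardNames("staging-overview", "old-test-dashboard")  // placeholders
                    .build());
        }
    }
}
```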

" + }, "DescribeAlarmHistory":{ "name":"DescribeAlarmHistory", "http":{ @@ -37,7 +55,7 @@ "errors":[ {"shape":"InvalidNextToken"} ], - "documentation":"

Retrieves the history for the specified alarm. You can filter the results by date range or item type. If an alarm name is not specified, the histories for all alarms are returned.

Note that Amazon CloudWatch retains the history of an alarm even if you delete the alarm.

" + "documentation":"

Retrieves the history for the specified alarm. You can filter the results by date range or item type. If an alarm name is not specified, the histories for all alarms are returned.

CloudWatch retains the history of an alarm even if you delete the alarm.

" }, "DescribeAlarms":{ "name":"DescribeAlarms", @@ -66,7 +84,7 @@ "shape":"DescribeAlarmsForMetricOutput", "resultWrapper":"DescribeAlarmsForMetricResult" }, - "documentation":"

Retrieves the alarms for the specified metric. Specify a statistic, period, or unit to filter the results.

" + "documentation":"

Retrieves the alarms for the specified metric. To filter the results, specify a statistic, period, or unit.

" }, "DisableAlarmActions":{ "name":"DisableAlarmActions", @@ -86,6 +104,24 @@ "input":{"shape":"EnableAlarmActionsInput"}, "documentation":"

Enables the actions for the specified alarms.

" }, + "GetDashboard":{ + "name":"GetDashboard", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"GetDashboardInput"}, + "output":{ + "shape":"GetDashboardOutput", + "resultWrapper":"GetDashboardResult" + }, + "errors":[ + {"shape":"InvalidParameterValueException"}, + {"shape":"DashboardNotFoundError"}, + {"shape":"InternalServiceFault"} + ], + "documentation":"

Displays the details of the dashboard that you specify.

To copy an existing dashboard, use GetDashboard, and then use the data returned within DashboardBody as the template for the new dashboard when you call PutDashboard to create the copy.
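The copy workflow described here (GetDashboard, then PutDashboard with the returned DashboardBody) looks roughly like this in the AWS SDK for Java 2.x; the dashboard names are placeholders and the accessor names are assumed from the model.

```java
import software.amazon.awssdk.services.cloudwatch.CloudWatchClient;
import software.amazon.awssdk.services.cloudwatch.model.GetDashboardRequest;
import software.amazon.awssdk.services.cloudwatch.model.GetDashboardResponse;
import software.amazon.awssdk.services.cloudwatch.model.PutDashboardRequest;

public final class CopyDashboardExample {
    public static void main(String[] args) {
        try (CloudWatchClient cw = CloudWatchClient.create()) {
            GetDashboardResponse source = cw.getDashboard(
                    GetDashboardRequest.builder().dashboardName("prod-overview").build());  // placeholder
            // Reuse the returned body as the template for the new dashboard.
            cw.putDashboard(PutDashboardRequest.builder()
                    .dashboardName("prod-overview-copy")
                    .dashboardBody(source.dashboardBody())
                    .build());
        }
    }
}
```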

" + }, "GetMetricStatistics":{ "name":"GetMetricStatistics", "http":{ @@ -103,7 +139,24 @@ {"shape":"InvalidParameterCombinationException"}, {"shape":"InternalServiceFault"} ], - "documentation":"

Gets statistics for the specified metric.

Amazon CloudWatch retains metric data as follows:

Note that CloudWatch started retaining 5-minute and 1-hour metric data as of 9 July 2016.

The maximum number of data points returned from a single call is 1,440. If you request more than 1,440 data points, Amazon CloudWatch returns an error. To reduce the number of data points, you can narrow the specified time range and make multiple requests across adjacent time ranges, or you can increase the specified period. A period can be as short as one minute (60 seconds). Note that data points are not returned in chronological order.

Amazon CloudWatch aggregates data points based on the length of the period that you specify. For example, if you request statistics with a one-hour period, Amazon CloudWatch aggregates all data points with time stamps that fall within each one-hour period. Therefore, the number of values aggregated by CloudWatch is larger than the number of data points returned.

CloudWatch needs raw data points to calculate percentile statistics. If you publish data using a statistic set instead, you cannot retrieve percentile statistics for this data unless one of the following conditions is true:

For a list of metrics and dimensions supported by AWS services, see the Amazon CloudWatch Metrics and Dimensions Reference in the Amazon CloudWatch User Guide.

" + "documentation":"

Gets statistics for the specified metric.

The maximum number of data points returned from a single call is 1,440. If you request more than 1,440 data points, CloudWatch returns an error. To reduce the number of data points, you can narrow the specified time range and make multiple requests across adjacent time ranges, or you can increase the specified period. Data points are not returned in chronological order.

CloudWatch aggregates data points based on the length of the period that you specify. For example, if you request statistics with a one-hour period, CloudWatch aggregates all data points with time stamps that fall within each one-hour period. Therefore, the number of values aggregated by CloudWatch is larger than the number of data points returned.

CloudWatch needs raw data points to calculate percentile statistics. If you publish data using a statistic set instead, you can only retrieve percentile statistics for this data if one of the following conditions is true:

Amazon CloudWatch retains metric data as follows:

Data points that are initially published with a shorter period are aggregated together for long-term storage. For example, if you collect data using a period of 1 minute, the data remains available for 15 days with 1-minute resolution. After 15 days, this data is still available, but is aggregated and retrievable only with a resolution of 5 minutes. After 63 days, the data is further aggregated and is available with a resolution of 1 hour.

CloudWatch started retaining 5-minute and 1-hour metric data as of July 9, 2016.

For information about metrics and dimensions supported by AWS services, see the Amazon CloudWatch Metrics and Dimensions Reference in the Amazon CloudWatch User Guide.
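A hedged AWS SDK for Java 2.x sketch of a typical request: three hours of 5-minute averages for an EC2 CPU metric. The instance ID is a placeholder, and the builder and accessor names are assumed to follow the members in this model.

```java
import java.time.Duration;
import java.time.Instant;
import software.amazon.awssdk.services.cloudwatch.CloudWatchClient;
import software.amazon.awssdk.services.cloudwatch.model.Dimension;
import software.amazon.awssdk.services.cloudwatch.model.GetMetricStatisticsRequest;
import software.amazon.awssdk.services.cloudwatch.model.GetMetricStatisticsResponse;
import software.amazon.awssdk.services.cloudwatch.model.Statistic;

public final class GetMetricStatisticsExample {
    public static void main(String[] args) {
        try (CloudWatchClient cw = CloudWatchClient.create()) {
            Instant end = Instant.now();
            GetMetricStatisticsResponse stats = cw.getMetricStatistics(GetMetricStatisticsRequest.builder()
                    .namespace("AWS/EC2")
                    .metricName("CPUUtilization")
                    .dimensions(Dimension.builder().name("InstanceId").value("i-0123456789abcdef0").build()) // placeholder
                    .startTime(end.minus(Duration.ofHours(3)))
                    .endTime(end)
                    .period(300)                      // 5-minute buckets keep the result well under 1,440 points
                    .statistics(Statistic.AVERAGE)
                    .build());
            // Data points are not returned in chronological order; sort by timestamp if needed.
            stats.datapoints().forEach(dp -> System.out.println(dp.timestamp() + " avg=" + dp.average()));
        }
    }
}
```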

" + }, + "ListDashboards":{ + "name":"ListDashboards", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"ListDashboardsInput"}, + "output":{ + "shape":"ListDashboardsOutput", + "resultWrapper":"ListDashboardsResult" + }, + "errors":[ + {"shape":"InvalidParameterValueException"}, + {"shape":"InternalServiceFault"} + ], + "documentation":"

Returns a list of the dashboards for your account. If you include DashboardNamePrefix, only those dashboards with names starting with the prefix are listed. Otherwise, all dashboards in your account are listed.

" }, "ListMetrics":{ "name":"ListMetrics", @@ -122,6 +175,23 @@ ], "documentation":"

List the specified metrics. You can use the returned metrics with GetMetricStatistics to obtain statistical data.

Up to 500 results are returned for any one call. To retrieve additional results, use the returned token with subsequent calls.

After you create a metric, allow up to fifteen minutes before the metric appears. Statistics about the metric, however, are available sooner using GetMetricStatistics.

" }, + "PutDashboard":{ + "name":"PutDashboard", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"PutDashboardInput"}, + "output":{ + "shape":"PutDashboardOutput", + "resultWrapper":"PutDashboardResult" + }, + "errors":[ + {"shape":"DashboardInvalidInputError"}, + {"shape":"InternalServiceFault"} + ], + "documentation":"

Creates a dashboard if it does not already exist, or updates an existing dashboard. If you update a dashboard, the entire contents are replaced with what you specify here.

You can have up to 500 dashboards per account. All dashboards in your account are global, not region-specific.

A simple way to create a dashboard using PutDashboard is to copy an existing dashboard. To copy an existing dashboard using the console, you can load the dashboard and then use the View/edit source command in the Actions menu to display the JSON block for that dashboard. Another way to copy a dashboard is to use GetDashboard, and then use the data returned within DashboardBody as the template for the new dashboard when you call PutDashboard.

When you create a dashboard with PutDashboard, a good practice is to add a text widget at the top of the dashboard with a message that the dashboard was created by script and should not be changed in the console. This message could also point console users to the location of the DashboardBody script or the CloudFormation template used to create the dashboard.

" + }, "PutMetricAlarm":{ "name":"PutMetricAlarm", "http":{ @@ -132,7 +202,7 @@ "errors":[ {"shape":"LimitExceededFault"} ], - "documentation":"

Creates or updates an alarm and associates it with the specified metric. Optionally, this operation can associate one or more Amazon SNS resources with the alarm.

When this operation creates an alarm, the alarm state is immediately set to INSUFFICIENT_DATA. The alarm is evaluated and its state is set appropriately. Any actions associated with the state are then executed.

When you update an existing alarm, its state is left unchanged, but the update completely overwrites the previous configuration of the alarm.

If you are an AWS Identity and Access Management (IAM) user, you must have Amazon EC2 permissions for some operations:

If you have read/write permissions for Amazon CloudWatch but not for Amazon EC2, you can still create an alarm, but the stop or terminate actions won't be performed. However, if you are later granted the required permissions, the alarm actions that you created earlier will be performed.

If you are using an IAM role (for example, an Amazon EC2 instance profile), you cannot stop or terminate the instance using alarm actions. However, you can still see the alarm state and perform any other actions such as Amazon SNS notifications or Auto Scaling policies.

If you are using temporary security credentials granted using the AWS Security Token Service (AWS STS), you cannot stop or terminate an Amazon EC2 instance using alarm actions.

Note that you must create at least one stop, terminate, or reboot alarm using the Amazon EC2 or CloudWatch console to create the EC2ActionsAccess IAM role. After this IAM role is created, you can create stop, terminate, or reboot alarms using a command-line interface or an API.

" + "documentation":"

Creates or updates an alarm and associates it with the specified metric. Optionally, this operation can associate one or more Amazon SNS resources with the alarm.

When this operation creates an alarm, the alarm state is immediately set to INSUFFICIENT_DATA. The alarm is evaluated and its state is set appropriately. Any actions associated with the state are then executed.

When you update an existing alarm, its state is left unchanged, but the update completely overwrites the previous configuration of the alarm.

If you are an IAM user, you must have Amazon EC2 permissions for some operations:

If you have read/write permissions for Amazon CloudWatch but not for Amazon EC2, you can still create an alarm, but the stop or terminate actions are not performed. However, if you are later granted the required permissions, the alarm actions that you created earlier are performed.

If you are using an IAM role (for example, an EC2 instance profile), you cannot stop or terminate the instance using alarm actions. However, you can still see the alarm state and perform any other actions such as Amazon SNS notifications or Auto Scaling policies.

If you are using temporary security credentials granted using AWS STS, you cannot stop or terminate an EC2 instance using alarm actions.

You must create at least one stop, terminate, or reboot alarm using either the Amazon EC2 or CloudWatch consoles to create the EC2ActionsAccess IAM role. After this IAM role is created, you can create stop, terminate, or reboot alarms using a command-line interface or API.
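As a concrete, hedged illustration of the alarm workflow described above, the following AWS SDK for Java 2.x sketch creates a CPU alarm with an SNS notification action; the instance ID and topic ARN are placeholders, and the builder names are assumed from the members in this model.

```java
import software.amazon.awssdk.services.cloudwatch.CloudWatchClient;
import software.amazon.awssdk.services.cloudwatch.model.ComparisonOperator;
import software.amazon.awssdk.services.cloudwatch.model.Dimension;
import software.amazon.awssdk.services.cloudwatch.model.PutMetricAlarmRequest;
import software.amazon.awssdk.services.cloudwatch.model.Statistic;

public final class PutMetricAlarmExample {
    public static void main(String[] args) {
        try (CloudWatchClient cw = CloudWatchClient.create()) {
            // Alarm when average CPU is at or above 80% for two consecutive 5-minute periods.
            cw.putMetricAlarm(PutMetricAlarmRequest.builder()
                    .alarmName("high-cpu")
                    .namespace("AWS/EC2")
                    .metricName("CPUUtilization")
                    .dimensions(Dimension.builder().name("InstanceId").value("i-0123456789abcdef0").build()) // placeholder
                    .statistic(Statistic.AVERAGE)
                    .period(300)
                    .evaluationPeriods(2)
                    .threshold(80.0)
                    .comparisonOperator(ComparisonOperator.GREATER_THAN_OR_EQUAL_TO_THRESHOLD)
                    .alarmActions("arn:aws:sns:us-east-1:123456789012:ops-alerts")  // placeholder topic ARN
                    .build());
        }
    }
}
```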

" }, "PutMetricData":{ "name":"PutMetricData", @@ -147,7 +217,7 @@ {"shape":"InvalidParameterCombinationException"}, {"shape":"InternalServiceFault"} ], - "documentation":"

Publishes metric data points to Amazon CloudWatch. Amazon CloudWatch associates the data points with the specified metric. If the specified metric does not exist, Amazon CloudWatch creates the metric. When Amazon CloudWatch creates a metric, it can take up to fifteen minutes for the metric to appear in calls to ListMetrics.

Each PutMetricData request is limited to 40 KB in size for HTTP POST requests.

Although the Value parameter accepts numbers of type Double, Amazon CloudWatch rejects values that are either too small or too large. Values must be in the range of 8.515920e-109 to 1.174271e+108 (Base 10) or 2e-360 to 2e360 (Base 2). In addition, special values (e.g., NaN, +Infinity, -Infinity) are not supported.

You can use up to 10 dimensions per metric to further clarify what data the metric collects. For more information on specifying dimensions, see Publishing Metrics in the Amazon CloudWatch User Guide.

Data points with time stamps from 24 hours ago or longer can take at least 48 hours to become available for GetMetricStatistics from the time they are submitted.

CloudWatch needs raw data points to calculate percentile statistics. If you publish data using a statistic set instead, you cannot retrieve percentile statistics for this data unless one of the following conditions is true:

" + "documentation":"

Publishes metric data points to Amazon CloudWatch. CloudWatch associates the data points with the specified metric. If the specified metric does not exist, CloudWatch creates the metric. When CloudWatch creates a metric, it can take up to fifteen minutes for the metric to appear in calls to ListMetrics.

Each PutMetricData request is limited to 40 KB in size for HTTP POST requests.

Although the Value parameter accepts numbers of type Double, CloudWatch rejects values that are either too small or too large. Values must be in the range of 8.515920e-109 to 1.174271e+108 (Base 10) or 2e-360 to 2e360 (Base 2). In addition, special values (for example, NaN, +Infinity, -Infinity) are not supported.

You can use up to 10 dimensions per metric to further clarify what data the metric collects. For more information about specifying dimensions, see Publishing Metrics in the Amazon CloudWatch User Guide.

Data points with time stamps from 24 hours ago or longer can take at least 48 hours to become available for GetMetricStatistics from the time they are submitted.

CloudWatch needs raw data points to calculate percentile statistics. If you publish data using a statistic set instead, you can only retrieve percentile statistics for this data if one of the following conditions is true:
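A minimal AWS SDK for Java 2.x sketch of publishing a single custom data point with one dimension; the namespace, metric, and dimension names are placeholders, and the builder names are assumed from the model.

```java
import software.amazon.awssdk.services.cloudwatch.CloudWatchClient;
import software.amazon.awssdk.services.cloudwatch.model.Dimension;
import software.amazon.awssdk.services.cloudwatch.model.MetricDatum;
import software.amazon.awssdk.services.cloudwatch.model.PutMetricDataRequest;
import software.amazon.awssdk.services.cloudwatch.model.StandardUnit;

public final class PutMetricDataExample {
    public static void main(String[] args) {
        try (CloudWatchClient cw = CloudWatchClient.create()) {
            cw.putMetricData(PutMetricDataRequest.builder()
                    .namespace("MyApp")                                               // placeholder namespace
                    .metricData(MetricDatum.builder()
                            .metricName("PageViews")
                            .dimensions(Dimension.builder().name("Page").value("home").build())
                            .value(1.0)
                            .unit(StandardUnit.COUNT)
                            .build())
                    .build());
        }
    }
}
```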

" }, "SetAlarmState":{ "name":"SetAlarmState", @@ -160,7 +230,7 @@ {"shape":"ResourceNotFound"}, {"shape":"InvalidFormatFault"} ], - "documentation":"

Temporarily sets the state of an alarm for testing purposes. When the updated state differs from the previous value, the action configured for the appropriate state is invoked. For example, if your alarm is configured to send an Amazon SNS message when an alarm is triggered, temporarily changing the alarm state to ALARM sends an Amazon SNS message. The alarm returns to its actual state (often within seconds). Because the alarm state change happens very quickly, it is typically only visible in the alarm's History tab in the Amazon CloudWatch console or through DescribeAlarmHistory.

" + "documentation":"

Temporarily sets the state of an alarm for testing purposes. When the updated state differs from the previous value, the action configured for the appropriate state is invoked. For example, if your alarm is configured to send an Amazon SNS message when an alarm is triggered, temporarily changing the alarm state to ALARM sends an SNS message. The alarm returns to its actual state (often within seconds). Because the alarm state change happens quickly, it is typically only visible in the alarm's History tab in the Amazon CloudWatch console or through DescribeAlarmHistory.
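For example, a hedged AWS SDK for Java 2.x sketch that forces a placeholder alarm into ALARM to verify that its actions fire:

```java
import software.amazon.awssdk.services.cloudwatch.CloudWatchClient;
import software.amazon.awssdk.services.cloudwatch.model.SetAlarmStateRequest;
import software.amazon.awssdk.services.cloudwatch.model.StateValue;

public final class SetAlarmStateExample {
    public static void main(String[] args) {
        try (CloudWatchClient cw = CloudWatchClient.create()) {
            // CloudWatch returns the alarm to its real state shortly afterwards;
            // the change remains visible in the alarm history.
            cw.setAlarmState(SetAlarmStateRequest.builder()
                    .alarmName("high-cpu")                    // placeholder alarm name
                    .stateValue(StateValue.ALARM)
                    .stateReason("Testing alarm actions")
                    .build());
        }
    }
}
```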

" } }, "shapes":{ @@ -235,6 +305,87 @@ "LessThanOrEqualToThreshold" ] }, + "DashboardArn":{"type":"string"}, + "DashboardBody":{"type":"string"}, + "DashboardEntries":{ + "type":"list", + "member":{"shape":"DashboardEntry"} + }, + "DashboardEntry":{ + "type":"structure", + "members":{ + "DashboardName":{ + "shape":"DashboardName", + "documentation":"

The name of the dashboard.

" + }, + "DashboardArn":{ + "shape":"DashboardArn", + "documentation":"

The Amazon Resource Name (ARN) of the dashboard.

" + }, + "LastModified":{ + "shape":"LastModified", + "documentation":"

The time stamp of when the dashboard was last modified, either by an API call or through the console. This number is expressed as the number of milliseconds since Jan 1, 1970 00:00:00 UTC.

" + }, + "Size":{ + "shape":"Size", + "documentation":"

The size of the dashboard, in bytes.

" + } + }, + "documentation":"

Represents a specific dashboard.

" + }, + "DashboardErrorMessage":{"type":"string"}, + "DashboardInvalidInputError":{ + "type":"structure", + "members":{ + "message":{"shape":"DashboardErrorMessage"}, + "dashboardValidationMessages":{"shape":"DashboardValidationMessages"} + }, + "documentation":"

Some part of the dashboard data is invalid.

", + "error":{ + "code":"InvalidParameterInput", + "httpStatusCode":400, + "senderFault":true + }, + "exception":true + }, + "DashboardName":{"type":"string"}, + "DashboardNamePrefix":{"type":"string"}, + "DashboardNames":{ + "type":"list", + "member":{"shape":"DashboardName"} + }, + "DashboardNotFoundError":{ + "type":"structure", + "members":{ + "message":{"shape":"DashboardErrorMessage"} + }, + "documentation":"

The specified dashboard does not exist.

", + "error":{ + "code":"ResourceNotFound", + "httpStatusCode":404, + "senderFault":true + }, + "exception":true + }, + "DashboardValidationMessage":{ + "type":"structure", + "members":{ + "DataPath":{ + "shape":"DataPath", + "documentation":"

The data path related to the message.

" + }, + "Message":{ + "shape":"Message", + "documentation":"

A message describing the error or warning.

" + } + }, + "documentation":"

An error or warning for the operation.

" + }, + "DashboardValidationMessages":{ + "type":"list", + "member":{"shape":"DashboardValidationMessage"} + }, + "DataPath":{"type":"string"}, "Datapoint":{ "type":"structure", "members":{ @@ -271,7 +422,7 @@ "documentation":"

The percentile statistic for the data point.

" } }, - "documentation":"

Encapsulates the statistical data that Amazon CloudWatch computes from metric data.

", + "documentation":"

Encapsulates the statistical data that CloudWatch computes from metric data.

", "xmlOrder":[ "Timestamp", "SampleCount", @@ -303,6 +454,20 @@ } } }, + "DeleteDashboardsInput":{ + "type":"structure", + "members":{ + "DashboardNames":{ + "shape":"DashboardNames", + "documentation":"

The dashboards to be deleted.

" + } + } + }, + "DeleteDashboardsOutput":{ + "type":"structure", + "members":{ + } + }, "DescribeAlarmHistoryInput":{ "type":"structure", "members":{ @@ -400,7 +565,7 @@ }, "AlarmNamePrefix":{ "shape":"AlarmNamePrefix", - "documentation":"

The alarm name prefix. You cannot specify AlarmNames if this parameter is specified.

" + "documentation":"

The alarm name prefix. If this parameter is specified, you cannot specify AlarmNames.

" }, "StateValue":{ "shape":"StateValue", @@ -535,6 +700,32 @@ "min":1 }, "FaultDescription":{"type":"string"}, + "GetDashboardInput":{ + "type":"structure", + "members":{ + "DashboardName":{ + "shape":"DashboardName", + "documentation":"

The name of the dashboard to be described.

" + } + } + }, + "GetDashboardOutput":{ + "type":"structure", + "members":{ + "DashboardArn":{ + "shape":"DashboardArn", + "documentation":"

The Amazon Resource Name (ARN) of the dashboard.

" + }, + "DashboardBody":{ + "shape":"DashboardBody", + "documentation":"

The detailed information about the dashboard, including what widgets are included and their location on the dashboard. For more information about the DashboardBody syntax, see CloudWatch-Dashboard-Body-Structure.

" + }, + "DashboardName":{ + "shape":"DashboardName", + "documentation":"

The name of the dashboard.

" + } + } + }, "GetMetricStatisticsInput":{ "type":"structure", "required":[ @@ -555,27 +746,27 @@ }, "Dimensions":{ "shape":"Dimensions", - "documentation":"

The dimensions. If the metric contains multiple dimensions, you must include a value for each dimension. CloudWatch treats each unique combination of dimensions as a separate metric. You can't retrieve statistics using combinations of dimensions that were not specially published. You must specify the same dimensions that were used when the metrics were created. For an example, see Dimension Combinations in the Amazon CloudWatch User Guide. For more information on specifying dimensions, see Publishing Metrics in the Amazon CloudWatch User Guide.

" + "documentation":"

The dimensions. If the metric contains multiple dimensions, you must include a value for each dimension. CloudWatch treats each unique combination of dimensions as a separate metric. If a specific combination of dimensions was not published, you can't retrieve statistics for it. You must specify the same dimensions that were used when the metrics were created. For an example, see Dimension Combinations in the Amazon CloudWatch User Guide. For more information about specifying dimensions, see Publishing Metrics in the Amazon CloudWatch User Guide.

" }, "StartTime":{ "shape":"Timestamp", - "documentation":"

The time stamp that determines the first data point to return. Note that start times are evaluated relative to the time that CloudWatch receives the request.

The value specified is inclusive; results include data points with the specified time stamp. The time stamp must be in ISO 8601 UTC format (for example, 2016-10-03T23:00:00Z).

CloudWatch rounds the specified time stamp as follows:

" + "documentation":"

The time stamp that determines the first data point to return. Start times are evaluated relative to the time that CloudWatch receives the request.

The value specified is inclusive; results include data points with the specified time stamp. The time stamp must be in ISO 8601 UTC format (for example, 2016-10-03T23:00:00Z).

CloudWatch rounds the specified time stamp as follows:

If you set Period to 5, 10, or 30, the start time of your request is rounded down to the nearest time that corresponds to even 5-, 10-, or 30-second divisions of a minute. For example, if you make a query at (HH:mm:ss) 01:05:23 for the previous 10-second period, the start time of your request is rounded down and you receive data from 01:05:10 to 01:05:20. If you make a query at 15:07:17 for the previous 5 minutes of data, using a period of 5 seconds, you receive data timestamped between 15:02:15 and 15:07:15.

" }, "EndTime":{ "shape":"Timestamp", - "documentation":"

The time stamp that determines the last data point to return.

The value specified is exclusive; results will include data points up to the specified time stamp. The time stamp must be in ISO 8601 UTC format (for example, 2016-10-10T23:00:00Z).

" + "documentation":"

The time stamp that determines the last data point to return.

The value specified is exclusive; results include data points up to the specified time stamp. The time stamp must be in ISO 8601 UTC format (for example, 2016-10-10T23:00:00Z).

" }, "Period":{ "shape":"Period", - "documentation":"

The granularity, in seconds, of the returned data points. A period can be as short as one minute (60 seconds) and must be a multiple of 60. The default value is 60.

If the StartTime parameter specifies a time stamp that is greater than 15 days ago, you must specify the period as follows or no data points in that time range is returned:

" + "documentation":"

The granularity, in seconds, of the returned data points. For metrics with regular resolution, a period can be as short as one minute (60 seconds) and must be a multiple of 60. For high-resolution metrics that are collected at intervals of less than one minute, the period can be 1, 5, 10, 30, 60, or any multiple of 60. High-resolution metrics are those metrics stored by a PutMetricData call that includes a StorageResolution of 1 second.

If the StartTime parameter specifies a time stamp that is greater than 3 hours ago, you must specify the period as follows or no data points in that time range are returned:

" }, "Statistics":{ "shape":"Statistics", - "documentation":"

The metric statistics, other than percentile. For percentile statistics, use ExtendedStatistic.

" + "documentation":"

The metric statistics, other than percentile. For percentile statistics, use ExtendedStatistics. When calling GetMetricStatistics, you must specify either Statistics or ExtendedStatistics, but not both.

" }, "ExtendedStatistics":{ "shape":"ExtendedStatistics", - "documentation":"

The percentile statistics. Specify values between p0.0 and p100.

" + "documentation":"

The percentile statistics. Specify values between p0.0 and p100. When calling GetMetricStatistics, you must specify either Statistics or ExtendedStatistics, but not both.
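To make the Statistics/ExtendedStatistics constraint concrete, here is a sketch of a GetMetricStatistics request that passes Statistics only (class and method names assume the Java SDK 2.x code generated from this model; the namespace, metric, and instance ID are standard CloudWatch examples used here as placeholders):

import java.time.Duration;
import java.time.Instant;
import software.amazon.awssdk.services.cloudwatch.CloudWatchClient;
import software.amazon.awssdk.services.cloudwatch.model.Dimension;
import software.amazon.awssdk.services.cloudwatch.model.GetMetricStatisticsRequest;
import software.amazon.awssdk.services.cloudwatch.model.GetMetricStatisticsResponse;
import software.amazon.awssdk.services.cloudwatch.model.Statistic;

public class GetMetricStatisticsSketch {
    public static void main(String[] args) {
        CloudWatchClient cw = CloudWatchClient.create();
        Instant end = Instant.now();
        GetMetricStatisticsResponse resp = cw.getMetricStatistics(GetMetricStatisticsRequest.builder()
                .namespace("AWS/EC2")
                .metricName("CPUUtilization")
                // Dimensions must match exactly how the data was published.
                .dimensions(Dimension.builder().name("InstanceId").value("i-1234567890abcdef0").build())
                .startTime(end.minus(Duration.ofHours(3)))
                .endTime(end)
                .period(300)                    // multiple of 60 for regular-resolution metrics
                .statistics(Statistic.AVERAGE)  // Statistics or ExtendedStatistics, not both
                .build());
        resp.datapoints().forEach(dp -> System.out.println(dp.timestamp() + " " + dp.average()));
    }
}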

" }, "Unit":{ "shape":"StandardUnit", @@ -670,7 +861,7 @@ "documentation":"

" } }, - "documentation":"

Parameters that cannot be used together were used together.

", + "documentation":"

Parameters that cannot be used together were used in the same request.

", "error":{ "code":"InvalidParameterCombination", "httpStatusCode":400, @@ -694,6 +885,7 @@ }, "exception":true }, + "LastModified":{"type":"timestamp"}, "LimitExceededFault":{ "type":"structure", "members":{ @@ -710,6 +902,32 @@ }, "exception":true }, + "ListDashboardsInput":{ + "type":"structure", + "members":{ + "DashboardNamePrefix":{ + "shape":"DashboardNamePrefix", + "documentation":"

If you specify this parameter, only the dashboards with names starting with the specified string are listed. The maximum length is 255, and valid characters are A-Z, a-z, 0-9, \".\", \"-\", and \"_\".

" + }, + "NextToken":{ + "shape":"NextToken", + "documentation":"

The token returned by a previous call to indicate that there is more data available.

" + } + } + }, + "ListDashboardsOutput":{ + "type":"structure", + "members":{ + "DashboardEntries":{ + "shape":"DashboardEntries", + "documentation":"

The list of matching dashboards.

" + }, + "NextToken":{ + "shape":"NextToken", + "documentation":"

The token that marks the start of the next batch of returned results.

" + } + } + }, "ListMetricsInput":{ "type":"structure", "members":{ @@ -753,6 +971,7 @@ "max":100, "min":1 }, + "Message":{"type":"string"}, "Metric":{ "type":"structure", "members":{ @@ -867,8 +1086,14 @@ "shape":"ComparisonOperator", "documentation":"

The arithmetic operation to use when comparing the specified statistic and threshold. The specified statistic value is used as the first operand.

" }, - "TreatMissingData":{"shape":"TreatMissingData"}, - "EvaluateLowSampleCountPercentile":{"shape":"EvaluateLowSampleCountPercentile"} + "TreatMissingData":{ + "shape":"TreatMissingData", + "documentation":"

Sets how this alarm is to handle missing data points. If this parameter is omitted, the default behavior of missing is used.

" + }, + "EvaluateLowSampleCountPercentile":{ + "shape":"EvaluateLowSampleCountPercentile", + "documentation":"

Used only for alarms based on percentiles. If ignore, the alarm state does not change during periods with too few data points to be statistically significant. If evaluate or this parameter is not used, the alarm is always evaluated and possibly changes state no matter how many data points are available.

" + } }, "documentation":"

Represents an alarm.

", "xmlOrder":[ @@ -924,7 +1149,7 @@ }, "Value":{ "shape":"DatapointValue", - "documentation":"

The value for the metric.

Although the parameter accepts numbers of type Double, Amazon CloudWatch rejects values that are either too small or too large. Values must be in the range of 8.515920e-109 to 1.174271e+108 (Base 10) or 2e-360 to 2e360 (Base 2). In addition, special values (for example, NaN, +Infinity, -Infinity) are not supported.

" + "documentation":"

The value for the metric.

Although the parameter accepts numbers of type Double, CloudWatch rejects values that are either too small or too large. Values must be in the range of 8.515920e-109 to 1.174271e+108 (Base 10) or 2e-360 to 2e360 (Base 2). In addition, special values (for example, NaN, +Infinity, -Infinity) are not supported.

" }, "StatisticValues":{ "shape":"StatisticSet", @@ -933,6 +1158,10 @@ "Unit":{ "shape":"StandardUnit", "documentation":"

The unit of the metric.

" + }, + "StorageResolution":{ + "shape":"StorageResolution", + "documentation":"

Valid values are 1 and 60. Setting this to 1 specifies this metric as a high-resolution metric, so that CloudWatch stores the metric with sub-minute resolution down to one second. Setting this to 60 specifies this metric as a regular-resolution metric, which CloudWatch stores at 1-minute resolution. Currently, high resolution is available only for custom metrics. For more information about high-resolution metrics, see High-Resolution Metrics in the Amazon CloudWatch User Guide.

This field is optional. If you do not specify it, the default of 60 is used.
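As a sketch only (builder and enum names assume the Java SDK 2.x types generated from this model; the metric name is invented), a datum stored at one-second resolution could be built like this:

import java.time.Instant;
import software.amazon.awssdk.services.cloudwatch.model.MetricDatum;
import software.amazon.awssdk.services.cloudwatch.model.StandardUnit;

public class HighResolutionDatumSketch {
    static MetricDatum highResolutionDatum(double value) {
        return MetricDatum.builder()
                .metricName("RequestLatency")  // hypothetical custom metric
                .timestamp(Instant.now())
                .value(value)
                .unit(StandardUnit.MILLISECONDS)
                .storageResolution(1)          // 1 = high resolution; omit (or use 60) for 1-minute resolution
                .build();
    }
}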

" } }, "documentation":"

Encapsulates the information sent to either create a metric or add new values to be aggregated into an existing metric.

" @@ -976,7 +1205,29 @@ }, "Period":{ "type":"integer", - "min":60 + "min":1 + }, + "PutDashboardInput":{ + "type":"structure", + "members":{ + "DashboardName":{ + "shape":"DashboardName", + "documentation":"

The name of the dashboard. If a dashboard with this name already exists, this call modifies that dashboard, replacing its current contents. Otherwise, a new dashboard is created. The maximum length is 255, and valid characters are A-Z, a-z, 0-9, \"-\", and \"_\".

" + }, + "DashboardBody":{ + "shape":"DashboardBody", + "documentation":"

The detailed information about the dashboard in JSON format, including the widgets to include and their location on the dashboard.

For more information about the syntax, see CloudWatch-Dashboard-Body-Structure.

" + } + } + }, + "PutDashboardOutput":{ + "type":"structure", + "members":{ + "DashboardValidationMessages":{ + "shape":"DashboardValidationMessages", + "documentation":"

If the input for PutDashboard was correct and the dashboard was successfully created or modified, this result is empty.

If this result includes only warning messages, then the input was valid enough for the dashboard to be created or modified, but some elements of the dashboard may not render.

If this result includes error messages, the input was not valid and the operation failed.
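For a rough idea of how this input and output might be used together (a sketch: the client and builder names assume the Java SDK 2.x classes generated here, and the dashboard body is a trimmed, hypothetical example of the documented JSON structure):

import software.amazon.awssdk.services.cloudwatch.CloudWatchClient;
import software.amazon.awssdk.services.cloudwatch.model.PutDashboardRequest;
import software.amazon.awssdk.services.cloudwatch.model.PutDashboardResponse;

public class PutDashboardSketch {
    public static void main(String[] args) {
        CloudWatchClient cw = CloudWatchClient.create();
        String body = "{\"widgets\":[{\"type\":\"text\",\"x\":0,\"y\":0,\"width\":6,\"height\":3,"
                + "\"properties\":{\"markdown\":\"Hello dashboard\"}}]}";  // hypothetical DashboardBody
        PutDashboardResponse resp = cw.putDashboard(PutDashboardRequest.builder()
                .dashboardName("my-service-overview")                      // hypothetical dashboard name
                .dashboardBody(body)
                .build());
        // Empty list: accepted as-is. Warnings only: created, but some widgets may not render.
        resp.dashboardValidationMessages().forEach(m ->
                System.out.println(m.dataPath() + ": " + m.message()));
    }
}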

" + } + } }, "PutMetricAlarmInput":{ "type":"structure", @@ -1036,15 +1287,15 @@ }, "Period":{ "shape":"Period", - "documentation":"

The period, in seconds, over which the specified statistic is applied.

" + "documentation":"

The period, in seconds, over which the specified statistic is applied. Valid values are 10, 30, and any multiple of 60.

Be sure to specify 10 or 30 only for metrics that are stored by a PutMetricData call with a StorageResolution of 1. If you specify a Period of 10 or 30 for a metric that does not have sub-minute resolution, the alarm still attempts to gather data at the period rate that you specify. In this case, it does not receive data for the attempts that do not correspond to a one-minute data resolution, and the alarm may often lapse into INSUFFICIENT_DATA status. Specifying 10 or 30 also sets this alarm as a high-resolution alarm, which has a higher charge than other alarms. For more information about pricing, see Amazon CloudWatch Pricing.

An alarm's total current evaluation period can be no longer than one day, so Period multiplied by EvaluationPeriods cannot be more than 86,400 seconds.

" }, "Unit":{ "shape":"StandardUnit", - "documentation":"

The unit of measure for the statistic. For example, the units for the Amazon EC2 NetworkIn metric are Bytes because NetworkIn tracks the number of bytes that an instance receives on all network interfaces. You can also specify a unit when you create a custom metric. Units help provide conceptual meaning to your data. Metric data points that specify a unit of measure, such as Percent, are aggregated separately.

If you specify a unit, you must use a unit that is appropriate for the metric. Otherwise, the Amazon CloudWatch alarm can get stuck in the INSUFFICIENT DATA state.

" + "documentation":"

The unit of measure for the statistic. For example, the units for the Amazon EC2 NetworkIn metric are Bytes because NetworkIn tracks the number of bytes that an instance receives on all network interfaces. You can also specify a unit when you create a custom metric. Units help provide conceptual meaning to your data. Metric data points that specify a unit of measure, such as Percent, are aggregated separately.

If you specify a unit, you must use a unit that is appropriate for the metric. Otherwise, the CloudWatch alarm can get stuck in the INSUFFICIENT DATA state.

" }, "EvaluationPeriods":{ "shape":"EvaluationPeriods", - "documentation":"

The number of periods over which data is compared to the specified threshold.

" + "documentation":"

The number of periods over which data is compared to the specified threshold. An alarm's total current evaluation period can be no longer than one day, so this number multiplied by Period cannot be more than 86,400 seconds.

" }, "Threshold":{ "shape":"Threshold", @@ -1060,7 +1311,7 @@ }, "EvaluateLowSampleCountPercentile":{ "shape":"EvaluateLowSampleCountPercentile", - "documentation":"

Used only for alarms based on percentiles. If you specify ignore, the alarm state will not change during periods with too few data points to be statistically significant. If you specify evaluate or omit this parameter, the alarm will always be evaluated and possibly change state no matter how many data points are available. For more information, see Percentile-Based CloudWatch Alarms and Low Data Samples.

Valid Values: evaluate | ignore

" + "documentation":"

Used only for alarms based on percentiles. If you specify ignore, the alarm state does not change during periods with too few data points to be statistically significant. If you specify evaluate or omit this parameter, the alarm is always evaluated and possibly changes state no matter how many data points are available. For more information, see Percentile-Based CloudWatch Alarms and Low Data Samples.

Valid Values: evaluate | ignore
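Pulling these PutMetricAlarm parameters together, a percentile-based alarm that ignores low-sample periods might be created roughly as follows (a sketch; class and method names assume the Java SDK 2.x code generated from this model, and the alarm, namespace, and instance values are placeholders):

import software.amazon.awssdk.services.cloudwatch.CloudWatchClient;
import software.amazon.awssdk.services.cloudwatch.model.ComparisonOperator;
import software.amazon.awssdk.services.cloudwatch.model.Dimension;
import software.amazon.awssdk.services.cloudwatch.model.PutMetricAlarmRequest;

public class PutMetricAlarmSketch {
    public static void main(String[] args) {
        CloudWatchClient cw = CloudWatchClient.create();
        cw.putMetricAlarm(PutMetricAlarmRequest.builder()
                .alarmName("HighP99CpuAlarm")          // hypothetical alarm name
                .namespace("AWS/EC2")
                .metricName("CPUUtilization")
                .dimensions(Dimension.builder().name("InstanceId").value("i-1234567890abcdef0").build())
                .extendedStatistic("p99")              // percentile statistic
                .period(60)
                .evaluationPeriods(5)                  // 60 * 5 = 300 seconds, well under the 86,400 limit
                .threshold(80.0)
                .comparisonOperator(ComparisonOperator.GREATER_THAN_THRESHOLD)
                .treatMissingData("missing")           // default behavior, stated explicitly
                .evaluateLowSampleCountPercentile("ignore")  // hold state when samples are too few
                .build());
    }
}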

" } } }, @@ -1133,6 +1384,7 @@ } } }, + "Size":{"type":"long"}, "StandardUnit":{ "type":"string", "enum":[ @@ -1227,6 +1479,10 @@ "max":5, "min":1 }, + "StorageResolution":{ + "type":"integer", + "min":1 + }, "Threshold":{"type":"double"}, "Timestamp":{"type":"timestamp"}, "TreatMissingData":{ @@ -1235,5 +1491,5 @@ "min":1 } }, - "documentation":"

Amazon CloudWatch monitors your Amazon Web Services (AWS) resources and the applications you run on AWS in real-time. You can use CloudWatch to collect and track metrics, which are the variables you want to measure for your resources and applications.

CloudWatch alarms send notifications or automatically make changes to the resources you are monitoring based on rules that you define. For example, you can monitor the CPU usage and disk reads and writes of your Amazon Elastic Compute Cloud (Amazon EC2) instances and then use this data to determine whether you should launch additional instances to handle increased load. You can also use this data to stop under-used instances to save money.

In addition to monitoring the built-in metrics that come with AWS, you can monitor your own custom metrics. With CloudWatch, you gain system-wide visibility into resource utilization, application performance, and operational health.

" + "documentation":"

Amazon CloudWatch monitors your Amazon Web Services (AWS) resources and the applications you run on AWS in real time. You can use CloudWatch to collect and track metrics, which are the variables you want to measure for your resources and applications.

CloudWatch alarms send notifications or automatically change the resources you are monitoring based on rules that you define. For example, you can monitor the CPU usage and disk reads and writes of your Amazon EC2 instances. Then, use this data to determine whether you should launch additional instances to handle increased load. You can also use this data to stop under-used instances to save money.

In addition to monitoring the built-in metrics that come with AWS, you can monitor your own custom metrics. With CloudWatch, you gain system-wide visibility into resource utilization, application performance, and operational health.

" } diff --git a/services/codebuild/src/main/resources/codegen-resources/examples-1.json b/services/codebuild/src/main/resources/codegen-resources/examples-1.json index 0ea7e3b0bbe9..a5fb660e25d8 100644 --- a/services/codebuild/src/main/resources/codegen-resources/examples-1.json +++ b/services/codebuild/src/main/resources/codegen-resources/examples-1.json @@ -1,5 +1,281 @@ { "version": "1.0", "examples": { + "BatchGetBuilds": [ + { + "input": { + "ids": [ + "codebuild-demo-project:9b0ac37f-d19e-4254-9079-f47e9a389eEX", + "codebuild-demo-project:b79a46f7-1473-4636-a23f-da9c45c208EX" + ] + }, + "output": { + "builds": [ + { + "arn": "arn:aws:codebuild:us-east-1:123456789012:build/codebuild-demo-project:9b0ac37f-d19e-4254-9079-f47e9a389eEX", + "artifacts": { + "location": "arn:aws:s3:::codebuild-123456789012-output-bucket/codebuild-demo-project" + }, + "buildComplete": true, + "buildStatus": "SUCCEEDED", + "currentPhase": "COMPLETED", + "endTime": 1479832474.764, + "environment": { + "type": "LINUX_CONTAINER", + "computeType": "BUILD_GENERAL1_SMALL", + "environmentVariables": [ + + ], + "image": "aws/codebuild/java:openjdk-8", + "privilegedMode": false + }, + "id": "codebuild-demo-project:9b0ac37f-d19e-4254-9079-f47e9a389eEX", + "initiator": "MyDemoUser", + "logs": { + "deepLink": "https://console.aws.amazon.com/cloudwatch/home?region=us-east-1#logEvent:group=/aws/codebuild/codebuild-demo-project;stream=9b0ac37f-d19e-4254-9079-f47e9a389eEX", + "groupName": "/aws/codebuild/codebuild-demo-project", + "streamName": "9b0ac37f-d19e-4254-9079-f47e9a389eEX" + }, + "phases": [ + { + "durationInSeconds": 0, + "endTime": 1479832342.23, + "phaseStatus": "SUCCEEDED", + "phaseType": "SUBMITTED", + "startTime": 1479832341.854 + }, + { + "contexts": [ + + ], + "durationInSeconds": 72, + "endTime": 1479832415.064, + "phaseStatus": "SUCCEEDED", + "phaseType": "PROVISIONING", + "startTime": 1479832342.23 + }, + { + "contexts": [ + + ], + "durationInSeconds": 46, + "endTime": 1479832461.261, + "phaseStatus": "SUCCEEDED", + "phaseType": "DOWNLOAD_SOURCE", + "startTime": 1479832415.064 + }, + { + "contexts": [ + + ], + "durationInSeconds": 0, + "endTime": 1479832461.354, + "phaseStatus": "SUCCEEDED", + "phaseType": "INSTALL", + "startTime": 1479832461.261 + }, + { + "contexts": [ + + ], + "durationInSeconds": 0, + "endTime": 1479832461.448, + "phaseStatus": "SUCCEEDED", + "phaseType": "PRE_BUILD", + "startTime": 1479832461.354 + }, + { + "contexts": [ + + ], + "durationInSeconds": 9, + "endTime": 1479832471.115, + "phaseStatus": "SUCCEEDED", + "phaseType": "BUILD", + "startTime": 1479832461.448 + }, + { + "contexts": [ + + ], + "durationInSeconds": 0, + "endTime": 1479832471.224, + "phaseStatus": "SUCCEEDED", + "phaseType": "POST_BUILD", + "startTime": 1479832471.115 + }, + { + "contexts": [ + + ], + "durationInSeconds": 0, + "endTime": 1479832471.791, + "phaseStatus": "SUCCEEDED", + "phaseType": "UPLOAD_ARTIFACTS", + "startTime": 1479832471.224 + }, + { + "contexts": [ + + ], + "durationInSeconds": 2, + "endTime": 1479832474.764, + "phaseStatus": "SUCCEEDED", + "phaseType": "FINALIZING", + "startTime": 1479832471.791 + }, + { + "phaseType": "COMPLETED", + "startTime": 1479832474.764 + } + ], + "projectName": "codebuild-demo-project", + "source": { + "type": "S3", + "buildspec": "", + "location": "arn:aws:s3:::codebuild-123456789012-input-bucket/MessageUtil.zip" + }, + "startTime": 1479832341.854, + "timeoutInMinutes": 60 + }, + { + "arn": 
"arn:aws:codebuild:us-east-1:123456789012:build/codebuild-demo-project:b79a46f7-1473-4636-a23f-da9c45c208EX", + "artifacts": { + "location": "arn:aws:s3:::codebuild-123456789012-output-bucket/codebuild-demo-project" + }, + "buildComplete": true, + "buildStatus": "SUCCEEDED", + "currentPhase": "COMPLETED", + "endTime": 1479401214.239, + "environment": { + "type": "LINUX_CONTAINER", + "computeType": "BUILD_GENERAL1_SMALL", + "environmentVariables": [ + + ], + "image": "aws/codebuild/java:openjdk-8", + "privilegedMode": false + }, + "id": "codebuild-demo-project:b79a46f7-1473-4636-a23f-da9c45c208EX", + "initiator": "MyDemoUser", + "logs": { + "deepLink": "https://console.aws.amazon.com/cloudwatch/home?region=us-east-1#logEvent:group=/aws/codebuild/codebuild-demo-project;stream=b79a46f7-1473-4636-a23f-da9c45c208EX", + "groupName": "/aws/codebuild/codebuild-demo-project", + "streamName": "b79a46f7-1473-4636-a23f-da9c45c208EX" + }, + "phases": [ + { + "durationInSeconds": 0, + "endTime": 1479401082.342, + "phaseStatus": "SUCCEEDED", + "phaseType": "SUBMITTED", + "startTime": 1479401081.869 + }, + { + "contexts": [ + + ], + "durationInSeconds": 71, + "endTime": 1479401154.129, + "phaseStatus": "SUCCEEDED", + "phaseType": "PROVISIONING", + "startTime": 1479401082.342 + }, + { + "contexts": [ + + ], + "durationInSeconds": 45, + "endTime": 1479401199.136, + "phaseStatus": "SUCCEEDED", + "phaseType": "DOWNLOAD_SOURCE", + "startTime": 1479401154.129 + }, + { + "contexts": [ + + ], + "durationInSeconds": 0, + "endTime": 1479401199.236, + "phaseStatus": "SUCCEEDED", + "phaseType": "INSTALL", + "startTime": 1479401199.136 + }, + { + "contexts": [ + + ], + "durationInSeconds": 0, + "endTime": 1479401199.345, + "phaseStatus": "SUCCEEDED", + "phaseType": "PRE_BUILD", + "startTime": 1479401199.236 + }, + { + "contexts": [ + + ], + "durationInSeconds": 9, + "endTime": 1479401208.68, + "phaseStatus": "SUCCEEDED", + "phaseType": "BUILD", + "startTime": 1479401199.345 + }, + { + "contexts": [ + + ], + "durationInSeconds": 0, + "endTime": 1479401208.783, + "phaseStatus": "SUCCEEDED", + "phaseType": "POST_BUILD", + "startTime": 1479401208.68 + }, + { + "contexts": [ + + ], + "durationInSeconds": 0, + "endTime": 1479401209.463, + "phaseStatus": "SUCCEEDED", + "phaseType": "UPLOAD_ARTIFACTS", + "startTime": 1479401208.783 + }, + { + "contexts": [ + + ], + "durationInSeconds": 4, + "endTime": 1479401214.239, + "phaseStatus": "SUCCEEDED", + "phaseType": "FINALIZING", + "startTime": 1479401209.463 + }, + { + "phaseType": "COMPLETED", + "startTime": 1479401214.239 + } + ], + "projectName": "codebuild-demo-project", + "source": { + "type": "S3", + "location": "arn:aws:s3:::codebuild-123456789012-input-bucket/MessageUtil.zip" + }, + "startTime": 1479401081.869, + "timeoutInMinutes": 60 + } + ] + }, + "comments": { + "input": { + }, + "output": { + } + }, + "description": "The following example gets information about builds with the specified build IDs.", + "id": "to-get-information-about-builds-1501187184588", + "title": "To get information about builds" + } + ] } } diff --git a/services/codebuild/src/main/resources/codegen-resources/service-2.json b/services/codebuild/src/main/resources/codegen-resources/service-2.json index 573e28404da9..5276f463736e 100644 --- a/services/codebuild/src/main/resources/codegen-resources/service-2.json +++ b/services/codebuild/src/main/resources/codegen-resources/service-2.json @@ -11,6 +11,19 @@ "uid":"codebuild-2016-10-06" }, "operations":{ + "BatchDeleteBuilds":{ + 
"name":"BatchDeleteBuilds", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"BatchDeleteBuildsInput"}, + "output":{"shape":"BatchDeleteBuildsOutput"}, + "errors":[ + {"shape":"InvalidInputException"} + ], + "documentation":"

Deletes one or more builds.

" + }, "BatchGetBuilds":{ "name":"BatchGetBuilds", "http":{ @@ -52,6 +65,22 @@ ], "documentation":"

Creates a build project.

" }, + "CreateWebhook":{ + "name":"CreateWebhook", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"CreateWebhookInput"}, + "output":{"shape":"CreateWebhookOutput"}, + "errors":[ + {"shape":"InvalidInputException"}, + {"shape":"OAuthProviderException"}, + {"shape":"ResourceAlreadyExistsException"}, + {"shape":"ResourceNotFoundException"} + ], + "documentation":"

For an existing AWS CodeBuild build project that has its source code stored in a GitHub repository, enables AWS CodeBuild to begin automatically rebuilding the source code every time a code change is pushed to the repository.

If you enable webhooks for an AWS CodeBuild project, and the project is used as a build step in AWS CodePipeline, then two identical builds will be created for each commit. One build is triggered through webhooks, and one through AWS CodePipeline. Because billing is on a per-build basis, you will be billed for both builds. Therefore, if you are using AWS CodePipeline, we recommend that you disable webhooks in CodeBuild. In the AWS CodeBuild console, clear the Webhook box. For more information, see step 9 in Change a Build Project’s Settings.
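As an illustrative sketch (not an official sample: the client and class names assume the Java SDK 2.x code generated from this model, and the project name is hypothetical), enabling the webhook for a GitHub-backed project could look like this:

import software.amazon.awssdk.services.codebuild.CodeBuildClient;
import software.amazon.awssdk.services.codebuild.model.CreateWebhookRequest;
import software.amazon.awssdk.services.codebuild.model.Webhook;

public class CreateWebhookSketch {
    public static void main(String[] args) {
        CodeBuildClient cb = CodeBuildClient.create();
        Webhook webhook = cb.createWebhook(CreateWebhookRequest.builder()
                .projectName("codebuild-demo-project")  // hypothetical GitHub-backed project
                .build())
                .webhook();
        System.out.println("Webhook URL: " + webhook.url());
    }
}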

" + }, "DeleteProject":{ "name":"DeleteProject", "http":{ @@ -65,6 +94,21 @@ ], "documentation":"

Deletes a build project.

" }, + "DeleteWebhook":{ + "name":"DeleteWebhook", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"DeleteWebhookInput"}, + "output":{"shape":"DeleteWebhookOutput"}, + "errors":[ + {"shape":"InvalidInputException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"OAuthProviderException"} + ], + "documentation":"

For an existing AWS CodeBuild build project that has its source code stored in a GitHub repository, stops AWS CodeBuild from automatically rebuilding the source code every time a code change is pushed to the repository.

" + }, "ListBuilds":{ "name":"ListBuilds", "http":{ @@ -189,6 +233,29 @@ "NO_ARTIFACTS" ] }, + "BatchDeleteBuildsInput":{ + "type":"structure", + "required":["ids"], + "members":{ + "ids":{ + "shape":"BuildIds", + "documentation":"

The IDs of the builds to delete.

" + } + } + }, + "BatchDeleteBuildsOutput":{ + "type":"structure", + "members":{ + "buildsDeleted":{ + "shape":"BuildIds", + "documentation":"

The IDs of the builds that were successfully deleted.

" + }, + "buildsNotDeleted":{ + "shape":"BuildsNotDeleted", + "documentation":"

Information about any builds that could not be successfully deleted.
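To show how the two output lists relate, here is a sketch of calling BatchDeleteBuilds and inspecting the results (class and method names assume the Java SDK 2.x code generated from this model; the build ID is a placeholder in the same style as the examples file):

import software.amazon.awssdk.services.codebuild.CodeBuildClient;
import software.amazon.awssdk.services.codebuild.model.BatchDeleteBuildsRequest;
import software.amazon.awssdk.services.codebuild.model.BatchDeleteBuildsResponse;

public class BatchDeleteBuildsSketch {
    public static void main(String[] args) {
        CodeBuildClient cb = CodeBuildClient.create();
        BatchDeleteBuildsResponse resp = cb.batchDeleteBuilds(BatchDeleteBuildsRequest.builder()
                .ids("codebuild-demo-project:9b0ac37f-d19e-4254-9079-f47e9a389eEX")  // hypothetical build ID
                .build());
        resp.buildsDeleted().forEach(id -> System.out.println("deleted: " + id));
        resp.buildsNotDeleted().forEach(b ->
                System.out.println("not deleted: " + b.id() + " (" + b.statusCode() + ")"));
    }
}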

" + } + } + }, "BatchGetBuildsInput":{ "type":"structure", "required":["ids"], @@ -330,6 +397,20 @@ "max":100, "min":1 }, + "BuildNotDeleted":{ + "type":"structure", + "members":{ + "id":{ + "shape":"NonEmptyString", + "documentation":"

The ID of the build that could not be successfully deleted.

" + }, + "statusCode":{ + "shape":"String", + "documentation":"

Additional information about the build that could not be successfully deleted.

" + } + }, + "documentation":"

Information about a build that could not be successfully deleted.

" + }, "BuildPhase":{ "type":"structure", "members":{ @@ -383,6 +464,10 @@ "type":"list", "member":{"shape":"Build"} }, + "BuildsNotDeleted":{ + "type":"list", + "member":{"shape":"BuildNotDeleted"} + }, "ComputeType":{ "type":"string", "enum":[ @@ -447,6 +532,25 @@ } } }, + "CreateWebhookInput":{ + "type":"structure", + "required":["projectName"], + "members":{ + "projectName":{ + "shape":"ProjectName", + "documentation":"

The name of the build project.

" + } + } + }, + "CreateWebhookOutput":{ + "type":"structure", + "members":{ + "webhook":{ + "shape":"Webhook", + "documentation":"

Information about a webhook in GitHub that connects repository events to a build project in AWS CodeBuild.

" + } + } + }, "DeleteProjectInput":{ "type":"structure", "required":["name"], @@ -462,6 +566,21 @@ "members":{ } }, + "DeleteWebhookInput":{ + "type":"structure", + "required":["projectName"], + "members":{ + "projectName":{ + "shape":"ProjectName", + "documentation":"

The name of the build project.

" + } + } + }, + "DeleteWebhookOutput":{ + "type":"structure", + "members":{ + } + }, "EnvironmentImage":{ "type":"structure", "members":{ @@ -534,10 +653,21 @@ "value":{ "shape":"String", "documentation":"

The value of the environment variable.

We strongly discourage using environment variables to store sensitive values, especially AWS secret key IDs and secret access keys. Environment variables can be displayed in plain text using tools such as the AWS CodeBuild console and the AWS Command Line Interface (AWS CLI).

" + }, + "type":{ + "shape":"EnvironmentVariableType", + "documentation":"

The type of environment variable. Valid values include:
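The model defines PLAINTEXT and PARAMETER_STORE as the two values. As a sketch (class names assume the Java SDK 2.x types generated from this model; the variable and parameter names are invented), a variable resolved from Systems Manager Parameter Store could be declared like this:

import software.amazon.awssdk.services.codebuild.model.EnvironmentVariable;
import software.amazon.awssdk.services.codebuild.model.EnvironmentVariableType;

public class EnvironmentVariableSketch {
    static EnvironmentVariable dbPassword() {
        return EnvironmentVariable.builder()
                .name("DB_PASSWORD")
                .value("/prod/db/password")                     // hypothetical Parameter Store name, not the secret itself
                .type(EnvironmentVariableType.PARAMETER_STORE)  // resolved at build time rather than stored in plain text
                .build();
    }
}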

" } }, "documentation":"

Information about an environment variable for a build project or a build.

" }, + "EnvironmentVariableType":{ + "type":"string", + "enum":[ + "PLAINTEXT", + "PARAMETER_STORE" + ] + }, "EnvironmentVariables":{ "type":"list", "member":{"shape":"EnvironmentVariable"} @@ -565,6 +695,7 @@ "GOLANG", "DOCKER", "ANDROID", + "DOTNET", "BASE" ] }, @@ -691,6 +822,13 @@ "type":"string", "min":1 }, + "OAuthProviderException":{ + "type":"structure", + "members":{ + }, + "documentation":"

There was a problem with the underlying OAuth provider.

", + "exception":true + }, "PhaseContext":{ "type":"structure", "members":{ @@ -767,6 +905,10 @@ "lastModified":{ "shape":"Timestamp", "documentation":"

When the build project's settings were last modified, expressed in Unix time format.

" + }, + "webhook":{ + "shape":"Webhook", + "documentation":"

Information about a webhook in GitHub that connects repository events to a build project in AWS CodeBuild.

" } }, "documentation":"

Information about a build project.

" @@ -833,7 +975,7 @@ }, "privilegedMode":{ "shape":"WrapperBoolean", - "documentation":"

If set to true, enables running the Docker daemon inside a Docker container; otherwise, false or not specified (the default). This value must be set to true only if this build project will be used to build Docker images, and the specified build environment image is not one provided by AWS CodeBuild with Docker support. Otherwise, all associated builds that attempt to interact with the Docker daemon will fail. Note that you must also start the Docker daemon so that your builds can interact with it as needed. One way to do this is to initialize the Docker daemon in the install phase of your build spec by running the following build commands. (Do not run the following build commands if the specified build environment image is provided by AWS CodeBuild with Docker support.)

- nohup /usr/local/bin/dockerd --host=unix:///var/run/docker.sock --host=tcp://0.0.0.0:2375 --storage-driver=vfs& - timeout -t 15 sh -c \"until docker info; do echo .; sleep 1; done\"

" + "documentation":"

If set to true, enables running the Docker daemon inside a Docker container; otherwise, false or not specified (the default). This value must be set to true only if this build project will be used to build Docker images, and the specified build environment image is not one provided by AWS CodeBuild with Docker support. Otherwise, all associated builds that attempt to interact with the Docker daemon will fail. Note that you must also start the Docker daemon so that your builds can interact with it as needed. One way to do this is to initialize the Docker daemon in the install phase of your build spec by running the following build commands. (Do not run the following build commands if the specified build environment image is provided by AWS CodeBuild with Docker support.)

- nohup /usr/local/bin/dockerd --host=unix:///var/run/docker.sock --host=tcp://0.0.0.0:2375 --storage-driver=overlay& - timeout -t 15 sh -c \"until docker info; do echo .; sleep 1; done\"

" } }, "documentation":"

Information about the build environment of the build project.

" @@ -864,11 +1006,11 @@ "members":{ "type":{ "shape":"SourceType", - "documentation":"

The type of repository that contains the source code to be built. Valid values include:

" + "documentation":"

The type of repository that contains the source code to be built. Valid values include:

" }, "location":{ "shape":"String", - "documentation":"

Information about the location of the source code to be built. Valid values include:

" + "documentation":"

Information about the location of the source code to be built. Valid values include:

" }, "buildspec":{ "shape":"String", @@ -876,7 +1018,7 @@ }, "auth":{ "shape":"SourceAuth", - "documentation":"

Information about the authorization settings for AWS CodeBuild to access the source code to be built.

This information is for the AWS CodeBuild console's use only. Your code should not get or set this information directly (unless the build project's source type value is GITHUB).

" + "documentation":"

Information about the authorization settings for AWS CodeBuild to access the source code to be built.

This information is for the AWS CodeBuild console's use only. Your code should not get or set this information directly (unless the build project's source type value is BITBUCKET or GITHUB).

" } }, "documentation":"

Information about the build input source code for the build project.

" @@ -919,7 +1061,7 @@ "documentation":"

The resource value that applies to the specified authorization type.

" } }, - "documentation":"

Information about the authorization settings for AWS CodeBuild to access the source code to be built.

This information is for the AWS CodeBuild console's use only. Your code should not get or set this information directly (unless the build project's source type value is GITHUB).

" + "documentation":"

Information about the authorization settings for AWS CodeBuild to access the source code to be built.

This information is for the AWS CodeBuild console's use only. Your code should not get or set this information directly (unless the build project's source type value is BITBUCKET or GITHUB).

" }, "SourceAuthType":{ "type":"string", @@ -931,7 +1073,8 @@ "CODECOMMIT", "CODEPIPELINE", "GITHUB", - "S3" + "S3", + "BITBUCKET" ] }, "StartBuildInput":{ @@ -944,7 +1087,7 @@ }, "sourceVersion":{ "shape":"String", - "documentation":"

A version of the build input to be built, for this build only. If not specified, the latest version will be used. If specified, must be one of:

" + "documentation":"

A version of the build input to be built, for this build only. If not specified, the latest version will be used. If specified, must be one of:

" }, "artifactsOverride":{ "shape":"ProjectArtifacts", @@ -1087,9 +1230,19 @@ "min":1, "pattern":"^([\\\\p{L}\\\\p{Z}\\\\p{N}_.:/=@+\\\\-]*)$" }, + "Webhook":{ + "type":"structure", + "members":{ + "url":{ + "shape":"NonEmptyString", + "documentation":"

The URL to the webhook.

" + } + }, + "documentation":"

Information about a webhook in GitHub that connects repository events to a build project in AWS CodeBuild.

" + }, "WrapperBoolean":{"type":"boolean"}, "WrapperInt":{"type":"integer"}, "WrapperLong":{"type":"long"} }, - "documentation":"AWS CodeBuild

AWS CodeBuild is a fully managed build service in the cloud. AWS CodeBuild compiles your source code, runs unit tests, and produces artifacts that are ready to deploy. AWS CodeBuild eliminates the need to provision, manage, and scale your own build servers. It provides prepackaged build environments for the most popular programming languages and build tools, such as Apach Maven, Gradle, and more. You can also fully customize build environments in AWS CodeBuild to use your own build tools. AWS CodeBuild scales automatically to meet peak build requests, and you pay only for the build time you consume. For more information about AWS CodeBuild, see the AWS CodeBuild User Guide.

AWS CodeBuild supports these operations:

" + "documentation":"AWS CodeBuild

AWS CodeBuild is a fully managed build service in the cloud. AWS CodeBuild compiles your source code, runs unit tests, and produces artifacts that are ready to deploy. AWS CodeBuild eliminates the need to provision, manage, and scale your own build servers. It provides prepackaged build environments for the most popular programming languages and build tools, such as Apache Maven, Gradle, and more. You can also fully customize build environments in AWS CodeBuild to use your own build tools. AWS CodeBuild scales automatically to meet peak build requests, and you pay only for the build time you consume. For more information about AWS CodeBuild, see the AWS CodeBuild User Guide.

AWS CodeBuild supports these operations:

" } diff --git a/services/codecommit/src/main/resources/codegen-resources/service-2.json b/services/codecommit/src/main/resources/codegen-resources/service-2.json index 7de5341ea9e6..3093a2cdab25 100644 --- a/services/codecommit/src/main/resources/codegen-resources/service-2.json +++ b/services/codecommit/src/main/resources/codegen-resources/service-2.json @@ -57,6 +57,43 @@ ], "documentation":"

Creates a new branch in a repository and points the branch to a commit.

Calling the create branch operation does not set a repository's default branch. To do this, call the update default branch operation.

" }, + "CreatePullRequest":{ + "name":"CreatePullRequest", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"CreatePullRequestInput"}, + "output":{"shape":"CreatePullRequestOutput"}, + "errors":[ + {"shape":"RepositoryNameRequiredException"}, + {"shape":"InvalidRepositoryNameException"}, + {"shape":"RepositoryDoesNotExistException"}, + {"shape":"EncryptionIntegrityChecksFailedException"}, + {"shape":"EncryptionKeyAccessDeniedException"}, + {"shape":"EncryptionKeyDisabledException"}, + {"shape":"EncryptionKeyNotFoundException"}, + {"shape":"EncryptionKeyUnavailableException"}, + {"shape":"ClientRequestTokenRequiredException"}, + {"shape":"InvalidClientRequestTokenException"}, + {"shape":"IdempotencyParameterMismatchException"}, + {"shape":"ReferenceNameRequiredException"}, + {"shape":"InvalidReferenceNameException"}, + {"shape":"ReferenceDoesNotExistException"}, + {"shape":"ReferenceTypeNotSupportedException"}, + {"shape":"TitleRequiredException"}, + {"shape":"InvalidTitleException"}, + {"shape":"InvalidDescriptionException"}, + {"shape":"TargetsRequiredException"}, + {"shape":"InvalidTargetsException"}, + {"shape":"TargetRequiredException"}, + {"shape":"InvalidTargetException"}, + {"shape":"MultipleRepositoriesInPullRequestException"}, + {"shape":"MaximumOpenPullRequestsExceededException"}, + {"shape":"SourceAndDestinationAreSameException"} + ], + "documentation":"

Creates a pull request in the specified repository.
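As a rough sketch of the request shape this operation introduces (class and method names assume the Java SDK 2.x code generated from this model; the repository, branches, and text are hypothetical):

import java.util.UUID;
import software.amazon.awssdk.services.codecommit.CodeCommitClient;
import software.amazon.awssdk.services.codecommit.model.CreatePullRequestRequest;
import software.amazon.awssdk.services.codecommit.model.Target;

public class CreatePullRequestSketch {
    public static void main(String[] args) {
        CodeCommitClient cc = CodeCommitClient.create();
        String pullRequestId = cc.createPullRequest(CreatePullRequestRequest.builder()
                .title("Add input validation")                     // hypothetical title
                .description("Validates request payloads before processing.")
                .clientRequestToken(UUID.randomUUID().toString())  // idempotency token
                .targets(Target.builder()
                        .repositoryName("MyDemoRepo")              // hypothetical repository
                        .sourceReference("feature/validation")
                        .destinationReference("master")
                        .build())
                .build())
                .pullRequest()
                .pullRequestId();
        System.out.println("Created pull request " + pullRequestId);
    }
}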

" + }, "CreateRepository":{ "name":"CreateRepository", "http":{ @@ -79,6 +116,45 @@ ], "documentation":"

Creates a new, empty repository.

" }, + "DeleteBranch":{ + "name":"DeleteBranch", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"DeleteBranchInput"}, + "output":{"shape":"DeleteBranchOutput"}, + "errors":[ + {"shape":"RepositoryNameRequiredException"}, + {"shape":"RepositoryDoesNotExistException"}, + {"shape":"InvalidRepositoryNameException"}, + {"shape":"BranchNameRequiredException"}, + {"shape":"InvalidBranchNameException"}, + {"shape":"DefaultBranchCannotBeDeletedException"}, + {"shape":"EncryptionIntegrityChecksFailedException"}, + {"shape":"EncryptionKeyAccessDeniedException"}, + {"shape":"EncryptionKeyDisabledException"}, + {"shape":"EncryptionKeyNotFoundException"}, + {"shape":"EncryptionKeyUnavailableException"} + ], + "documentation":"

Deletes a branch from a repository, unless that branch is the default branch for the repository.

" + }, + "DeleteCommentContent":{ + "name":"DeleteCommentContent", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"DeleteCommentContentInput"}, + "output":{"shape":"DeleteCommentContentOutput"}, + "errors":[ + {"shape":"CommentDoesNotExistException"}, + {"shape":"CommentIdRequiredException"}, + {"shape":"InvalidCommentIdException"}, + {"shape":"CommentDeletedException"} + ], + "documentation":"

Deletes the content of a comment made on a change, file, or commit in a repository.

" + }, "DeleteRepository":{ "name":"DeleteRepository", "http":{ @@ -96,7 +172,32 @@ {"shape":"EncryptionKeyNotFoundException"}, {"shape":"EncryptionKeyUnavailableException"} ], - "documentation":"

Deletes a repository. If a specified repository was already deleted, a null repository ID will be returned.

Deleting a repository also deletes all associated objects and metadata. After a repository is deleted, all future push calls to the deleted repository will fail.

" + "documentation":"

Deletes a repository. If a specified repository was already deleted, a null repository ID will be returned.

Deleting a repository also deletes all associated objects and metadata. After a repository is deleted, all future push calls to the deleted repository will fail.

" + }, + "DescribePullRequestEvents":{ + "name":"DescribePullRequestEvents", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"DescribePullRequestEventsInput"}, + "output":{"shape":"DescribePullRequestEventsOutput"}, + "errors":[ + {"shape":"PullRequestDoesNotExistException"}, + {"shape":"InvalidPullRequestIdException"}, + {"shape":"PullRequestIdRequiredException"}, + {"shape":"InvalidPullRequestEventTypeException"}, + {"shape":"InvalidActorArnException"}, + {"shape":"ActorDoesNotExistException"}, + {"shape":"InvalidMaxResultsException"}, + {"shape":"InvalidContinuationTokenException"}, + {"shape":"EncryptionIntegrityChecksFailedException"}, + {"shape":"EncryptionKeyAccessDeniedException"}, + {"shape":"EncryptionKeyDisabledException"}, + {"shape":"EncryptionKeyNotFoundException"}, + {"shape":"EncryptionKeyUnavailableException"} + ], + "documentation":"

Returns information about one or more pull request events.

" }, "GetBlob":{ "name":"GetBlob", @@ -145,6 +246,76 @@ ], "documentation":"

Returns information about a repository branch, including its name and the last commit ID.

" }, + "GetComment":{ + "name":"GetComment", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"GetCommentInput"}, + "output":{"shape":"GetCommentOutput"}, + "errors":[ + {"shape":"CommentDoesNotExistException"}, + {"shape":"CommentIdRequiredException"}, + {"shape":"InvalidCommentIdException"}, + {"shape":"CommentDeletedException"} + ], + "documentation":"

Returns the content of a comment made on a change, file, or commit in a repository.

" + }, + "GetCommentsForComparedCommit":{ + "name":"GetCommentsForComparedCommit", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"GetCommentsForComparedCommitInput"}, + "output":{"shape":"GetCommentsForComparedCommitOutput"}, + "errors":[ + {"shape":"RepositoryNameRequiredException"}, + {"shape":"RepositoryDoesNotExistException"}, + {"shape":"InvalidRepositoryNameException"}, + {"shape":"CommitIdRequiredException"}, + {"shape":"InvalidCommitIdException"}, + {"shape":"CommitDoesNotExistException"}, + {"shape":"InvalidMaxResultsException"}, + {"shape":"InvalidContinuationTokenException"}, + {"shape":"EncryptionIntegrityChecksFailedException"}, + {"shape":"EncryptionKeyAccessDeniedException"}, + {"shape":"EncryptionKeyDisabledException"}, + {"shape":"EncryptionKeyNotFoundException"}, + {"shape":"EncryptionKeyUnavailableException"} + ], + "documentation":"

Returns information about comments made on the comparison between two commits.

" + }, + "GetCommentsForPullRequest":{ + "name":"GetCommentsForPullRequest", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"GetCommentsForPullRequestInput"}, + "output":{"shape":"GetCommentsForPullRequestOutput"}, + "errors":[ + {"shape":"PullRequestIdRequiredException"}, + {"shape":"PullRequestDoesNotExistException"}, + {"shape":"InvalidPullRequestIdException"}, + {"shape":"RepositoryNameRequiredException"}, + {"shape":"RepositoryDoesNotExistException"}, + {"shape":"InvalidRepositoryNameException"}, + {"shape":"CommitIdRequiredException"}, + {"shape":"InvalidCommitIdException"}, + {"shape":"CommitDoesNotExistException"}, + {"shape":"InvalidMaxResultsException"}, + {"shape":"InvalidContinuationTokenException"}, + {"shape":"RepositoryNotAssociatedWithPullRequestException"}, + {"shape":"EncryptionIntegrityChecksFailedException"}, + {"shape":"EncryptionKeyAccessDeniedException"}, + {"shape":"EncryptionKeyDisabledException"}, + {"shape":"EncryptionKeyNotFoundException"}, + {"shape":"EncryptionKeyUnavailableException"} + ], + "documentation":"

Returns comments made on a pull request.

" + }, "GetCommit":{ "name":"GetCommit", "http":{ @@ -196,6 +367,54 @@ ], "documentation":"

Returns information about the differences in a valid commit specifier (such as a branch, tag, HEAD, commit ID or other fully qualified reference). Results can be limited to a specified path.

" }, + "GetMergeConflicts":{ + "name":"GetMergeConflicts", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"GetMergeConflictsInput"}, + "output":{"shape":"GetMergeConflictsOutput"}, + "errors":[ + {"shape":"RepositoryNameRequiredException"}, + {"shape":"InvalidRepositoryNameException"}, + {"shape":"RepositoryDoesNotExistException"}, + {"shape":"MergeOptionRequiredException"}, + {"shape":"InvalidMergeOptionException"}, + {"shape":"InvalidDestinationCommitSpecifierException"}, + {"shape":"InvalidSourceCommitSpecifierException"}, + {"shape":"CommitRequiredException"}, + {"shape":"CommitDoesNotExistException"}, + {"shape":"InvalidCommitException"}, + {"shape":"TipsDivergenceExceededException"}, + {"shape":"EncryptionIntegrityChecksFailedException"}, + {"shape":"EncryptionKeyAccessDeniedException"}, + {"shape":"EncryptionKeyDisabledException"}, + {"shape":"EncryptionKeyNotFoundException"}, + {"shape":"EncryptionKeyUnavailableException"} + ], + "documentation":"

Returns information about merge conflicts between the before and after commit IDs for a pull request in a repository.

" + }, + "GetPullRequest":{ + "name":"GetPullRequest", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"GetPullRequestInput"}, + "output":{"shape":"GetPullRequestOutput"}, + "errors":[ + {"shape":"PullRequestDoesNotExistException"}, + {"shape":"InvalidPullRequestIdException"}, + {"shape":"PullRequestIdRequiredException"}, + {"shape":"EncryptionIntegrityChecksFailedException"}, + {"shape":"EncryptionKeyAccessDeniedException"}, + {"shape":"EncryptionKeyDisabledException"}, + {"shape":"EncryptionKeyNotFoundException"}, + {"shape":"EncryptionKeyUnavailableException"} + ], + "documentation":"

Gets information about a pull request in a specified repository.

" + }, "GetRepository":{ "name":"GetRepository", "http":{ @@ -257,6 +476,31 @@ ], "documentation":"

Gets information about one or more branches in a repository.

" }, + "ListPullRequests":{ + "name":"ListPullRequests", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"ListPullRequestsInput"}, + "output":{"shape":"ListPullRequestsOutput"}, + "errors":[ + {"shape":"InvalidPullRequestStatusException"}, + {"shape":"InvalidAuthorArnException"}, + {"shape":"AuthorDoesNotExistException"}, + {"shape":"RepositoryNameRequiredException"}, + {"shape":"InvalidRepositoryNameException"}, + {"shape":"RepositoryDoesNotExistException"}, + {"shape":"InvalidMaxResultsException"}, + {"shape":"InvalidContinuationTokenException"}, + {"shape":"EncryptionIntegrityChecksFailedException"}, + {"shape":"EncryptionKeyAccessDeniedException"}, + {"shape":"EncryptionKeyDisabledException"}, + {"shape":"EncryptionKeyNotFoundException"}, + {"shape":"EncryptionKeyUnavailableException"} + ], + "documentation":"

Returns a list of pull requests for a specified repository. The return list can be refined by pull request status or pull request author ARN.

" + }, "ListRepositories":{ "name":"ListRepositories", "http":{ @@ -272,6 +516,132 @@ ], "documentation":"

Gets information about one or more repositories.

" }, + "MergePullRequestByFastForward":{ + "name":"MergePullRequestByFastForward", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"MergePullRequestByFastForwardInput"}, + "output":{"shape":"MergePullRequestByFastForwardOutput"}, + "errors":[ + {"shape":"ManualMergeRequiredException"}, + {"shape":"PullRequestAlreadyClosedException"}, + {"shape":"PullRequestDoesNotExistException"}, + {"shape":"InvalidPullRequestIdException"}, + {"shape":"PullRequestIdRequiredException"}, + {"shape":"TipOfSourceReferenceIsDifferentException"}, + {"shape":"ReferenceDoesNotExistException"}, + {"shape":"InvalidCommitIdException"}, + {"shape":"RepositoryNameRequiredException"}, + {"shape":"InvalidRepositoryNameException"}, + {"shape":"RepositoryDoesNotExistException"}, + {"shape":"EncryptionIntegrityChecksFailedException"}, + {"shape":"EncryptionKeyAccessDeniedException"}, + {"shape":"EncryptionKeyDisabledException"}, + {"shape":"EncryptionKeyNotFoundException"}, + {"shape":"EncryptionKeyUnavailableException"} + ], + "documentation":"

Closes a pull request and attempts to merge the source commit of a pull request into the specified destination branch for that pull request at the specified commit using the fast-forward merge option.
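For completeness, a minimal sketch of the merge call (class and method names assume the Java SDK 2.x code generated from this model; the pull request ID and repository name are placeholders):

import software.amazon.awssdk.services.codecommit.CodeCommitClient;
import software.amazon.awssdk.services.codecommit.model.MergePullRequestByFastForwardRequest;

public class MergePullRequestSketch {
    public static void main(String[] args) {
        CodeCommitClient cc = CodeCommitClient.create();
        cc.mergePullRequestByFastForward(MergePullRequestByFastForwardRequest.builder()
                .pullRequestId("42")           // hypothetical pull request ID
                .repositoryName("MyDemoRepo")  // hypothetical repository
                .build());
        // Optionally pass sourceCommitId to fail the merge if the source branch tip has moved.
    }
}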

" + }, + "PostCommentForComparedCommit":{ + "name":"PostCommentForComparedCommit", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"PostCommentForComparedCommitInput"}, + "output":{"shape":"PostCommentForComparedCommitOutput"}, + "errors":[ + {"shape":"RepositoryNameRequiredException"}, + {"shape":"RepositoryDoesNotExistException"}, + {"shape":"InvalidRepositoryNameException"}, + {"shape":"ClientRequestTokenRequiredException"}, + {"shape":"InvalidClientRequestTokenException"}, + {"shape":"IdempotencyParameterMismatchException"}, + {"shape":"CommentContentRequiredException"}, + {"shape":"CommentContentSizeLimitExceededException"}, + {"shape":"InvalidFileLocationException"}, + {"shape":"InvalidRelativeFileVersionEnumException"}, + {"shape":"PathRequiredException"}, + {"shape":"InvalidFilePositionException"}, + {"shape":"CommitIdRequiredException"}, + {"shape":"InvalidCommitIdException"}, + {"shape":"EncryptionIntegrityChecksFailedException"}, + {"shape":"EncryptionKeyAccessDeniedException"}, + {"shape":"EncryptionKeyDisabledException"}, + {"shape":"EncryptionKeyNotFoundException"}, + {"shape":"EncryptionKeyUnavailableException"}, + {"shape":"BeforeCommitIdAndAfterCommitIdAreSameException"}, + {"shape":"CommitDoesNotExistException"}, + {"shape":"InvalidPathException"}, + {"shape":"PathDoesNotExistException"} + ], + "documentation":"

Posts a comment on the comparison between two commits.

", + "idempotent":true + }, + "PostCommentForPullRequest":{ + "name":"PostCommentForPullRequest", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"PostCommentForPullRequestInput"}, + "output":{"shape":"PostCommentForPullRequestOutput"}, + "errors":[ + {"shape":"PullRequestDoesNotExistException"}, + {"shape":"InvalidPullRequestIdException"}, + {"shape":"PullRequestIdRequiredException"}, + {"shape":"RepositoryNotAssociatedWithPullRequestException"}, + {"shape":"RepositoryNameRequiredException"}, + {"shape":"RepositoryDoesNotExistException"}, + {"shape":"InvalidRepositoryNameException"}, + {"shape":"ClientRequestTokenRequiredException"}, + {"shape":"InvalidClientRequestTokenException"}, + {"shape":"IdempotencyParameterMismatchException"}, + {"shape":"CommentContentRequiredException"}, + {"shape":"CommentContentSizeLimitExceededException"}, + {"shape":"InvalidFileLocationException"}, + {"shape":"InvalidRelativeFileVersionEnumException"}, + {"shape":"PathRequiredException"}, + {"shape":"InvalidFilePositionException"}, + {"shape":"CommitIdRequiredException"}, + {"shape":"InvalidCommitIdException"}, + {"shape":"EncryptionIntegrityChecksFailedException"}, + {"shape":"EncryptionKeyAccessDeniedException"}, + {"shape":"EncryptionKeyDisabledException"}, + {"shape":"EncryptionKeyNotFoundException"}, + {"shape":"EncryptionKeyUnavailableException"}, + {"shape":"CommitDoesNotExistException"}, + {"shape":"InvalidPathException"}, + {"shape":"PathDoesNotExistException"}, + {"shape":"PathRequiredException"}, + {"shape":"BeforeCommitIdAndAfterCommitIdAreSameException"} + ], + "documentation":"

Posts a comment on a pull request.

", + "idempotent":true + }, + "PostCommentReply":{ + "name":"PostCommentReply", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"PostCommentReplyInput"}, + "output":{"shape":"PostCommentReplyOutput"}, + "errors":[ + {"shape":"ClientRequestTokenRequiredException"}, + {"shape":"InvalidClientRequestTokenException"}, + {"shape":"IdempotencyParameterMismatchException"}, + {"shape":"CommentContentRequiredException"}, + {"shape":"CommentContentSizeLimitExceededException"}, + {"shape":"CommentDoesNotExistException"}, + {"shape":"CommentIdRequiredException"}, + {"shape":"InvalidCommentIdException"} + ], + "documentation":"

Posts a comment in reply to an existing comment on a comparison between commits or a pull request.

", + "idempotent":true + }, "PutRepositoryTriggers":{ "name":"PutRepositoryTriggers", "http":{ @@ -338,6 +708,25 @@ ], "documentation":"

Tests the functionality of repository triggers by sending information to the trigger target. If real data is available in the repository, the test will send data from the last commit. If no data is available, sample data will be generated.

" }, + "UpdateComment":{ + "name":"UpdateComment", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"UpdateCommentInput"}, + "output":{"shape":"UpdateCommentOutput"}, + "errors":[ + {"shape":"CommentContentRequiredException"}, + {"shape":"CommentContentSizeLimitExceededException"}, + {"shape":"CommentDoesNotExistException"}, + {"shape":"CommentIdRequiredException"}, + {"shape":"InvalidCommentIdException"}, + {"shape":"CommentNotCreatedByCallerException"}, + {"shape":"CommentDeletedException"} + ], + "documentation":"

Replaces the contents of a comment.

" + }, "UpdateDefaultBranch":{ "name":"UpdateDefaultBranch", "http":{ @@ -360,6 +749,64 @@ ], "documentation":"

Sets or changes the default branch name for the specified repository.

If you use this operation to change the default branch name to the current default branch name, a success message is returned even though the default branch did not change.

" }, + "UpdatePullRequestDescription":{ + "name":"UpdatePullRequestDescription", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"UpdatePullRequestDescriptionInput"}, + "output":{"shape":"UpdatePullRequestDescriptionOutput"}, + "errors":[ + {"shape":"PullRequestDoesNotExistException"}, + {"shape":"InvalidPullRequestIdException"}, + {"shape":"PullRequestIdRequiredException"}, + {"shape":"InvalidDescriptionException"}, + {"shape":"PullRequestAlreadyClosedException"} + ], + "documentation":"

Replaces the contents of the description of a pull request.

" + }, + "UpdatePullRequestStatus":{ + "name":"UpdatePullRequestStatus", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"UpdatePullRequestStatusInput"}, + "output":{"shape":"UpdatePullRequestStatusOutput"}, + "errors":[ + {"shape":"PullRequestDoesNotExistException"}, + {"shape":"InvalidPullRequestIdException"}, + {"shape":"PullRequestIdRequiredException"}, + {"shape":"InvalidPullRequestStatusUpdateException"}, + {"shape":"InvalidPullRequestStatusException"}, + {"shape":"PullRequestStatusRequiredException"}, + {"shape":"EncryptionIntegrityChecksFailedException"}, + {"shape":"EncryptionKeyAccessDeniedException"}, + {"shape":"EncryptionKeyDisabledException"}, + {"shape":"EncryptionKeyNotFoundException"}, + {"shape":"EncryptionKeyUnavailableException"} + ], + "documentation":"

Updates the status of a pull request.

" + }, + "UpdatePullRequestTitle":{ + "name":"UpdatePullRequestTitle", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"UpdatePullRequestTitleInput"}, + "output":{"shape":"UpdatePullRequestTitleOutput"}, + "errors":[ + {"shape":"PullRequestDoesNotExistException"}, + {"shape":"InvalidPullRequestIdException"}, + {"shape":"PullRequestIdRequiredException"}, + {"shape":"TitleRequiredException"}, + {"shape":"InvalidTitleException"}, + {"shape":"PullRequestAlreadyClosedException"} + ], + "documentation":"

Replaces the title of a pull request.

" + }, "UpdateRepositoryDescription":{ "name":"UpdateRepositoryDescription", "http":{ @@ -398,8 +845,22 @@ }, "shapes":{ "AccountId":{"type":"string"}, + "ActorDoesNotExistException":{ + "type":"structure", + "members":{ + }, + "documentation":"

The specified Amazon Resource Name (ARN) does not exist in the AWS account.

", + "exception":true + }, "AdditionalData":{"type":"string"}, "Arn":{"type":"string"}, + "AuthorDoesNotExistException":{ + "type":"structure", + "members":{ + }, + "documentation":"

The specified Amazon Resource Name (ARN) does not exist in the AWS account.

", + "exception":true + }, "BatchGetRepositoriesInput":{ "type":"structure", "required":["repositoryNames"], @@ -425,14 +886,21 @@ }, "documentation":"

Represents the output of a batch get repositories operation.

" }, - "BlobIdDoesNotExistException":{ + "BeforeCommitIdAndAfterCommitIdAreSameException":{ "type":"structure", "members":{ }, - "documentation":"

The specified blob does not exist.

", + "documentation":"

The before commit ID and the after commit ID are the same, which is not valid. The before commit ID and the after commit ID must be different commit IDs.

", "exception":true }, - "BlobIdRequiredException":{ + "BlobIdDoesNotExistException":{ + "type":"structure", + "members":{ + }, + "documentation":"

The specified blob does not exist.

", + "exception":true + }, + "BlobIdRequiredException":{ "type":"structure", "members":{ }, @@ -480,7 +948,7 @@ }, "BranchName":{ "type":"string", - "max":100, + "max":256, "min":1 }, "BranchNameExistsException":{ @@ -509,11 +977,188 @@ "D" ] }, + "ClientRequestToken":{"type":"string"}, + "ClientRequestTokenRequiredException":{ + "type":"structure", + "members":{ + }, + "documentation":"

A client request token is required. A client request token is a unique, client-generated idempotency token that when provided in a request, ensures the request cannot be repeated with a changed parameter. If a request is received with the same parameters and a token is included, the request will return information about the initial request that used that token.

", + "exception":true + }, "CloneUrlHttp":{"type":"string"}, "CloneUrlSsh":{"type":"string"}, + "Comment":{ + "type":"structure", + "members":{ + "commentId":{ + "shape":"CommentId", + "documentation":"

The system-generated comment ID.

" + }, + "content":{ + "shape":"Content", + "documentation":"

The content of the comment.

" + }, + "inReplyTo":{ + "shape":"CommentId", + "documentation":"

The ID of the comment for which this comment is a reply, if any.

" + }, + "creationDate":{ + "shape":"CreationDate", + "documentation":"

The date and time the comment was created, in timestamp format.

" + }, + "lastModifiedDate":{ + "shape":"LastModifiedDate", + "documentation":"

The date and time the comment was most recently modified, in timestamp format.

" + }, + "authorArn":{ + "shape":"Arn", + "documentation":"

The Amazon Resource Name (ARN) of the person who posted the comment.

" + }, + "deleted":{ + "shape":"IsCommentDeleted", + "documentation":"

A Boolean value indicating whether the comment has been deleted.

" + }, + "clientRequestToken":{ + "shape":"ClientRequestToken", + "documentation":"

A unique, client-generated idempotency token that when provided in a request, ensures the request cannot be repeated with a changed parameter. If a request is received with the same parameters and a token is included, the request will return information about the initial request that used that token.

" + } + }, + "documentation":"

Returns information about a specific comment.

" + }, + "CommentContentRequiredException":{ + "type":"structure", + "members":{ + }, + "documentation":"

The comment is empty. You must provide some content for a comment. The content cannot be null.

", + "exception":true + }, + "CommentContentSizeLimitExceededException":{ + "type":"structure", + "members":{ + }, + "documentation":"

The comment is too large. Comments are limited to 1,000 characters.

", + "exception":true + }, + "CommentDeletedException":{ + "type":"structure", + "members":{ + }, + "documentation":"

This comment has already been deleted. You cannot edit or delete a deleted comment.

", + "exception":true + }, + "CommentDoesNotExistException":{ + "type":"structure", + "members":{ + }, + "documentation":"

No comment exists with the provided ID. Verify that you have provided the correct ID, and then try again.

", + "exception":true + }, + "CommentId":{"type":"string"}, + "CommentIdRequiredException":{ + "type":"structure", + "members":{ + }, + "documentation":"

The comment ID is missing or null. A comment ID is required.

", + "exception":true + }, + "CommentNotCreatedByCallerException":{ + "type":"structure", + "members":{ + }, + "documentation":"

You cannot modify or delete this comment. Only comment authors can modify or delete their comments.

", + "exception":true + }, + "Comments":{ + "type":"list", + "member":{"shape":"Comment"} + }, + "CommentsForComparedCommit":{ + "type":"structure", + "members":{ + "repositoryName":{ + "shape":"RepositoryName", + "documentation":"

The name of the repository that contains the compared commits.

" + }, + "beforeCommitId":{ + "shape":"CommitId", + "documentation":"

The full commit ID of the commit used to establish the 'before' of the comparison.

" + }, + "afterCommitId":{ + "shape":"CommitId", + "documentation":"

The full commit ID of the commit used to establish the 'after' of the comparison.

" + }, + "beforeBlobId":{ + "shape":"ObjectId", + "documentation":"

The full blob ID of the commit used to establish the 'before' of the comparison.

" + }, + "afterBlobId":{ + "shape":"ObjectId", + "documentation":"

The full blob ID of the commit used to establish the 'after' of the comparison.

" + }, + "location":{ + "shape":"Location", + "documentation":"

Location information about the comment on the comparison, including the file name, line number, and whether the version of the file where the comment was made is 'BEFORE' or 'AFTER'.

" + }, + "comments":{ + "shape":"Comments", + "documentation":"

An array of comment objects. Each comment object contains information about a comment on the comparison between commits.

" + } + }, + "documentation":"

Returns information about comments on the comparison between two commits.

" + }, + "CommentsForComparedCommitData":{ + "type":"list", + "member":{"shape":"CommentsForComparedCommit"} + }, + "CommentsForPullRequest":{ + "type":"structure", + "members":{ + "pullRequestId":{ + "shape":"PullRequestId", + "documentation":"

The system-generated ID of the pull request.

" + }, + "repositoryName":{ + "shape":"RepositoryName", + "documentation":"

The name of the repository that contains the pull request.

" + }, + "beforeCommitId":{ + "shape":"CommitId", + "documentation":"

The full commit ID of the commit that was the tip of the destination branch when the pull request was created. This commit will be superseded by the after commit in the source branch when and if you merge the source branch into the destination branch.

" + }, + "afterCommitId":{ + "shape":"CommitId", + "documentation":"

The full commit ID of the commit that was the tip of the source branch at the time the comment was made.

" + }, + "beforeBlobId":{ + "shape":"ObjectId", + "documentation":"

The full blob ID of the file on which you want to comment on the destination commit.

" + }, + "afterBlobId":{ + "shape":"ObjectId", + "documentation":"

The full blob ID of the file on which you want to comment on the source commit.

" + }, + "location":{ + "shape":"Location", + "documentation":"

Location information about the comment on the pull request, including the file name, line number, and whether the version of the file where the comment was made is 'BEFORE' (destination branch) or 'AFTER' (source branch).

" + }, + "comments":{ + "shape":"Comments", + "documentation":"

An array of comment objects. Each comment object contains information about a comment on the pull request.

" + } + }, + "documentation":"

Returns information about comments on a pull request.

" + }, + "CommentsForPullRequestData":{ + "type":"list", + "member":{"shape":"CommentsForPullRequest"} + }, "Commit":{ "type":"structure", "members":{ + "commitId":{ + "shape":"ObjectId", + "documentation":"

The full SHA of the specified commit.

" + }, "treeId":{ "shape":"ObjectId", "documentation":"

Tree information for the specified commit.

" @@ -571,6 +1216,7 @@ "documentation":"

A commit was not specified.

", "exception":true }, + "Content":{"type":"string"}, "CreateBranchInput":{ "type":"structure", "required":[ @@ -594,6 +1240,42 @@ }, "documentation":"

Represents the input of a create branch operation.

" }, + "CreatePullRequestInput":{ + "type":"structure", + "required":[ + "title", + "targets" + ], + "members":{ + "title":{ + "shape":"Title", + "documentation":"

The title of the pull request. This title will be used to identify the pull request to other users in the repository.

" + }, + "description":{ + "shape":"Description", + "documentation":"

A description of the pull request.

" + }, + "targets":{ + "shape":"TargetList", + "documentation":"

The targets for the pull request, including the source of the code to be reviewed (the source branch), and the destination where the creator of the pull request intends the code to be merged after the pull request is closed (the destination branch).

" + }, + "clientRequestToken":{ + "shape":"ClientRequestToken", + "documentation":"

A unique, client-generated idempotency token that when provided in a request, ensures the request cannot be repeated with a changed parameter. If a request is received with the same parameters and a token is included, the request will return information about the initial request that used that token.

The AWS SDKs prepopulate client request tokens. If using an AWS SDK, you do not have to generate an idempotency token, as this will be done for you.

", + "idempotencyToken":true + } + } + }, + "CreatePullRequestOutput":{ + "type":"structure", + "required":["pullRequest"], + "members":{ + "pullRequest":{ + "shape":"PullRequest", + "documentation":"

Information about the newly created pull request.

" + } + } + }, "CreateRepositoryInput":{ "type":"structure", "required":["repositoryName"], @@ -621,6 +1303,60 @@ }, "CreationDate":{"type":"timestamp"}, "Date":{"type":"string"}, + "DefaultBranchCannotBeDeletedException":{ + "type":"structure", + "members":{ + }, + "documentation":"

The specified branch is the default branch for the repository, and cannot be deleted. To delete this branch, you must first set another branch as the default branch.

", + "exception":true + }, + "DeleteBranchInput":{ + "type":"structure", + "required":[ + "repositoryName", + "branchName" + ], + "members":{ + "repositoryName":{ + "shape":"RepositoryName", + "documentation":"

The name of the repository that contains the branch to be deleted.

" + }, + "branchName":{ + "shape":"BranchName", + "documentation":"

The name of the branch to delete.

" + } + }, + "documentation":"

Represents the input of a delete branch operation.

" + }, + "DeleteBranchOutput":{ + "type":"structure", + "members":{ + "deletedBranch":{ + "shape":"BranchInfo", + "documentation":"

Information about the branch deleted by the operation, including the branch name and the commit ID that was the tip of the branch.

" + } + }, + "documentation":"

Represents the output of a delete branch operation.

" + }, + "DeleteCommentContentInput":{ + "type":"structure", + "required":["commentId"], + "members":{ + "commentId":{ + "shape":"CommentId", + "documentation":"

The unique, system-generated ID of the comment. To get this ID, use GetCommentsForComparedCommit or GetCommentsForPullRequest.

" + } + } + }, + "DeleteCommentContentOutput":{ + "type":"structure", + "members":{ + "comment":{ + "shape":"Comment", + "documentation":"

Information about the comment you just deleted.

" + } + } + }, "DeleteRepositoryInput":{ "type":"structure", "required":["repositoryName"], @@ -642,6 +1378,50 @@ }, "documentation":"

Represents the output of a delete repository operation.

" }, + "DescribePullRequestEventsInput":{ + "type":"structure", + "required":["pullRequestId"], + "members":{ + "pullRequestId":{ + "shape":"PullRequestId", + "documentation":"

The system-generated ID of the pull request. To get this ID, use ListPullRequests.

" + }, + "pullRequestEventType":{ + "shape":"PullRequestEventType", + "documentation":"

Optional. The pull request event type about which you want to return information.

" + }, + "actorArn":{ + "shape":"Arn", + "documentation":"

The Amazon Resource Name (ARN) of the user whose actions resulted in the event. Examples include updating the pull request with additional commits or changing the status of a pull request.

" + }, + "nextToken":{ + "shape":"NextToken", + "documentation":"

An enumeration token that when provided in a request, returns the next batch of the results.

" + }, + "maxResults":{ + "shape":"MaxResults", + "documentation":"

A non-negative integer used to limit the number of returned results. The default is 100 events, which is also the maximum number of events that can be returned in a result.

" + } + } + }, + "DescribePullRequestEventsOutput":{ + "type":"structure", + "required":["pullRequestEvents"], + "members":{ + "pullRequestEvents":{ + "shape":"PullRequestEventList", + "documentation":"

Information about the pull request events.

" + }, + "nextToken":{ + "shape":"NextToken", + "documentation":"

An enumeration token that can be used in a request to return the next batch of the results.

" + } + } + }, + "Description":{ + "type":"string", + "max":10240 + }, "Difference":{ "type":"structure", "members":{ @@ -701,6 +1481,7 @@ "documentation":"

The encryption key is not available.

", "exception":true }, + "EventDate":{"type":"timestamp"}, "FileTooLargeException":{ "type":"structure", "members":{ @@ -761,6 +1542,110 @@ }, "documentation":"

Represents the output of a get branch operation.

" }, + "GetCommentInput":{ + "type":"structure", + "required":["commentId"], + "members":{ + "commentId":{ + "shape":"CommentId", + "documentation":"

The unique, system-generated ID of the comment. To get this ID, use GetCommentsForComparedCommit or GetCommentsForPullRequest.

" + } + } + }, + "GetCommentOutput":{ + "type":"structure", + "members":{ + "comment":{ + "shape":"Comment", + "documentation":"

The contents of the comment.

" + } + } + }, + "GetCommentsForComparedCommitInput":{ + "type":"structure", + "required":[ + "repositoryName", + "afterCommitId" + ], + "members":{ + "repositoryName":{ + "shape":"RepositoryName", + "documentation":"

The name of the repository where you want to compare commits.

" + }, + "beforeCommitId":{ + "shape":"CommitId", + "documentation":"

To establish the directionality of the comparison, the full commit ID of the 'before' commit.

" + }, + "afterCommitId":{ + "shape":"CommitId", + "documentation":"

To establish the directionality of the comparison, the full commit ID of the 'after' commit.

" + }, + "nextToken":{ + "shape":"NextToken", + "documentation":"

An enumeration token that when provided in a request, returns the next batch of the results.

" + }, + "maxResults":{ + "shape":"MaxResults", + "documentation":"

A non-negative integer used to limit the number of returned results. The default is 100 comments, and is configurable up to 500.

" + } + } + }, + "GetCommentsForComparedCommitOutput":{ + "type":"structure", + "members":{ + "commentsForComparedCommitData":{ + "shape":"CommentsForComparedCommitData", + "documentation":"

A list of comment objects on the compared commit.

" + }, + "nextToken":{ + "shape":"NextToken", + "documentation":"

An enumeration token that can be used in a request to return the next batch of the results.

" + } + } + }, + "GetCommentsForPullRequestInput":{ + "type":"structure", + "required":["pullRequestId"], + "members":{ + "pullRequestId":{ + "shape":"PullRequestId", + "documentation":"

The system-generated ID of the pull request. To get this ID, use ListPullRequests.

" + }, + "repositoryName":{ + "shape":"RepositoryName", + "documentation":"

The name of the repository that contains the pull request.

" + }, + "beforeCommitId":{ + "shape":"CommitId", + "documentation":"

The full commit ID of the commit in the destination branch that was the tip of the branch at the time the pull request was created.

" + }, + "afterCommitId":{ + "shape":"CommitId", + "documentation":"

The full commit ID of the commit in the source branch that was the tip of the branch at the time the comment was made.

" + }, + "nextToken":{ + "shape":"NextToken", + "documentation":"

An enumeration token that when provided in a request, returns the next batch of the results.

" + }, + "maxResults":{ + "shape":"MaxResults", + "documentation":"

A non-negative integer used to limit the number of returned results. The default is 100 comments. You can return up to 500 comments with a single request.

" + } + } + }, + "GetCommentsForPullRequestOutput":{ + "type":"structure", + "members":{ + "commentsForPullRequestData":{ + "shape":"CommentsForPullRequestData", + "documentation":"

An array of comment objects on the pull request.

" + }, + "nextToken":{ + "shape":"NextToken", + "documentation":"

An enumeration token that can be used in a request to return the next batch of the results.

" + } + } + }, "GetCommitInput":{ "type":"structure", "required":[ @@ -774,7 +1659,7 @@ }, "commitId":{ "shape":"ObjectId", - "documentation":"

The commit ID.

" + "documentation":"

The commit ID. Commit IDs are the full SHA of the commit.

" } }, "documentation":"

Represents the input of a get commit operation.

" @@ -840,18 +1725,87 @@ } } }, - "GetRepositoryInput":{ + "GetMergeConflictsInput":{ "type":"structure", - "required":["repositoryName"], + "required":[ + "repositoryName", + "destinationCommitSpecifier", + "sourceCommitSpecifier", + "mergeOption" + ], "members":{ "repositoryName":{ "shape":"RepositoryName", - "documentation":"

The name of the repository to get information about.

" + "documentation":"

The name of the repository where the pull request was created.

" + }, + "destinationCommitSpecifier":{ + "shape":"CommitName", + "documentation":"

The branch, tag, HEAD, or other fully qualified reference used to identify a commit. For example, a branch name or a full commit ID.

" + }, + "sourceCommitSpecifier":{ + "shape":"CommitName", + "documentation":"

The branch, tag, HEAD, or other fully qualified reference used to identify a commit. For example, a branch name or a full commit ID.

" + }, + "mergeOption":{ + "shape":"MergeOptionTypeEnum", + "documentation":"

The merge option or strategy you want to use to merge the code. The only valid value is FAST_FORWARD_MERGE.

" } - }, - "documentation":"

Represents the input of a get repository operation.

" + } }, - "GetRepositoryOutput":{ + "GetMergeConflictsOutput":{ + "type":"structure", + "required":[ + "mergeable", + "destinationCommitId", + "sourceCommitId" + ], + "members":{ + "mergeable":{ + "shape":"IsMergeable", + "documentation":"

A Boolean value that indicates whether the code is mergeable by the specified merge option.

" + }, + "destinationCommitId":{ + "shape":"CommitId", + "documentation":"

The commit ID of the destination commit specifier that was used in the merge evaluation.

" + }, + "sourceCommitId":{ + "shape":"CommitId", + "documentation":"

The commit ID of the source commit specifier that was used in the merge evaluation.

" + } + } + }, + "GetPullRequestInput":{ + "type":"structure", + "required":["pullRequestId"], + "members":{ + "pullRequestId":{ + "shape":"PullRequestId", + "documentation":"

The system-generated ID of the pull request. To get this ID, use ListPullRequests.

" + } + } + }, + "GetPullRequestOutput":{ + "type":"structure", + "required":["pullRequest"], + "members":{ + "pullRequest":{ + "shape":"PullRequest", + "documentation":"

Information about the specified pull request.

" + } + } + }, + "GetRepositoryInput":{ + "type":"structure", + "required":["repositoryName"], + "members":{ + "repositoryName":{ + "shape":"RepositoryName", + "documentation":"

The name of the repository to get information about.

" + } + }, + "documentation":"

Represents the input of a get repository operation.

" + }, + "GetRepositoryOutput":{ "type":"structure", "members":{ "repositoryMetadata":{ @@ -886,6 +1840,27 @@ }, "documentation":"

Represents the output of a get repository triggers operation.

" }, + "IdempotencyParameterMismatchException":{ + "type":"structure", + "members":{ + }, + "documentation":"

The client request token is not valid. Either the token is not in a valid format, or the token has been used in a previous request and cannot be re-used.

", + "exception":true + }, + "InvalidActorArnException":{ + "type":"structure", + "members":{ + }, + "documentation":"

The Amazon Resource Name (ARN) is not valid. Make sure that you have provided the full ARN for the user who initiated the change for the pull request, and then try again.

", + "exception":true + }, + "InvalidAuthorArnException":{ + "type":"structure", + "members":{ + }, + "documentation":"

The Amazon Resource Name (ARN) is not valid. Make sure that you have provided the full ARN for the author of the pull request, and then try again.

", + "exception":true + }, "InvalidBlobIdException":{ "type":"structure", "members":{ @@ -897,7 +1872,21 @@ "type":"structure", "members":{ }, - "documentation":"

The specified branch name is not valid.

", + "documentation":"

The specified reference name is not valid.

", + "exception":true + }, + "InvalidClientRequestTokenException":{ + "type":"structure", + "members":{ + }, + "documentation":"

The client request token is not valid.

", + "exception":true + }, + "InvalidCommentIdException":{ + "type":"structure", + "members":{ + }, + "documentation":"

The comment ID is not in a valid format. Make sure that you have provided the full comment ID.

", "exception":true }, "InvalidCommitException":{ @@ -921,6 +1910,34 @@ "documentation":"

The specified continuation token is not valid.

", "exception":true }, + "InvalidDescriptionException":{ + "type":"structure", + "members":{ + }, + "documentation":"

The pull request description is not valid. Descriptions are limited to 1,000 characters in length.

", + "exception":true + }, + "InvalidDestinationCommitSpecifierException":{ + "type":"structure", + "members":{ + }, + "documentation":"

The destination commit specifier is not valid. You must provide a valid branch name, tag, or full commit ID.

", + "exception":true + }, + "InvalidFileLocationException":{ + "type":"structure", + "members":{ + }, + "documentation":"

The location of the file is not valid. Make sure that you include the extension of the file as well as the file name.

", + "exception":true + }, + "InvalidFilePositionException":{ + "type":"structure", + "members":{ + }, + "documentation":"

The position is not valid. Make sure that the line number exists in the version of the file you want to comment on.

", + "exception":true + }, "InvalidMaxResultsException":{ "type":"structure", "members":{ @@ -928,6 +1945,13 @@ "documentation":"

The specified number of maximum results is not valid.

", "exception":true }, + "InvalidMergeOptionException":{ + "type":"structure", + "members":{ + }, + "documentation":"

The specified merge option is not valid. The only valid value is FAST_FORWARD_MERGE.

", + "exception":true + }, "InvalidOrderException":{ "type":"structure", "members":{ @@ -942,6 +1966,48 @@ "documentation":"

The specified path is not valid.

", "exception":true }, + "InvalidPullRequestEventTypeException":{ + "type":"structure", + "members":{ + }, + "documentation":"

The pull request event type is not valid.

", + "exception":true + }, + "InvalidPullRequestIdException":{ + "type":"structure", + "members":{ + }, + "documentation":"

The pull request ID is not valid. Make sure that you have provided the full ID and that the pull request is in the specified repository, and then try again.

", + "exception":true + }, + "InvalidPullRequestStatusException":{ + "type":"structure", + "members":{ + }, + "documentation":"

The pull request status is not valid. The only valid values are OPEN and CLOSED.

", + "exception":true + }, + "InvalidPullRequestStatusUpdateException":{ + "type":"structure", + "members":{ + }, + "documentation":"

The pull request status update is not valid. The only valid update is from OPEN to CLOSED.

", + "exception":true + }, + "InvalidReferenceNameException":{ + "type":"structure", + "members":{ + }, + "documentation":"

The specified reference name format is not valid. Reference names must conform to the Git references format, for example refs/heads/master. For more information, see Git Internals - Git References or consult your Git documentation.

", + "exception":true + }, + "InvalidRelativeFileVersionEnumException":{ + "type":"structure", + "members":{ + }, + "documentation":"

Either the enum is not in a valid format, or the specified file version enum is not valid with respect to the current file version.

", + "exception":true + }, "InvalidRepositoryDescriptionException":{ "type":"structure", "members":{ @@ -960,161 +2026,721 @@ "type":"structure", "members":{ }, - "documentation":"

One or more branch names specified for the trigger is not valid.

", - "exception":true + "documentation":"

One or more branch names specified for the trigger is not valid.

", + "exception":true + }, + "InvalidRepositoryTriggerCustomDataException":{ + "type":"structure", + "members":{ + }, + "documentation":"

The custom data provided for the trigger is not valid.

", + "exception":true + }, + "InvalidRepositoryTriggerDestinationArnException":{ + "type":"structure", + "members":{ + }, + "documentation":"

The Amazon Resource Name (ARN) for the trigger is not valid for the specified destination. The most common reason for this error is that the ARN does not meet the requirements for the service type.

", + "exception":true + }, + "InvalidRepositoryTriggerEventsException":{ + "type":"structure", + "members":{ + }, + "documentation":"

One or more events specified for the trigger is not valid. Check to make sure that all events specified match the requirements for allowed events.

", + "exception":true + }, + "InvalidRepositoryTriggerNameException":{ + "type":"structure", + "members":{ + }, + "documentation":"

The name of the trigger is not valid.

", + "exception":true + }, + "InvalidRepositoryTriggerRegionException":{ + "type":"structure", + "members":{ + }, + "documentation":"

The region for the trigger target does not match the region for the repository. Triggers must be created in the same region as the target for the trigger.

", + "exception":true + }, + "InvalidSortByException":{ + "type":"structure", + "members":{ + }, + "documentation":"

The specified sort by value is not valid.

", + "exception":true + }, + "InvalidSourceCommitSpecifierException":{ + "type":"structure", + "members":{ + }, + "documentation":"

The source commit specifier is not valid. You must provide a valid branch name, tag, or full commit ID.

", + "exception":true + }, + "InvalidTargetException":{ + "type":"structure", + "members":{ + }, + "documentation":"

The target for the pull request is not valid. A target must contain the full values for the repository name, source branch, and destination branch for the pull request.

", + "exception":true + }, + "InvalidTargetsException":{ + "type":"structure", + "members":{ + }, + "documentation":"

The targets for the pull request are not valid or are not in a valid format. Targets are a list of target objects. Each target object must contain the full values for the repository name, source branch, and destination branch for a pull request.

", + "exception":true + }, + "InvalidTitleException":{ + "type":"structure", + "members":{ + }, + "documentation":"

The title of the pull request is not valid. Pull request titles cannot exceed 100 characters in length.

", + "exception":true + }, + "IsCommentDeleted":{"type":"boolean"}, + "IsMergeable":{"type":"boolean"}, + "IsMerged":{"type":"boolean"}, + "LastModifiedDate":{"type":"timestamp"}, + "Limit":{ + "type":"integer", + "box":true + }, + "ListBranchesInput":{ + "type":"structure", + "required":["repositoryName"], + "members":{ + "repositoryName":{ + "shape":"RepositoryName", + "documentation":"

The name of the repository that contains the branches.

" + }, + "nextToken":{ + "shape":"NextToken", + "documentation":"

An enumeration token that allows the operation to batch the results.

" + } + }, + "documentation":"

Represents the input of a list branches operation.

" + }, + "ListBranchesOutput":{ + "type":"structure", + "members":{ + "branches":{ + "shape":"BranchNameList", + "documentation":"

The list of branch names.

" + }, + "nextToken":{ + "shape":"NextToken", + "documentation":"

An enumeration token that returns the batch of the results.

" + } + }, + "documentation":"

Represents the output of a list branches operation.

" + }, + "ListPullRequestsInput":{ + "type":"structure", + "required":["repositoryName"], + "members":{ + "repositoryName":{ + "shape":"RepositoryName", + "documentation":"

The name of the repository for which you want to list pull requests.

" + }, + "authorArn":{ + "shape":"Arn", + "documentation":"

Optional. The Amazon Resource Name (ARN) of the user who created the pull request. If used, this filters the results to pull requests created by that user.

" + }, + "pullRequestStatus":{ + "shape":"PullRequestStatusEnum", + "documentation":"

Optional. The status of the pull request. If used, this refines the results to the pull requests that match the specified status.

" + }, + "nextToken":{ + "shape":"NextToken", + "documentation":"

An enumeration token that when provided in a request, returns the next batch of the results.

" + }, + "maxResults":{ + "shape":"MaxResults", + "documentation":"

A non-negative integer used to limit the number of returned results.

" + } + } + }, + "ListPullRequestsOutput":{ + "type":"structure", + "required":["pullRequestIds"], + "members":{ + "pullRequestIds":{ + "shape":"PullRequestIdList", + "documentation":"

The system-generated IDs of the pull requests.

" + }, + "nextToken":{ + "shape":"NextToken", + "documentation":"

An enumeration token that when provided in a request, returns the next batch of the results.

" + } + } + }, + "ListRepositoriesInput":{ + "type":"structure", + "members":{ + "nextToken":{ + "shape":"NextToken", + "documentation":"

An enumeration token that allows the operation to batch the results of the operation. Batch sizes are 1,000 for list repository operations. When the client sends the token back to AWS CodeCommit, another page of 1,000 records is retrieved.

" + }, + "sortBy":{ + "shape":"SortByEnum", + "documentation":"

The criteria used to sort the results of a list repositories operation.

" + }, + "order":{ + "shape":"OrderEnum", + "documentation":"

The order in which to sort the results of a list repositories operation.

" + } + }, + "documentation":"

Represents the input of a list repositories operation.

" + }, + "ListRepositoriesOutput":{ + "type":"structure", + "members":{ + "repositories":{ + "shape":"RepositoryNameIdPairList", + "documentation":"

Lists the repositories called by the list repositories operation.

" + }, + "nextToken":{ + "shape":"NextToken", + "documentation":"

An enumeration token that allows the operation to batch the results of the operation. Batch sizes are 1,000 for list repository operations. When the client sends the token back to AWS CodeCommit, another page of 1,000 records is retrieved.

" + } + }, + "documentation":"

Represents the output of a list repositories operation.

" + }, + "Location":{ + "type":"structure", + "members":{ + "filePath":{ + "shape":"Path", + "documentation":"

The name of the file being compared, including its extension and subdirectory, if any.

" + }, + "filePosition":{ + "shape":"Position", + "documentation":"

The position of a change within a compared file, in line number format.

" + }, + "relativeFileVersion":{ + "shape":"RelativeFileVersionEnum", + "documentation":"

In a comparison of commits or a pull request, whether the change is in the 'before' or 'after' of that comparison.

" + } + }, + "documentation":"

Returns information about the location of a change or comment in the comparison between two commits or a pull request.

" + }, + "ManualMergeRequiredException":{ + "type":"structure", + "members":{ + }, + "documentation":"

The pull request cannot be merged automatically into the destination branch. You must manually merge the branches and resolve any conflicts.

", + "exception":true + }, + "MaxResults":{"type":"integer"}, + "MaximumBranchesExceededException":{ + "type":"structure", + "members":{ + }, + "documentation":"

The number of branches for the trigger was exceeded.

", + "exception":true + }, + "MaximumOpenPullRequestsExceededException":{ + "type":"structure", + "members":{ + }, + "documentation":"

You cannot create the pull request because the repository has too many open pull requests. The maximum number of open pull requests for a repository is 1,000. Close one or more open pull requests, and then try again.

", + "exception":true + }, + "MaximumRepositoryNamesExceededException":{ + "type":"structure", + "members":{ + }, + "documentation":"

The maximum number of allowed repository names was exceeded. Currently, this number is 25.

", + "exception":true + }, + "MaximumRepositoryTriggersExceededException":{ + "type":"structure", + "members":{ + }, + "documentation":"

The number of triggers allowed for the repository was exceeded.

", + "exception":true + }, + "MergeMetadata":{ + "type":"structure", + "members":{ + "isMerged":{ + "shape":"IsMerged", + "documentation":"

A Boolean value indicating whether the merge has been made.

" + }, + "mergedBy":{ + "shape":"Arn", + "documentation":"

The Amazon Resource Name (ARN) of the user who merged the branches.

" + } + }, + "documentation":"

Returns information about a merge or potential merge between a source reference and a destination reference in a pull request.

" + }, + "MergeOptionRequiredException":{ + "type":"structure", + "members":{ + }, + "documentation":"

A merge option or strategy is required, and none was provided.

", + "exception":true + }, + "MergeOptionTypeEnum":{ + "type":"string", + "enum":["FAST_FORWARD_MERGE"] + }, + "MergePullRequestByFastForwardInput":{ + "type":"structure", + "required":[ + "pullRequestId", + "repositoryName" + ], + "members":{ + "pullRequestId":{ + "shape":"PullRequestId", + "documentation":"

The system-generated ID of the pull request. To get this ID, use ListPullRequests.

" + }, + "repositoryName":{ + "shape":"RepositoryName", + "documentation":"

The name of the repository where the pull request was created.

" + }, + "sourceCommitId":{ + "shape":"CommitId", + "documentation":"

The full commit ID of the original or updated commit in the pull request source branch. Pass this value if you want an exception thrown if the current commit ID of the tip of the source branch does not match this commit ID.

" + } + } + }, + "MergePullRequestByFastForwardOutput":{ + "type":"structure", + "members":{ + "pullRequest":{ + "shape":"PullRequest", + "documentation":"

Information about the specified pull request, including information about the merge.

" + } + } + }, + "Message":{"type":"string"}, + "Mode":{"type":"string"}, + "MultipleRepositoriesInPullRequestException":{ + "type":"structure", + "members":{ + }, + "documentation":"

You cannot include more than one repository in a pull request. Make sure you have specified only one repository name in your request, and then try again.

", + "exception":true + }, + "Name":{"type":"string"}, + "NextToken":{"type":"string"}, + "ObjectId":{"type":"string"}, + "OrderEnum":{ + "type":"string", + "enum":[ + "ascending", + "descending" + ] + }, + "ParentList":{ + "type":"list", + "member":{"shape":"ObjectId"} + }, + "Path":{"type":"string"}, + "PathDoesNotExistException":{ + "type":"structure", + "members":{ + }, + "documentation":"

The specified path does not exist.

", + "exception":true + }, + "PathRequiredException":{ + "type":"structure", + "members":{ + }, + "documentation":"

The filePath for a location cannot be empty or null.

", + "exception":true + }, + "Position":{"type":"long"}, + "PostCommentForComparedCommitInput":{ + "type":"structure", + "required":[ + "repositoryName", + "afterCommitId", + "content" + ], + "members":{ + "repositoryName":{ + "shape":"RepositoryName", + "documentation":"

The name of the repository where you want to post a comment on the comparison between commits.

" + }, + "beforeCommitId":{ + "shape":"CommitId", + "documentation":"

To establish the directionality of the comparison, the full commit ID of the 'before' commit.

" + }, + "afterCommitId":{ + "shape":"CommitId", + "documentation":"

To establish the directionality of the comparison, the full commit ID of the 'after' commit.

" + }, + "location":{ + "shape":"Location", + "documentation":"

The location of the comparison where you want to comment.

" + }, + "content":{ + "shape":"Content", + "documentation":"

The content of the comment you want to make.

" + }, + "clientRequestToken":{ + "shape":"ClientRequestToken", + "documentation":"

A unique, client-generated idempotency token that when provided in a request, ensures the request cannot be repeated with a changed parameter. If a request is received with the same parameters and a token is included, the request will return information about the initial request that used that token.

", + "idempotencyToken":true + } + } + }, + "PostCommentForComparedCommitOutput":{ + "type":"structure", + "members":{ + "repositoryName":{ + "shape":"RepositoryName", + "documentation":"

The name of the repository where you posted a comment on the comparison between commits.

" + }, + "beforeCommitId":{ + "shape":"CommitId", + "documentation":"

In the directionality you established, the full commit ID of the 'before' commit.

" + }, + "afterCommitId":{ + "shape":"CommitId", + "documentation":"

In the directionality you established, the full commit ID of the 'after' commit.

" + }, + "beforeBlobId":{ + "shape":"ObjectId", + "documentation":"

In the directionality you established, the blob ID of the 'before' blob.

" + }, + "afterBlobId":{ + "shape":"ObjectId", + "documentation":"

In the directionality you established, the blob ID of the 'after' blob.

" + }, + "location":{ + "shape":"Location", + "documentation":"

The location of the comment in the comparison between the two commits.

" + }, + "comment":{ + "shape":"Comment", + "documentation":"

The content of the comment you posted.

" + } + } + }, + "PostCommentForPullRequestInput":{ + "type":"structure", + "required":[ + "pullRequestId", + "repositoryName", + "beforeCommitId", + "afterCommitId", + "content" + ], + "members":{ + "pullRequestId":{ + "shape":"PullRequestId", + "documentation":"

The system-generated ID of the pull request. To get this ID, use ListPullRequests.

" + }, + "repositoryName":{ + "shape":"RepositoryName", + "documentation":"

The name of the repository where you want to post a comment on a pull request.

" + }, + "beforeCommitId":{ + "shape":"CommitId", + "documentation":"

The full commit ID of the commit in the destination branch that was the tip of the branch at the time the pull request was created.

" + }, + "afterCommitId":{ + "shape":"CommitId", + "documentation":"

The full commit ID of the commit in the source branch that is the current tip of the branch for the pull request when you post the comment.

" + }, + "location":{ + "shape":"Location", + "documentation":"

The location of the change where you want to post your comment. If no location is provided, the comment will be posted as a general comment on the pull request difference between the before commit ID and the after commit ID.

" + }, + "content":{ + "shape":"Content", + "documentation":"

The content of your comment on the change.

" + }, + "clientRequestToken":{ + "shape":"ClientRequestToken", + "documentation":"

A unique, client-generated idempotency token that when provided in a request, ensures the request cannot be repeated with a changed parameter. If a request is received with the same parameters and a token is included, the request will return information about the initial request that used that token.

", + "idempotencyToken":true + } + } + }, + "PostCommentForPullRequestOutput":{ + "type":"structure", + "members":{ + "repositoryName":{ + "shape":"RepositoryName", + "documentation":"

The name of the repository where you posted a comment on a pull request.

" + }, + "pullRequestId":{ + "shape":"PullRequestId", + "documentation":"

The system-generated ID of the pull request.

" + }, + "beforeCommitId":{ + "shape":"CommitId", + "documentation":"

The full commit ID of the commit in the source branch used to create the pull request, or in the case of an updated pull request, the full commit ID of the commit used to update the pull request.

" + }, + "afterCommitId":{ + "shape":"CommitId", + "documentation":"

The full commit ID of the commit in the destination branch where the pull request will be merged.

" + }, + "beforeBlobId":{ + "shape":"ObjectId", + "documentation":"

In the directionality of the pull request, the blob ID of the 'before' blob.

" + }, + "afterBlobId":{ + "shape":"ObjectId", + "documentation":"

In the directionality of the pull request, the blob ID of the 'after' blob.

" + }, + "location":{ + "shape":"Location", + "documentation":"

The location of the change where you posted your comment.

" + }, + "comment":{ + "shape":"Comment", + "documentation":"

The content of the comment you posted.

" + } + } + }, + "PostCommentReplyInput":{ + "type":"structure", + "required":[ + "inReplyTo", + "content" + ], + "members":{ + "inReplyTo":{ + "shape":"CommentId", + "documentation":"

The system-generated ID of the comment to which you want to reply. To get this ID, use GetCommentsForComparedCommit or GetCommentsForPullRequest.

" + }, + "clientRequestToken":{ + "shape":"ClientRequestToken", + "documentation":"

A unique, client-generated idempotency token that when provided in a request, ensures the request cannot be repeated with a changed parameter. If a request is received with the same parameters and a token is included, the request will return information about the initial request that used that token.

", + "idempotencyToken":true + }, + "content":{ + "shape":"Content", + "documentation":"

The contents of your reply to a comment.

" + } + } + }, + "PostCommentReplyOutput":{ + "type":"structure", + "members":{ + "comment":{ + "shape":"Comment", + "documentation":"

Information about the reply to a comment.

" + } + } + }, + "PullRequest":{ + "type":"structure", + "members":{ + "pullRequestId":{ + "shape":"PullRequestId", + "documentation":"

The system-generated ID of the pull request.

" + }, + "title":{ + "shape":"Title", + "documentation":"

The user-defined title of the pull request. This title is displayed in the list of pull requests to other users of the repository.

" + }, + "description":{ + "shape":"Description", + "documentation":"

The user-defined description of the pull request. This description can be used to clarify what should be reviewed and other details of the request.

" + }, + "lastActivityDate":{ + "shape":"LastModifiedDate", + "documentation":"

The day and time of the last user or system activity on the pull request, in timestamp format.

" + }, + "creationDate":{ + "shape":"CreationDate", + "documentation":"

The date and time the pull request was originally created, in timestamp format.

" + }, + "pullRequestStatus":{ + "shape":"PullRequestStatusEnum", + "documentation":"

The status of the pull request. Pull request status can only change from OPEN to CLOSED.

" + }, + "authorArn":{ + "shape":"Arn", + "documentation":"

The Amazon Resource Name (ARN) of the user who created the pull request.

" + }, + "pullRequestTargets":{ + "shape":"PullRequestTargetList", + "documentation":"

The targets of the pull request, including the source branch and destination branch for the pull request.

" + }, + "clientRequestToken":{ + "shape":"ClientRequestToken", + "documentation":"

A unique, client-generated idempotency token that when provided in a request, ensures the request cannot be repeated with a changed parameter. If a request is received with the same parameters and a token is included, the request will return information about the initial request that used that token.

" + } + }, + "documentation":"

Returns information about a pull request.

" }, - "InvalidRepositoryTriggerCustomDataException":{ + "PullRequestAlreadyClosedException":{ "type":"structure", "members":{ }, - "documentation":"

The custom data provided for the trigger is not valid.

", + "documentation":"

The pull request status cannot be updated because it is already closed.

", "exception":true }, - "InvalidRepositoryTriggerDestinationArnException":{ + "PullRequestDoesNotExistException":{ "type":"structure", "members":{ }, - "documentation":"

The Amazon Resource Name (ARN) for the trigger is not valid for the specified destination. The most common reason for this error is that the ARN does not meet the requirements for the service type.

", + "documentation":"

The pull request ID could not be found. Make sure that you have specified the correct repository name and pull request ID, and then try again.

", "exception":true }, - "InvalidRepositoryTriggerEventsException":{ + "PullRequestEvent":{ "type":"structure", "members":{ + "pullRequestId":{ + "shape":"PullRequestId", + "documentation":"

The system-generated ID of the pull request.

" + }, + "eventDate":{ + "shape":"EventDate", + "documentation":"

The day and time of the pull request event, in timestamp format.

" + }, + "pullRequestEventType":{ + "shape":"PullRequestEventType", + "documentation":"

The type of the pull request event, for example a status change event (PULL_REQUEST_STATUS_CHANGED) or update event (PULL_REQUEST_SOURCE_REFERENCE_UPDATED).

" + }, + "actorArn":{ + "shape":"Arn", + "documentation":"

The Amazon Resource Name (ARN) of the user whose actions resulted in the event. Examples include updating the pull request with additional commits or changing the status of a pull request.

" + }, + "pullRequestStatusChangedEventMetadata":{ + "shape":"PullRequestStatusChangedEventMetadata", + "documentation":"

Information about the change in status for the pull request event.

" + }, + "pullRequestSourceReferenceUpdatedEventMetadata":{ + "shape":"PullRequestSourceReferenceUpdatedEventMetadata", + "documentation":"

Information about the updated source branch for the pull request event.

" + }, + "pullRequestMergedStateChangedEventMetadata":{ + "shape":"PullRequestMergedStateChangedEventMetadata", + "documentation":"

Information about the change in mergeability state for the pull request event.

" + } }, - "documentation":"

One or more events specified for the trigger is not valid. Check to make sure that all events specified match the requirements for allowed events.

", - "exception":true + "documentation":"

Returns information about a pull request event.

" }, - "InvalidRepositoryTriggerNameException":{ - "type":"structure", - "members":{ - }, - "documentation":"

The name of the trigger is not valid.

", - "exception":true + "PullRequestEventList":{ + "type":"list", + "member":{"shape":"PullRequestEvent"} }, - "InvalidRepositoryTriggerRegionException":{ - "type":"structure", - "members":{ - }, - "documentation":"

The region for the trigger target does not match the region for the repository. Triggers must be created in the same region as the target for the trigger.

", - "exception":true + "PullRequestEventType":{ + "type":"string", + "enum":[ + "PULL_REQUEST_CREATED", + "PULL_REQUEST_STATUS_CHANGED", + "PULL_REQUEST_SOURCE_REFERENCE_UPDATED", + "PULL_REQUEST_MERGE_STATE_CHANGED" + ] }, - "InvalidSortByException":{ + "PullRequestId":{"type":"string"}, + "PullRequestIdList":{ + "type":"list", + "member":{"shape":"PullRequestId"} + }, + "PullRequestIdRequiredException":{ "type":"structure", "members":{ }, - "documentation":"

The specified sort by value is not valid.

", + "documentation":"

A pull request ID is required, but none was provided.

", "exception":true }, - "LastModifiedDate":{"type":"timestamp"}, - "Limit":{ - "type":"integer", - "box":true - }, - "ListBranchesInput":{ + "PullRequestMergedStateChangedEventMetadata":{ "type":"structure", - "required":["repositoryName"], "members":{ "repositoryName":{ "shape":"RepositoryName", - "documentation":"

The name of the repository that contains the branches.

" + "documentation":"

The name of the repository where the pull request was created.

" }, - "nextToken":{ - "shape":"NextToken", - "documentation":"

An enumeration token that allows the operation to batch the results.

" - } - }, - "documentation":"

Represents the input of a list branches operation.

" - }, - "ListBranchesOutput":{ - "type":"structure", - "members":{ - "branches":{ - "shape":"BranchNameList", - "documentation":"

The list of branch names.

" + "destinationReference":{ + "shape":"ReferenceName", + "documentation":"

The name of the branch that the pull request will be merged into.

" }, - "nextToken":{ - "shape":"NextToken", - "documentation":"

An enumeration token that returns the batch of the results.

" + "mergeMetadata":{ + "shape":"MergeMetadata", + "documentation":"

Information about the merge state change event.

" } }, - "documentation":"

Represents the output of a list branches operation.

" + "documentation":"

Returns information about the change in the merge state for a pull request event.

" }, - "ListRepositoriesInput":{ + "PullRequestSourceReferenceUpdatedEventMetadata":{ "type":"structure", "members":{ - "nextToken":{ - "shape":"NextToken", - "documentation":"

An enumeration token that allows the operation to batch the results of the operation. Batch sizes are 1,000 for list repository operations. When the client sends the token back to AWS CodeCommit, another page of 1,000 records is retrieved.

" + "repositoryName":{ + "shape":"RepositoryName", + "documentation":"

The name of the repository where the pull request was updated.

" }, - "sortBy":{ - "shape":"SortByEnum", - "documentation":"

The criteria used to sort the results of a list repositories operation.

" + "beforeCommitId":{ + "shape":"CommitId", + "documentation":"

The full commit ID of the commit in the destination branch that was the tip of the branch at the time the pull request was updated.

" }, - "order":{ - "shape":"OrderEnum", - "documentation":"

The order in which to sort the results of a list repositories operation.

" + "afterCommitId":{ + "shape":"CommitId", + "documentation":"

The full commit ID of the commit in the source branch that was the tip of the branch at the time the pull request was updated.

" } }, - "documentation":"

Represents the input of a list repositories operation.

" + "documentation":"

Information about an update to the source branch of a pull request.

" }, - "ListRepositoriesOutput":{ + "PullRequestStatusChangedEventMetadata":{ "type":"structure", "members":{ - "repositories":{ - "shape":"RepositoryNameIdPairList", - "documentation":"

Lists the repositories called by the list repositories operation.

" - }, - "nextToken":{ - "shape":"NextToken", - "documentation":"

An enumeration token that allows the operation to batch the results of the operation. Batch sizes are 1,000 for list repository operations. When the client sends the token back to AWS CodeCommit, another page of 1,000 records is retrieved.

" + "pullRequestStatus":{ + "shape":"PullRequestStatusEnum", + "documentation":"

The changed status of the pull request.

" } }, - "documentation":"

Represents the output of a list repositories operation.

" + "documentation":"

Information about a change to the status of a pull request.

" }, - "MaximumBranchesExceededException":{ - "type":"structure", - "members":{ - }, - "documentation":"

The number of branches for the trigger was exceeded.

", - "exception":true + "PullRequestStatusEnum":{ + "type":"string", + "enum":[ + "OPEN", + "CLOSED" + ] }, - "MaximumRepositoryNamesExceededException":{ + "PullRequestStatusRequiredException":{ "type":"structure", "members":{ }, - "documentation":"

The maximum number of allowed repository names was exceeded. Currently, this number is 25.

", + "documentation":"

A pull request status is required, but none was provided.

", "exception":true }, - "MaximumRepositoryTriggersExceededException":{ + "PullRequestTarget":{ "type":"structure", "members":{ + "repositoryName":{ + "shape":"RepositoryName", + "documentation":"

The name of the repository that contains the pull request source and destination branches.

" + }, + "sourceReference":{ + "shape":"ReferenceName", + "documentation":"

The branch of the repository that contains the changes for the pull request. Also known as the source branch.

" + }, + "destinationReference":{ + "shape":"ReferenceName", + "documentation":"

The branch of the repository into which the pull request changes will be merged. Also known as the destination branch.

" + }, + "destinationCommit":{ + "shape":"CommitId", + "documentation":"

The full commit ID that is the tip of the destination branch. This is the commit where the pull request was or will be merged.

" + }, + "sourceCommit":{ + "shape":"CommitId", + "documentation":"

The full commit ID of the tip of the source branch used to create the pull request. If the pull request branch is updated by a push while the pull request is open, the commit ID will change to reflect the new tip of the branch.

" + }, + "mergeMetadata":{ + "shape":"MergeMetadata", + "documentation":"

Returns metadata about the state of the merge, including whether the merge has been made.

" + } }, - "documentation":"

The number of triggers allowed for the repository was exceeded.

", - "exception":true - }, - "Message":{"type":"string"}, - "Mode":{"type":"string"}, - "Name":{"type":"string"}, - "NextToken":{"type":"string"}, - "ObjectId":{"type":"string"}, - "OrderEnum":{ - "type":"string", - "enum":[ - "ascending", - "descending" - ] + "documentation":"

Returns information about a pull request target.

" }, - "ParentList":{ + "PullRequestTargetList":{ "type":"list", - "member":{"shape":"ObjectId"} - }, - "Path":{"type":"string"}, - "PathDoesNotExistException":{ - "type":"structure", - "members":{ - }, - "documentation":"

The specified path does not exist.

", - "exception":true + "member":{"shape":"PullRequestTarget"} }, "PutRepositoryTriggersInput":{ "type":"structure", @@ -1144,6 +2770,35 @@ }, "documentation":"

Represents the output of a put repository triggers operation.

" }, + "ReferenceDoesNotExistException":{ + "type":"structure", + "members":{ + }, + "documentation":"

The specified reference does not exist. You must provide a full commit ID.

", + "exception":true + }, + "ReferenceName":{"type":"string"}, + "ReferenceNameRequiredException":{ + "type":"structure", + "members":{ + }, + "documentation":"

A reference name is required, but none was provided.

", + "exception":true + }, + "ReferenceTypeNotSupportedException":{ + "type":"structure", + "members":{ + }, + "documentation":"

The specified reference is not a supported type.

", + "exception":true + }, + "RelativeFileVersionEnum":{ + "type":"string", + "enum":[ + "BEFORE", + "AFTER" + ] + }, "RepositoryDescription":{ "type":"string", "max":1000 @@ -1262,6 +2917,13 @@ "documentation":"

A repository names object is required but was not specified.

", "exception":true }, + "RepositoryNotAssociatedWithPullRequestException":{ + "type":"structure", + "members":{ + }, + "documentation":"

The repository does not contain any pull requests with that pull request ID. Check to make sure you have provided the correct repository name for the pull request.

", + "exception":true + }, "RepositoryNotFoundList":{ "type":"list", "member":{"shape":"RepositoryName"} @@ -1288,7 +2950,7 @@ }, "branches":{ "shape":"BranchNameList", - "documentation":"

The branches that will be included in the trigger configuration. If no branches are specified, the trigger will apply to all branches.

" + "documentation":"

The branches that will be included in the trigger configuration. If you specify an empty array, the trigger will apply to all branches.

While no content is required in the array, you must include the array itself.
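As an illustration of the empty-array requirement, here is a hedged sketch using the AWS SDK for Java 2.x types generated from this model; the builder names follow the usual codegen conventions, and the trigger name and destination ARN are placeholders for fields that are not part of this hunk.

```java
import java.util.Collections;
import software.amazon.awssdk.services.codecommit.model.RepositoryTrigger;

public class TriggerAllBranchesSketch {
    public static void main(String[] args) {
        // Passing an explicitly empty branches list means "apply to all branches";
        // the list itself must still be supplied on the trigger.
        RepositoryTrigger trigger = RepositoryTrigger.builder()
                .name("AllBranchesTrigger")                                   // placeholder trigger name
                .destinationArn("arn:aws:sns:us-east-1:111122223333:MyTopic") // placeholder SNS topic ARN
                .branches(Collections.emptyList())                            // empty array == all branches
                .build();
        System.out.println(trigger.branches());
    }
}
```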

" }, "events":{ "shape":"RepositoryTriggerEventList", @@ -1382,6 +3044,53 @@ "lastModifiedDate" ] }, + "SourceAndDestinationAreSameException":{ + "type":"structure", + "members":{ + }, + "documentation":"

The source branch and the destination branch for the pull request are the same. You must specify different branches for the source and destination.

", + "exception":true + }, + "Target":{ + "type":"structure", + "required":[ + "repositoryName", + "sourceReference" + ], + "members":{ + "repositoryName":{ + "shape":"RepositoryName", + "documentation":"

The name of the repository that contains the pull request.

" + }, + "sourceReference":{ + "shape":"ReferenceName", + "documentation":"

The branch of the repository that contains the changes for the pull request. Also known as the source branch.

" + }, + "destinationReference":{ + "shape":"ReferenceName", + "documentation":"

The branch of the repository into which the pull request changes will be merged. Also known as the destination branch.

" + } + }, + "documentation":"

Returns information about a target for a pull request.
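To make the target structure concrete, here is a minimal sketch built with the AWS SDK for Java 2.x model classes generated from this shape (builder names assumed from the usual codegen conventions; repository and branch names are placeholders). Only repositoryName and sourceReference are required by the model; destinationReference is optional.

```java
import software.amazon.awssdk.services.codecommit.model.Target;

public class PullRequestTargetSketch {
    public static void main(String[] args) {
        Target target = Target.builder()
                .repositoryName("MyDemoRepo")              // repository that holds both branches
                .sourceReference("refs/heads/feature-x")   // source branch that contains the changes
                .destinationReference("refs/heads/master") // destination branch to merge into
                .build();
        System.out.println(target);
    }
}
```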

" + }, + "TargetList":{ + "type":"list", + "member":{"shape":"Target"} + }, + "TargetRequiredException":{ + "type":"structure", + "members":{ + }, + "documentation":"

A pull request target is required. It cannot be empty or null. A pull request target must contain the full values for the repository name, source branch, and destination branch for the pull request.

", + "exception":true + }, + "TargetsRequiredException":{ + "type":"structure", + "members":{ + }, + "documentation":"

An array of target objects is required. It cannot be empty or null.

", + "exception":true + }, "TestRepositoryTriggersInput":{ "type":"structure", "required":[ @@ -1414,6 +3123,57 @@ }, "documentation":"

Represents the output of a test repository triggers operation.

" }, + "TipOfSourceReferenceIsDifferentException":{ + "type":"structure", + "members":{ + }, + "documentation":"

The tip of the source branch in the destination repository does not match the tip of the source branch specified in your request. The pull request might have been updated. Make sure that you have the latest changes.

", + "exception":true + }, + "TipsDivergenceExceededException":{ + "type":"structure", + "members":{ + }, + "documentation":"

The divergence between the tips of the provided commit specifiers is too great to determine whether there might be any merge conflicts. Locally compare the specifiers using git diff or a diff tool.

", + "exception":true + }, + "Title":{ + "type":"string", + "max":150 + }, + "TitleRequiredException":{ + "type":"structure", + "members":{ + }, + "documentation":"

A pull request title is required. It cannot be empty or null.

", + "exception":true + }, + "UpdateCommentInput":{ + "type":"structure", + "required":[ + "commentId", + "content" + ], + "members":{ + "commentId":{ + "shape":"CommentId", + "documentation":"

The system-generated ID of the comment you want to update. To get this ID, use GetCommentsForComparedCommit or GetCommentsForPullRequest.

" + }, + "content":{ + "shape":"Content", + "documentation":"

The updated content with which you want to replace the existing content of the comment.
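For reference, a minimal sketch of the corresponding call through the generated AWS SDK for Java 2.x client (client, request, and response names assumed from the usual codegen conventions; the comment ID is a placeholder you would obtain from GetCommentsForComparedCommit or GetCommentsForPullRequest):

```java
import software.amazon.awssdk.services.codecommit.CodeCommitClient;
import software.amazon.awssdk.services.codecommit.model.UpdateCommentRequest;
import software.amazon.awssdk.services.codecommit.model.UpdateCommentResponse;

public class UpdateCommentSketch {
    public static void main(String[] args) {
        try (CodeCommitClient codeCommit = CodeCommitClient.create()) {
            UpdateCommentResponse response = codeCommit.updateComment(
                    UpdateCommentRequest.builder()
                            .commentId("ff30b348EXAMPLE")                     // placeholder system-generated ID
                            .content("Updated: please also add a unit test.") // replaces the existing content
                            .build());
            System.out.println(response.comment()); // the updated comment
        }
    }
}
```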

" + } + } + }, + "UpdateCommentOutput":{ + "type":"structure", + "members":{ + "comment":{ + "shape":"Comment", + "documentation":"

Information about the updated comment.

" + } + } + }, "UpdateDefaultBranchInput":{ "type":"structure", "required":[ @@ -1432,6 +3192,87 @@ }, "documentation":"

Represents the input of an update default branch operation.

" }, + "UpdatePullRequestDescriptionInput":{ + "type":"structure", + "required":[ + "pullRequestId", + "description" + ], + "members":{ + "pullRequestId":{ + "shape":"PullRequestId", + "documentation":"

The system-generated ID of the pull request. To get this ID, use ListPullRequests.

" + }, + "description":{ + "shape":"Description", + "documentation":"

The updated content of the description for the pull request. This content will replace the existing description.

" + } + } + }, + "UpdatePullRequestDescriptionOutput":{ + "type":"structure", + "required":["pullRequest"], + "members":{ + "pullRequest":{ + "shape":"PullRequest", + "documentation":"

Information about the updated pull request.

" + } + } + }, + "UpdatePullRequestStatusInput":{ + "type":"structure", + "required":[ + "pullRequestId", + "pullRequestStatus" + ], + "members":{ + "pullRequestId":{ + "shape":"PullRequestId", + "documentation":"

The system-generated ID of the pull request. To get this ID, use ListPullRequests.

" + }, + "pullRequestStatus":{ + "shape":"PullRequestStatusEnum", + "documentation":"

The status of the pull request. The only valid operations are to update the status from OPEN to OPEN, from OPEN to CLOSED, or from CLOSED to CLOSED.
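A hedged sketch of closing a pull request with the AWS SDK for Java 2.x client generated from this model (client and builder names assumed from the usual codegen conventions; the pull request ID is a placeholder taken from ListPullRequests):

```java
import software.amazon.awssdk.services.codecommit.CodeCommitClient;
import software.amazon.awssdk.services.codecommit.model.PullRequestStatusEnum;
import software.amazon.awssdk.services.codecommit.model.UpdatePullRequestStatusRequest;
import software.amazon.awssdk.services.codecommit.model.UpdatePullRequestStatusResponse;

public class ClosePullRequestSketch {
    public static void main(String[] args) {
        try (CodeCommitClient codeCommit = CodeCommitClient.create()) {
            UpdatePullRequestStatusResponse response = codeCommit.updatePullRequestStatus(
                    UpdatePullRequestStatusRequest.builder()
                            .pullRequestId("42")                             // placeholder ID from ListPullRequests
                            .pullRequestStatus(PullRequestStatusEnum.CLOSED) // OPEN -> CLOSED is a valid transition
                            .build());
            System.out.println(response.pullRequest()); // the pull request with its updated status
        }
    }
}
```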

" + } + } + }, + "UpdatePullRequestStatusOutput":{ + "type":"structure", + "required":["pullRequest"], + "members":{ + "pullRequest":{ + "shape":"PullRequest", + "documentation":"

Information about the pull request.

" + } + } + }, + "UpdatePullRequestTitleInput":{ + "type":"structure", + "required":[ + "pullRequestId", + "title" + ], + "members":{ + "pullRequestId":{ + "shape":"PullRequestId", + "documentation":"

The system-generated ID of the pull request. To get this ID, use ListPullRequests.

" + }, + "title":{ + "shape":"Title", + "documentation":"

The updated title of the pull request. This will replace the existing title.

" + } + } + }, + "UpdatePullRequestTitleOutput":{ + "type":"structure", + "required":["pullRequest"], + "members":{ + "pullRequest":{ + "shape":"PullRequest", + "documentation":"

Information about the updated pull request.

" + } + } + }, "UpdateRepositoryDescriptionInput":{ "type":"structure", "required":["repositoryName"], @@ -1485,5 +3326,5 @@ }, "blob":{"type":"blob"} }, - "documentation":"AWS CodeCommit

This is the AWS CodeCommit API Reference. This reference provides descriptions of the operations and data types for AWS CodeCommit API along with usage examples.

You can use the AWS CodeCommit API to work with the following objects:

Repositories, by calling the following:

Branches, by calling the following:

Information about committed code in a repository, by calling the following:

Triggers, by calling the following:

For information about how to use AWS CodeCommit, see the AWS CodeCommit User Guide.

" + "documentation":"AWS CodeCommit

This is the AWS CodeCommit API Reference. This reference provides descriptions of the operations and data types for AWS CodeCommit API along with usage examples.

You can use the AWS CodeCommit API to work with the following objects:

Repositories, by calling the following:

Branches, by calling the following:

Information about committed code in a repository, by calling the following:

Pull requests, by calling the following:

Information about comments in a repository, by calling the following:

Triggers, by calling the following:

For information about how to use AWS CodeCommit, see the AWS CodeCommit User Guide.

" } diff --git a/services/codedeploy/src/main/resources/codegen-resources/service-2.json b/services/codedeploy/src/main/resources/codegen-resources/service-2.json index b9fcceecab5c..37166796f7dc 100644 --- a/services/codedeploy/src/main/resources/codegen-resources/service-2.json +++ b/services/codedeploy/src/main/resources/codegen-resources/service-2.json @@ -240,7 +240,10 @@ {"shape":"InvalidAutoRollbackConfigException"}, {"shape":"InvalidLoadBalancerInfoException"}, {"shape":"InvalidDeploymentStyleException"}, - {"shape":"InvalidBlueGreenDeploymentConfigurationException"} + {"shape":"InvalidBlueGreenDeploymentConfigurationException"}, + {"shape":"InvalidEC2TagCombinationException"}, + {"shape":"InvalidOnPremisesTagCombinationException"}, + {"shape":"TagSetListLimitExceededException"} ], "documentation":"

Creates a deployment group to which application revisions will be deployed.

" }, @@ -683,7 +686,10 @@ {"shape":"InvalidAutoRollbackConfigException"}, {"shape":"InvalidLoadBalancerInfoException"}, {"shape":"InvalidDeploymentStyleException"}, - {"shape":"InvalidBlueGreenDeploymentConfigurationException"} + {"shape":"InvalidBlueGreenDeploymentConfigurationException"}, + {"shape":"InvalidEC2TagCombinationException"}, + {"shape":"InvalidOnPremisesTagCombinationException"}, + {"shape":"TagSetListLimitExceededException"} ], "documentation":"

Changes information about a deployment group.

" } @@ -1122,7 +1128,10 @@ }, "CreateDeploymentConfigInput":{ "type":"structure", - "required":["deploymentConfigName"], + "required":[ + "deploymentConfigName", + "minimumHealthyHosts" + ], "members":{ "deploymentConfigName":{ "shape":"DeploymentConfigName", @@ -1167,11 +1176,11 @@ }, "ec2TagFilters":{ "shape":"EC2TagFilterList", - "documentation":"

The Amazon EC2 tags on which to filter. The deployment group will include EC2 instances with any of the specified tags.

" + "documentation":"

The Amazon EC2 tags on which to filter. The deployment group will include EC2 instances with any of the specified tags. Cannot be used in the same call as ec2TagSet.

" }, "onPremisesInstanceTagFilters":{ "shape":"TagFilterList", - "documentation":"

The on-premises instance tags on which to filter. The deployment group will include on-premises instances with any of the specified tags.

" + "documentation":"

The on-premises instance tags on which to filter. The deployment group will include on-premises instances with any of the specified tags. Cannot be used in the same call as OnPremisesTagSet.

" }, "autoScalingGroups":{ "shape":"AutoScalingGroupNameList", @@ -1204,6 +1213,14 @@ "loadBalancerInfo":{ "shape":"LoadBalancerInfo", "documentation":"

Information about the load balancer used in a deployment.

" + }, + "ec2TagSet":{ + "shape":"EC2TagSet", + "documentation":"

Information about groups of tags applied to EC2 instances. The deployment group will include only EC2 instances identified by all the tag groups. Cannot be used in the same call as ec2TagFilters.

" + }, + "onPremisesTagSet":{ + "shape":"OnPremisesTagSet", + "documentation":"

Information about groups of tags applied to on-premises instances. The deployment group will include only on-premises instances identified by all the tag groups. Cannot be used in the same call as onPremisesInstanceTagFilters.

" } }, "documentation":"

Represents the input of a CreateDeploymentGroup operation.

" @@ -1450,11 +1467,11 @@ }, "ec2TagFilters":{ "shape":"EC2TagFilterList", - "documentation":"

The Amazon EC2 tags on which to filter.

" + "documentation":"

The Amazon EC2 tags on which to filter. The deployment group includes EC2 instances with any of the specified tags.

" }, "onPremisesInstanceTagFilters":{ "shape":"TagFilterList", - "documentation":"

The on-premises instance tags on which to filter.

" + "documentation":"

The on-premises instance tags on which to filter. The deployment group includes on-premises instances with any of the specified tags.

" }, "autoScalingGroups":{ "shape":"AutoScalingGroupList", @@ -1499,6 +1516,14 @@ "lastAttemptedDeployment":{ "shape":"LastDeploymentInfo", "documentation":"

Information about the most recent attempted deployment to the deployment group.

" + }, + "ec2TagSet":{ + "shape":"EC2TagSet", + "documentation":"

Information about groups of tags applied to an EC2 instance. The deployment group includes only EC2 instances identified by all the tag groups. Cannot be used in the same call as ec2TagFilters.

" + }, + "onPremisesTagSet":{ + "shape":"OnPremisesTagSet", + "documentation":"

Information about groups of tags applied to an on-premises instance. The deployment group includes only on-premises instances identified by all the tag groups. Cannot be used in the same call as onPremisesInstanceTagFilters.

" } }, "documentation":"

Information about a deployment group.

" @@ -1840,15 +1865,29 @@ "KEY_AND_VALUE" ] }, + "EC2TagSet":{ + "type":"structure", + "members":{ + "ec2TagSetList":{ + "shape":"EC2TagSetList", + "documentation":"

A list containing other lists of EC2 instance tag groups. In order for an instance to be included in the deployment group, it must be identified by all the tag groups in the list.
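To make the AND-of-ORs rule concrete, the following self-contained sketch (plain Java, no SDK calls) mirrors the documented behaviour: each group is satisfied by any one of its tag filters (as with ec2TagFilters), and an instance is included only when every group in the list is satisfied. It assumes KEY_AND_VALUE filters and uses made-up tag keys and values.

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class Ec2TagSetMatchingSketch {

    /** One KEY_AND_VALUE tag filter: both the key and the value must match. */
    static final class TagFilter {
        final String key;
        final String value;
        TagFilter(String key, String value) { this.key = key; this.value = value; }
        boolean matches(Map<String, String> instanceTags) {
            return value.equals(instanceTags.get(key));
        }
    }

    /** Every group must match (AND); within a group, any single filter is enough (OR). */
    static boolean matchesTagSet(Map<String, String> instanceTags,
                                 List<List<TagFilter>> ec2TagSetList) {
        return ec2TagSetList.stream()
                .allMatch(group -> group.stream().anyMatch(f -> f.matches(instanceTags)));
    }

    public static void main(String[] args) {
        // ec2TagSetList with two groups:
        //   group 1: Environment=Prod OR Environment=Staging
        //   group 2: Team=Payments
        List<List<TagFilter>> ec2TagSetList = Arrays.asList(
                Arrays.asList(new TagFilter("Environment", "Prod"),
                              new TagFilter("Environment", "Staging")),
                Arrays.asList(new TagFilter("Team", "Payments")));

        Map<String, String> instanceTags = new HashMap<>();
        instanceTags.put("Environment", "Staging");
        instanceTags.put("Team", "Payments");

        System.out.println(matchesTagSet(instanceTags, ec2TagSetList)); // prints: true
    }
}
```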

" + } + }, + "documentation":"

Information about groups of EC2 instance tags.

" + }, + "EC2TagSetList":{ + "type":"list", + "member":{"shape":"EC2TagFilterList"} + }, "ELBInfo":{ "type":"structure", "members":{ "name":{ "shape":"ELBName", - "documentation":"

For blue/green deployments, the name of the load balancer that will be used to route traffic from original instances to replacement instances in a blue/green deployment. For in-place deployments, the name of the load balancer that instances are deregistered from so they are not serving traffic during a deployment, and then re-registered with after the deployment completes.

" + "documentation":"

For blue/green deployments, the name of the load balancer that will be used to route traffic from original instances to replacement instances in a blue/green deployment. For in-place deployments, the name of the load balancer that instances are deregistered from, so they are not serving traffic during a deployment, and then re-registered with after the deployment completes.

" } }, - "documentation":"

Information about a load balancer in Elastic Load Balancing to use in a deployment.

" + "documentation":"

Information about a load balancer in Elastic Load Balancing to use in a deployment. Instances are registered directly with a load balancer, and traffic is routed to the load balancer.

" }, "ELBInfoList":{ "type":"list", @@ -2428,6 +2467,13 @@ "documentation":"

An invalid deployment style was specified. Valid deployment types include \"IN_PLACE\" and \"BLUE_GREEN\". Valid deployment options include \"WITH_TRAFFIC_CONTROL\" and \"WITHOUT_TRAFFIC_CONTROL\".

", "exception":true }, + "InvalidEC2TagCombinationException":{ + "type":"structure", + "members":{ + }, + "documentation":"

A call was submitted that specified both Ec2TagFilters and Ec2TagSet, but only one of these data types can be used in a single call.

", + "exception":true + }, "InvalidEC2TagException":{ "type":"structure", "members":{ @@ -2505,6 +2551,13 @@ "documentation":"

The next token was specified in an invalid format.

", "exception":true }, + "InvalidOnPremisesTagCombinationException":{ + "type":"structure", + "members":{ + }, + "documentation":"

A call was submitted that specified both OnPremisesTagFilters and OnPremisesTagSet, but only one of these data types can be used in a single call.

", + "exception":true + }, "InvalidOperationException":{ "type":"structure", "members":{ @@ -2942,10 +2995,14 @@ "members":{ "elbInfoList":{ "shape":"ELBInfoList", - "documentation":"

An array containing information about the load balancer in Elastic Load Balancing to use in a deployment.

" + "documentation":"

An array containing information about the load balancer to use for load balancing in a deployment. In Elastic Load Balancing, load balancers are used with Classic Load Balancers.

" + }, + "targetGroupInfoList":{ + "shape":"TargetGroupInfoList", + "documentation":"

An array containing information about the target group to use for load balancing in a deployment. In Elastic Load Balancing, target groups are used with Application Load Balancers.

" } }, - "documentation":"

Information about the load balancer used in a deployment.

" + "documentation":"

Information about the Elastic Load Balancing load balancer or target group used in a deployment.

" }, "LogTail":{"type":"string"}, "Message":{"type":"string"}, @@ -2980,6 +3037,20 @@ }, "NextToken":{"type":"string"}, "NullableBoolean":{"type":"boolean"}, + "OnPremisesTagSet":{ + "type":"structure", + "members":{ + "onPremisesTagSetList":{ + "shape":"OnPremisesTagSetList", + "documentation":"

A list containing other lists of on-premises instance tag groups. In order for an instance to be included in the deployment group, it must be identified by all the tag groups in the list.

" + } + }, + "documentation":"

Information about groups of on-premises instance tags.

" + }, + "OnPremisesTagSetList":{ + "type":"list", + "member":{"shape":"TagFilterList"} + }, "RegisterApplicationRevisionInput":{ "type":"structure", "required":[ @@ -3284,16 +3355,42 @@ "documentation":"

A tag was not specified.

", "exception":true }, + "TagSetListLimitExceededException":{ + "type":"structure", + "members":{ + }, + "documentation":"

The number of tag groups included in the tag set list exceeded the maximum allowed limit of 3.

", + "exception":true + }, + "TargetGroupInfo":{ + "type":"structure", + "members":{ + "name":{ + "shape":"TargetGroupName", + "documentation":"

For blue/green deployments, the name of the target group that instances in the original environment are deregistered from, and that instances in the replacement environment are registered with. For in-place deployments, the name of the target group that instances are deregistered from, so they are not serving traffic during a deployment, and then re-registered with after the deployment completes.

" + } + }, + "documentation":"

Information about a target group in Elastic Load Balancing to use in a deployment. Instances are registered as targets in a target group, and traffic is routed to the target group.
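A hedged sketch of expressing this with the AWS SDK for Java 2.x builders generated from this model (builder names assumed from the usual codegen conventions; the target group name is a placeholder). For a Classic Load Balancer, elbInfoList would be populated instead.

```java
import software.amazon.awssdk.services.codedeploy.model.LoadBalancerInfo;
import software.amazon.awssdk.services.codedeploy.model.TargetGroupInfo;

public class TargetGroupInfoSketch {
    public static void main(String[] args) {
        // Application Load Balancers are referenced by target group name;
        // Classic Load Balancers would go in elbInfoList instead.
        LoadBalancerInfo loadBalancerInfo = LoadBalancerInfo.builder()
                .targetGroupInfoList(TargetGroupInfo.builder()
                        .name("my-deployment-target-group") // placeholder target group name
                        .build())
                .build();
        System.out.println(loadBalancerInfo);
    }
}
```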

" + }, + "TargetGroupInfoList":{ + "type":"list", + "member":{"shape":"TargetGroupInfo"} + }, + "TargetGroupName":{"type":"string"}, "TargetInstances":{ "type":"structure", "members":{ "tagFilters":{ "shape":"EC2TagFilterList", - "documentation":"

The tag filter key, type, and value used to identify Amazon EC2 instances in a replacement environment for a blue/green deployment.

" + "documentation":"

The tag filter key, type, and value used to identify Amazon EC2 instances in a replacement environment for a blue/green deployment. Cannot be used in the same call as ec2TagSet.

" }, "autoScalingGroups":{ "shape":"AutoScalingGroupNameList", "documentation":"

The names of one or more Auto Scaling groups to identify a replacement environment for a blue/green deployment.

" + }, + "ec2TagSet":{ + "shape":"EC2TagSet", + "documentation":"

Information about the groups of EC2 instance tags that an instance must be identified by in order for it to be included in the replacement environment for a blue/green deployment. Cannot be used in the same call as tagFilters.

" } }, "documentation":"

Information about the instances to be used in the replacement environment in a blue/green deployment.

" @@ -3446,6 +3543,14 @@ "loadBalancerInfo":{ "shape":"LoadBalancerInfo", "documentation":"

Information about the load balancer used in a deployment.

" + }, + "ec2TagSet":{ + "shape":"EC2TagSet", + "documentation":"

Information about groups of tags applied to EC2 instances. The deployment group will include only EC2 instances identified by all the tag groups.

" + }, + "onPremisesTagSet":{ + "shape":"OnPremisesTagSet", + "documentation":"

Information about an on-premises instance tag set. The deployment group will include only on-premises instances identified by all the tag groups.

" } }, "documentation":"

Represents the input of an UpdateDeploymentGroup operation.

" diff --git a/services/codepipeline/src/main/resources/codegen-resources/service-2.json b/services/codepipeline/src/main/resources/codegen-resources/service-2.json index 7e3019ec3f8a..940aa20a7f84 100644 --- a/services/codepipeline/src/main/resources/codegen-resources/service-2.json +++ b/services/codepipeline/src/main/resources/codegen-resources/service-2.json @@ -459,7 +459,7 @@ "documentation":"

A system-generated random number that AWS CodePipeline uses to ensure that the job is being worked on by only one job worker. Get this number from the response of the PollForJobs request that returned this job.

" } }, - "documentation":"

Represents the input of an acknowledge job action.

" + "documentation":"

Represents the input of an AcknowledgeJob action.

" }, "AcknowledgeJobOutput":{ "type":"structure", @@ -469,7 +469,7 @@ "documentation":"

Whether the job worker has received the specified job.

" } }, - "documentation":"

Represents the output of an acknowledge job action.

" + "documentation":"

Represents the output of an AcknowledgeJob action.

" }, "AcknowledgeThirdPartyJobInput":{ "type":"structure", @@ -492,7 +492,7 @@ "documentation":"

The clientToken portion of the clientId and clientToken pair used to verify that the calling entity is allowed access to the job and its details.

" } }, - "documentation":"

Represents the input of an acknowledge third party job action.

" + "documentation":"

Represents the input of an AcknowledgeThirdPartyJob action.

" }, "AcknowledgeThirdPartyJobOutput":{ "type":"structure", @@ -502,7 +502,7 @@ "documentation":"

The status information for the third party job, if any.

" } }, - "documentation":"

Represents the output of an acknowledge third party job action.

" + "documentation":"

Represents the output of an AcknowledgeThirdPartyJob action.

" }, "ActionCategory":{ "type":"string", @@ -562,7 +562,7 @@ }, "queryable":{ "shape":"Boolean", - "documentation":"

Indicates that the proprety will be used in conjunction with PollForJobs. When creating a custom action, an action can have up to one queryable property. If it has one, that property must be both required and not secret.

If you create a pipeline with a custom action type, and that custom action contains a queryable property, the value for that configuration property is subject to additional restrictions. The value must be less than or equal to twenty (20) characters. The value can contain only alphanumeric characters, underscores, and hyphens.

" + "documentation":"

Indicates that the property will be used in conjunction with PollForJobs. When creating a custom action, an action can have up to one queryable property. If it has one, that property must be both required and not secret.

If you create a pipeline with a custom action type, and that custom action contains a queryable property, the value for that configuration property is subject to additional restrictions. The value must be less than or equal to twenty (20) characters. The value can contain only alphanumeric characters, underscores, and hyphens.

" }, "description":{ "shape":"Description", @@ -1078,7 +1078,11 @@ "type":"string", "pattern":"[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}" }, - "ClientToken":{"type":"string"}, + "ClientToken":{ + "type":"string", + "max":256, + "min":1 + }, "Code":{"type":"string"}, "ContinuationToken":{"type":"string"}, "CreateCustomActionTypeInput":{ @@ -1120,7 +1124,7 @@ "documentation":"

The details of the output artifact of the action, such as its commit ID.

" } }, - "documentation":"

Represents the input of a create custom action operation.

" + "documentation":"

Represents the input of a CreateCustomActionType operation.

" }, "CreateCustomActionTypeOutput":{ "type":"structure", @@ -1131,7 +1135,7 @@ "documentation":"

Returns information about the details of an action type.

" } }, - "documentation":"

Represents the output of a create custom action operation.

" + "documentation":"

Represents the output of a CreateCustomActionType operation.

" }, "CreatePipelineInput":{ "type":"structure", @@ -1142,7 +1146,7 @@ "documentation":"

Represents the structure of actions and stages to be performed in the pipeline.

" } }, - "documentation":"

Represents the input of a create pipeline action.

" + "documentation":"

Represents the input of a CreatePipeline action.

" }, "CreatePipelineOutput":{ "type":"structure", @@ -1152,7 +1156,7 @@ "documentation":"

Represents the structure of actions and stages to be performed in the pipeline.

" } }, - "documentation":"

Represents the output of a create pipeline action.

" + "documentation":"

Represents the output of a CreatePipeline action.

" }, "CurrentRevision":{ "type":"structure", @@ -1201,7 +1205,7 @@ "documentation":"

The version of the custom action to delete.

" } }, - "documentation":"

Represents the input of a delete custom action operation. The custom action will be marked as deleted.

" + "documentation":"

Represents the input of a DeleteCustomActionType operation. The custom action will be marked as deleted.

" }, "DeletePipelineInput":{ "type":"structure", @@ -1212,7 +1216,7 @@ "documentation":"

The name of the pipeline to be deleted.

" } }, - "documentation":"

Represents the input of a delete pipeline action.

" + "documentation":"

Represents the input of a DeletePipeline action.

" }, "Description":{ "type":"string", @@ -1245,7 +1249,7 @@ "documentation":"

The reason given to the user why a stage is disabled, such as waiting for manual approval or manual tests. This message is displayed in the pipeline console UI.

" } }, - "documentation":"

Represents the input of a disable stage transition input action.

" + "documentation":"

Represents the input of a DisableStageTransition action.

" }, "DisabledReason":{ "type":"string", @@ -1274,7 +1278,7 @@ "documentation":"

Specifies whether artifacts will be allowed to enter the stage and be processed by the actions in that stage (inbound) or whether already-processed artifacts will be allowed to transition to the next stage (outbound).

" } }, - "documentation":"

Represents the input of an enable stage transition action.

" + "documentation":"

Represents the input of an EnableStageTransition action.

" }, "Enabled":{"type":"boolean"}, "EncryptionKey":{ @@ -1384,7 +1388,7 @@ "documentation":"

The unique system-generated ID for the job.

" } }, - "documentation":"

Represents the input of a get job details action.

" + "documentation":"

Represents the input of a GetJobDetails action.

" }, "GetJobDetailsOutput":{ "type":"structure", @@ -1394,7 +1398,7 @@ "documentation":"

The details of the job.

If AWSSessionCredentials is used, a long-running job can call GetJobDetails again to obtain new credentials.

" } }, - "documentation":"

Represents the output of a get job details action.

" + "documentation":"

Represents the output of a GetJobDetails action.

" }, "GetPipelineExecutionInput":{ "type":"structure", @@ -1412,7 +1416,7 @@ "documentation":"

The ID of the pipeline execution about which you want to get execution details.

" } }, - "documentation":"

Represents the input of a get pipeline execution action.

" + "documentation":"

Represents the input of a GetPipelineExecution action.

" }, "GetPipelineExecutionOutput":{ "type":"structure", @@ -1422,7 +1426,7 @@ "documentation":"

Represents information about the execution of a pipeline.

" } }, - "documentation":"

Represents the output of a get pipeline execution action.

" + "documentation":"

Represents the output of a GetPipelineExecution action.

" }, "GetPipelineInput":{ "type":"structure", @@ -1437,7 +1441,7 @@ "documentation":"

The version number of the pipeline. If you do not specify a version, defaults to the most current version.

" } }, - "documentation":"

Represents the input of a get pipeline action.

" + "documentation":"

Represents the input of a GetPipeline action.

" }, "GetPipelineOutput":{ "type":"structure", @@ -1445,9 +1449,13 @@ "pipeline":{ "shape":"PipelineDeclaration", "documentation":"

Represents the structure of actions and stages to be performed in the pipeline.

" + }, + "metadata":{ + "shape":"PipelineMetadata", + "documentation":"

Represents the pipeline metadata information returned as part of the output of a GetPipeline action.

" } }, - "documentation":"

Represents the output of a get pipeline action.

" + "documentation":"

Represents the output of a GetPipeline action.

" }, "GetPipelineStateInput":{ "type":"structure", @@ -1458,7 +1466,7 @@ "documentation":"

The name of the pipeline about which you want to get information.

" } }, - "documentation":"

Represents the input of a get pipeline state action.

" + "documentation":"

Represents the input of a GetPipelineState action.

" }, "GetPipelineStateOutput":{ "type":"structure", @@ -1484,7 +1492,7 @@ "documentation":"

The date and time the pipeline was last updated, in timestamp format.

" } }, - "documentation":"

Represents the output of a get pipeline state action.

" + "documentation":"

Represents the output of a GetPipelineState action.

" }, "GetThirdPartyJobDetailsInput":{ "type":"structure", @@ -1502,7 +1510,7 @@ "documentation":"

The clientToken portion of the clientId and clientToken pair used to verify that the calling entity is allowed access to the job and its details.

" } }, - "documentation":"

Represents the input of a get third party job details action.

" + "documentation":"

Represents the input of a GetThirdPartyJobDetails action.

" }, "GetThirdPartyJobDetailsOutput":{ "type":"structure", @@ -1512,7 +1520,7 @@ "documentation":"

The details of the job, including any protected values defined for the job.

" } }, - "documentation":"

Represents the output of a get third party job details action.

" + "documentation":"

Represents the output of a GetThirdPartyJobDetails action.

" }, "InputArtifact":{ "type":"structure", @@ -1726,7 +1734,7 @@ "documentation":"

An identifier that was returned from the previous list action types call, which can be used to return the next set of action types in the list.

" } }, - "documentation":"

Represents the input of a list action types action.

" + "documentation":"

Represents the input of a ListActionTypes action.

" }, "ListActionTypesOutput":{ "type":"structure", @@ -1741,7 +1749,7 @@ "documentation":"

If the amount of returned information is significantly large, an identifier is also returned which can be used in a subsequent list action types call to return the next set of action types in the list.

" } }, - "documentation":"

Represents the output of a list action types action.

" + "documentation":"

Represents the output of a ListActionTypes action.

" }, "ListPipelineExecutionsInput":{ "type":"structure", @@ -1757,10 +1765,10 @@ }, "nextToken":{ "shape":"NextToken", - "documentation":"

The token that was returned from the previous list pipeline executions call, which can be used to return the next set of pipeline executions in the list.

" + "documentation":"

The token that was returned from the previous ListPipelineExecutions call, which can be used to return the next set of pipeline executions in the list.

" } }, - "documentation":"

Represents the input of a list pipeline executions action.

" + "documentation":"

Represents the input of a ListPipelineExecutions action.

" }, "ListPipelineExecutionsOutput":{ "type":"structure", @@ -1771,10 +1779,10 @@ }, "nextToken":{ "shape":"NextToken", - "documentation":"

A token that can be used in the next list pipeline executions call to return the next set of pipeline executions. To view all items in the list, continue to call this operation with each subsequent token until no more nextToken values are returned.

" + "documentation":"

A token that can be used in the next ListPipelineExecutions call. To view all items in the list, continue to call this operation with each subsequent token until no more nextToken values are returned.

" } }, - "documentation":"

Represents the output of a list pipeline executions action.

" + "documentation":"

Represents the output of a ListPipelineExecutions action.

" }, "ListPipelinesInput":{ "type":"structure", @@ -1784,7 +1792,7 @@ "documentation":"

An identifier that was returned from the previous list pipelines call, which can be used to return the next set of pipelines in the list.

" } }, - "documentation":"

Represents the input of a list pipelines action.

" + "documentation":"

Represents the input of a ListPipelines action.

" }, "ListPipelinesOutput":{ "type":"structure", @@ -1798,7 +1806,7 @@ "documentation":"

If the amount of returned information is significantly large, an identifier is also returned which can be used in a subsequent list pipelines call to return the next set of pipelines in the list.

" } }, - "documentation":"

Represents the output of a list pipelines action.

" + "documentation":"

Represents the output of a ListPipelines action.

" }, "MaxBatchSize":{ "type":"integer", @@ -1820,7 +1828,11 @@ "max":5, "min":0 }, - "NextToken":{"type":"string"}, + "NextToken":{ + "type":"string", + "max":2048, + "min":1 + }, "Nonce":{"type":"string"}, "NotLatestPipelineExecutionException":{ "type":"structure", @@ -1849,6 +1861,10 @@ "max":100, "min":0 }, + "PipelineArn":{ + "type":"string", + "pattern":"arn:aws(-[\\w]+)*:codepipeline:.+:[0-9]{12}:.+" + }, "PipelineContext":{ "type":"structure", "members":{ @@ -1862,7 +1878,7 @@ }, "action":{ "shape":"ActionContext", - "documentation":"

" + "documentation":"

The context of an action to a job worker within the stage of a pipeline.

" } }, "documentation":"

Represents information about a pipeline to a job worker.

" @@ -1886,7 +1902,7 @@ }, "artifactStore":{ "shape":"ArtifactStore", - "documentation":"

Represents the context of an action within the stage of a pipeline to a job worker.

" + "documentation":"

Represents information about the Amazon S3 bucket where artifacts are stored for the pipeline.

" }, "stages":{ "shape":"PipelineStageDeclarationList", @@ -1916,7 +1932,7 @@ }, "status":{ "shape":"PipelineExecutionStatus", - "documentation":"

The status of the pipeline execution.

" + "documentation":"

The status of the pipeline execution.

" }, "artifactRevisions":{ "shape":"ArtifactRevisionList", @@ -1954,7 +1970,7 @@ }, "status":{ "shape":"PipelineExecutionStatus", - "documentation":"

The status of the pipeline execution.

" + "documentation":"

The status of the pipeline execution.

" }, "startTime":{ "shape":"Timestamp", @@ -1975,6 +1991,24 @@ "type":"list", "member":{"shape":"PipelineSummary"} }, + "PipelineMetadata":{ + "type":"structure", + "members":{ + "pipelineArn":{ + "shape":"PipelineArn", + "documentation":"

The Amazon Resource Name (ARN) of the pipeline.

" + }, + "created":{ + "shape":"Timestamp", + "documentation":"

The date and time the pipeline was created, in timestamp format.

" + }, + "updated":{ + "shape":"Timestamp", + "documentation":"

The date and time the pipeline was last updated, in timestamp format.

" + } + }, + "documentation":"

Information about a pipeline.
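For illustration, a minimal sketch of reading this metadata through the generated AWS SDK for Java 2.x client (client and accessor names assumed from the usual codegen conventions; the pipeline name is a placeholder):

```java
import software.amazon.awssdk.services.codepipeline.CodePipelineClient;
import software.amazon.awssdk.services.codepipeline.model.GetPipelineRequest;
import software.amazon.awssdk.services.codepipeline.model.GetPipelineResponse;

public class PipelineMetadataSketch {
    public static void main(String[] args) {
        try (CodePipelineClient codePipeline = CodePipelineClient.create()) {
            GetPipelineResponse response = codePipeline.getPipeline(
                    GetPipelineRequest.builder()
                            .name("MyFirstPipeline") // placeholder pipeline name
                            .build());
            // The metadata is returned alongside the pipeline structure itself.
            System.out.println(response.metadata().pipelineArn());
            System.out.println(response.metadata().created());
        }
    }
}
```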

" + }, "PipelineName":{ "type":"string", "max":100, @@ -2049,7 +2083,7 @@ "documentation":"

A map of property names and values. For an action type with no queryable properties, this value must be null or an empty map. For an action type with a queryable property, you must supply that property as a key in the map. Only jobs whose action configuration matches the mapped value will be returned.

" } }, - "documentation":"

Represents the input of a poll for jobs action.

" + "documentation":"

Represents the input of a PollForJobs action.

" }, "PollForJobsOutput":{ "type":"structure", @@ -2059,7 +2093,7 @@ "documentation":"

Information about the jobs to take action on.

" } }, - "documentation":"

Represents the output of a poll for jobs action.

" + "documentation":"

Represents the output of a PollForJobs action.

" }, "PollForThirdPartyJobsInput":{ "type":"structure", @@ -2074,7 +2108,7 @@ "documentation":"

The maximum number of jobs to return in a poll for jobs call.

" } }, - "documentation":"

Represents the input of a poll for third party jobs action.

" + "documentation":"

Represents the input of a PollForThirdPartyJobs action.

" }, "PollForThirdPartyJobsOutput":{ "type":"structure", @@ -2084,7 +2118,7 @@ "documentation":"

Information about the jobs to take action on.

" } }, - "documentation":"

Represents the output of a poll for third party jobs action.

" + "documentation":"

Represents the output of a PollForThirdPartyJobs action.

" }, "PutActionRevisionInput":{ "type":"structure", @@ -2112,7 +2146,7 @@ "documentation":"

Represents information about the version (or revision) of an action.

" } }, - "documentation":"

Represents the input of a put action revision action.

" + "documentation":"

Represents the input of a PutActionRevision action.

" }, "PutActionRevisionOutput":{ "type":"structure", @@ -2126,7 +2160,7 @@ "documentation":"

The ID of the current workflow state of the pipeline.

" } }, - "documentation":"

Represents the output of a put action revision action.

" + "documentation":"

Represents the output of a PutActionRevision action.

" }, "PutApprovalResultInput":{ "type":"structure", @@ -2159,7 +2193,7 @@ "documentation":"

The system-generated token used to identify a unique approval request. The token for each open approval request can be obtained using the GetPipelineState action and is used to validate that the approval request corresponding to this token is still valid.

" } }, - "documentation":"

Represents the input of a put approval result action.

" + "documentation":"

Represents the input of a PutApprovalResult action.

" }, "PutApprovalResultOutput":{ "type":"structure", @@ -2169,7 +2203,7 @@ "documentation":"

The timestamp showing when the approval or rejection was submitted.

" } }, - "documentation":"

Represents the output of a put approval result action.

" + "documentation":"

Represents the output of a PutApprovalResult action.

" }, "PutJobFailureResultInput":{ "type":"structure", @@ -2187,7 +2221,7 @@ "documentation":"

The details about the failure of a job.

" } }, - "documentation":"

Represents the input of a put job failure result action.

" + "documentation":"

Represents the input of a PutJobFailureResult action.

" }, "PutJobSuccessResultInput":{ "type":"structure", @@ -2210,7 +2244,7 @@ "documentation":"

The execution details of the successful job, such as the actions taken by the job worker.

" } }, - "documentation":"

Represents the input of a put job success result action.

" + "documentation":"

Represents the input of a PutJobSuccessResult action.

" }, "PutThirdPartyJobFailureResultInput":{ "type":"structure", @@ -2233,7 +2267,7 @@ "documentation":"

Represents information about failure details.

" } }, - "documentation":"

Represents the input of a third party job failure result action.

" + "documentation":"

Represents the input of a PutThirdPartyJobFailureResult action.

" }, "PutThirdPartyJobSuccessResultInput":{ "type":"structure", @@ -2263,7 +2297,7 @@ "documentation":"

The details of the actions taken and results produced on an artifact as it passes through stages in the pipeline.

" } }, - "documentation":"

Represents the input of a put third party job success result action.

" + "documentation":"

Represents the input of a PutThirdPartyJobSuccessResult action.

" }, "QueryParamMap":{ "type":"map", @@ -2298,7 +2332,7 @@ "documentation":"

The scope of the retry attempt. Currently, the only supported value is FAILED_ACTIONS.

" } }, - "documentation":"

Represents the input of a retry stage execution action.

" + "documentation":"

Represents the input of a RetryStageExecution action.

" }, "RetryStageExecutionOutput":{ "type":"structure", @@ -2308,7 +2342,7 @@ "documentation":"

The ID of the current workflow execution in the failed stage.

" } }, - "documentation":"

Represents the output of a retry stage execution action.

" + "documentation":"

Represents the output of a RetryStageExecution action.

" }, "Revision":{ "type":"string", @@ -2484,7 +2518,7 @@ "documentation":"

The name of the pipeline to start.

" } }, - "documentation":"

Represents the input of a start pipeline execution action.

" + "documentation":"

Represents the input of a StartPipelineExecution action.

" }, "StartPipelineExecutionOutput":{ "type":"structure", @@ -2494,7 +2528,7 @@ "documentation":"

The unique system-generated ID of the pipeline execution that was started.

" } }, - "documentation":"

Represents the output of a start pipeline execution action.

" + "documentation":"

Represents the output of a StartPipelineExecution action.

" }, "ThirdPartyJob":{ "type":"structure", @@ -2608,7 +2642,7 @@ "documentation":"

The name of the pipeline to be updated.

" } }, - "documentation":"

Represents the input of an update pipeline action.

" + "documentation":"

Represents the input of an UpdatePipeline action.

" }, "UpdatePipelineOutput":{ "type":"structure", @@ -2618,7 +2652,7 @@ "documentation":"

The structure of the updated pipeline.

" } }, - "documentation":"

Represents the output of an update pipeline action.

" + "documentation":"

Represents the output of an UpdatePipeline action.

" }, "Url":{ "type":"string", @@ -2644,5 +2678,5 @@ "pattern":"[0-9A-Za-z_-]+" } }, - "documentation":"AWS CodePipeline

Overview

This is the AWS CodePipeline API Reference. This guide provides descriptions of the actions and data types for AWS CodePipeline. Some functionality for your pipeline is only configurable through the API. For additional information, see the AWS CodePipeline User Guide.

You can use the AWS CodePipeline API to work with pipelines, stages, actions, gates, and transitions, as described below.

Pipelines are models of automated release processes. Each pipeline is uniquely named, and consists of actions, gates, and stages.

You can work with pipelines by calling:

Pipelines include stages, which are logical groupings of gates and actions. Each stage contains one or more actions that must complete before the next stage begins. A stage will result in success or failure. If a stage fails, then the pipeline stops at that stage and will remain stopped until either a new version of an artifact appears in the source location, or a user takes action to re-run the most recent artifact through the pipeline. You can call GetPipelineState, which displays the status of a pipeline, including the status of stages in the pipeline, or GetPipeline, which returns the entire structure of the pipeline, including the stages of that pipeline. For more information about the structure of stages and actions, also refer to the AWS CodePipeline Pipeline Structure Reference.

Pipeline stages include actions, which are categorized into categories such as source or build actions performed within a stage of a pipeline. For example, you can use a source action to import artifacts into a pipeline from a source such as Amazon S3. Like stages, you do not work with actions directly in most cases, but you do define and interact with actions when working with pipeline operations such as CreatePipeline and GetPipelineState.

Pipelines also include transitions, which allow the transition of artifacts from one stage to the next in a pipeline after the actions in one stage complete.

You can work with transitions by calling:

Using the API to integrate with AWS CodePipeline

For third-party integrators or developers who want to create their own integrations with AWS CodePipeline, the expected sequence varies from the standard API user. In order to integrate with AWS CodePipeline, developers will need to work with the following items:

Jobs, which are instances of an action. For example, a job for a source action might import a revision of an artifact from a source.

You can work with jobs by calling:

Third party jobs, which are instances of an action created by a partner action and integrated into AWS CodePipeline. Partner actions are created by members of the AWS Partner Network.

You can work with third party jobs by calling:

" + "documentation":"AWS CodePipeline

Overview

This is the AWS CodePipeline API Reference. This guide provides descriptions of the actions and data types for AWS CodePipeline. Some functionality for your pipeline is only configurable through the API. For additional information, see the AWS CodePipeline User Guide.

You can use the AWS CodePipeline API to work with pipelines, stages, actions, gates, and transitions, as described below.

Pipelines are models of automated release processes. Each pipeline is uniquely named, and consists of actions, gates, and stages.

You can work with pipelines by calling:

Pipelines include stages, which are logical groupings of gates and actions. Each stage contains one or more actions that must complete before the next stage begins. A stage will result in success or failure. If a stage fails, then the pipeline stops at that stage and will remain stopped until either a new version of an artifact appears in the source location, or a user takes action to re-run the most recent artifact through the pipeline. You can call GetPipelineState, which displays the status of a pipeline, including the status of stages in the pipeline, or GetPipeline, which returns the entire structure of the pipeline, including the stages of that pipeline. For more information about the structure of stages and actions, also refer to the AWS CodePipeline Pipeline Structure Reference.

Pipeline stages include actions, which are categorized into categories such as source or build actions performed within a stage of a pipeline. For example, you can use a source action to import artifacts into a pipeline from a source such as Amazon S3. Like stages, you do not work with actions directly in most cases, but you do define and interact with actions when working with pipeline operations such as CreatePipeline and GetPipelineState.

Pipelines also include transitions, which allow the transition of artifacts from one stage to the next in a pipeline after the actions in one stage complete.

You can work with transitions by calling:

Using the API to integrate with AWS CodePipeline

For third-party integrators or developers who want to create their own integrations with AWS CodePipeline, the expected sequence varies from the standard API user. In order to integrate with AWS CodePipeline, developers will need to work with the following items:

Jobs, which are instances of an action. For example, a job for a source action might import a revision of an artifact from a source.

You can work with jobs by calling:

Third party jobs, which are instances of an action created by a partner action and integrated into AWS CodePipeline. Partner actions are created by members of the AWS Partner Network.

You can work with third party jobs by calling:

" } diff --git a/services/codestar/src/main/resources/codegen-resources/service-2.json b/services/codestar/src/main/resources/codegen-resources/service-2.json index 92f54c6c488d..6557ee76ec7a 100755 --- a/services/codestar/src/main/resources/codegen-resources/service-2.json +++ b/services/codestar/src/main/resources/codegen-resources/service-2.json @@ -168,6 +168,21 @@ ], "documentation":"

Lists resources associated with a project in AWS CodeStar.

" }, + "ListTagsForProject":{ + "name":"ListTagsForProject", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"ListTagsForProjectRequest"}, + "output":{"shape":"ListTagsForProjectResult"}, + "errors":[ + {"shape":"ProjectNotFoundException"}, + {"shape":"ValidationException"}, + {"shape":"InvalidNextTokenException"} + ], + "documentation":"

Gets the tags for a project.

" + }, "ListTeamMembers":{ "name":"ListTeamMembers", "http":{ @@ -197,6 +212,38 @@ ], "documentation":"

Lists all the user profiles configured for your AWS account in AWS CodeStar.

" }, + "TagProject":{ + "name":"TagProject", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"TagProjectRequest"}, + "output":{"shape":"TagProjectResult"}, + "errors":[ + {"shape":"ProjectNotFoundException"}, + {"shape":"ValidationException"}, + {"shape":"LimitExceededException"}, + {"shape":"ConcurrentModificationException"} + ], + "documentation":"

Adds tags to a project.

" + }, + "UntagProject":{ + "name":"UntagProject", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"UntagProjectRequest"}, + "output":{"shape":"UntagProjectResult"}, + "errors":[ + {"shape":"ProjectNotFoundException"}, + {"shape":"ValidationException"}, + {"shape":"LimitExceededException"}, + {"shape":"ConcurrentModificationException"} + ], + "documentation":"

Removes tags from a project.

" + }, "UpdateProject":{ "name":"UpdateProject", "http":{ @@ -260,11 +307,11 @@ }, "clientRequestToken":{ "shape":"ClientRequestToken", - "documentation":"

A user- or system-generated token that identifies the entity that requested the team member association to the project. This token can be used to repeat the request.

" + "documentation":"

A user- or system-generated token that identifies the entity that requested the team member association to the project. This token can be used to repeat the request.

" }, "userArn":{ "shape":"UserArn", - "documentation":"

The Amazon Resource Name (ARN) for the IAM user you want to add to the DevHub project.

" + "documentation":"

The Amazon Resource Name (ARN) for the IAM user you want to add to the AWS CodeStar project.

" }, "projectRole":{ "shape":"Role", @@ -282,7 +329,7 @@ "members":{ "clientRequestToken":{ "shape":"ClientRequestToken", - "documentation":"

The user- or system-generated token from the initial request that can be used to repeat the request.

" + "documentation":"

The user- or system-generated token from the initial request that can be used to repeat the request.

" } } }, @@ -575,7 +622,8 @@ "type":"string", "max":128, "min":3, - "pattern":"^[\\w-.+]+@[\\w-.+]+$" + "pattern":"^[\\w-.+]+@[\\w-.+]+$", + "sensitive":true }, "InvalidNextTokenException":{ "type":"structure", @@ -641,7 +689,7 @@ }, "maxResults":{ "shape":"MaxResults", - "documentation":"

he maximum amount of data that can be contained in a single set of results.

", + "documentation":"

The maximum amount of data that can be contained in a single set of results.

", "box":true } } @@ -659,6 +707,38 @@ } } }, + "ListTagsForProjectRequest":{ + "type":"structure", + "required":["id"], + "members":{ + "id":{ + "shape":"ProjectId", + "documentation":"

The ID of the project to get tags for.

" + }, + "nextToken":{ + "shape":"PaginationToken", + "documentation":"

Reserved for future use.

" + }, + "maxResults":{ + "shape":"MaxResults", + "documentation":"

Reserved for future use.

", + "box":true + } + } + }, + "ListTagsForProjectResult":{ + "type":"structure", + "members":{ + "tags":{ + "shape":"Tags", + "documentation":"

The tags for the project.

" + }, + "nextToken":{ + "shape":"PaginationToken", + "documentation":"

Reserved for future use.

" + } + } + }, "ListTeamMembersRequest":{ "type":"structure", "required":["projectId"], @@ -727,7 +807,7 @@ }, "PaginationToken":{ "type":"string", - "max":256, + "max":512, "min":1, "pattern":"^[\\w/+=]+$" }, @@ -839,6 +919,52 @@ "type":"string", "pattern":"^arn:aws[^:\\s]*:cloudformation:[^:\\s]+:[0-9]{12}:stack\\/[^:\\s]+\\/[^:\\s]+$" }, + "TagKey":{ + "type":"string", + "max":128, + "min":1, + "pattern":"^([\\p{L}\\p{Z}\\p{N}_.:/=+\\-@]*)$" + }, + "TagKeys":{ + "type":"list", + "member":{"shape":"TagKey"} + }, + "TagProjectRequest":{ + "type":"structure", + "required":[ + "id", + "tags" + ], + "members":{ + "id":{ + "shape":"ProjectId", + "documentation":"

The ID of the project you want to add a tag to.

" + }, + "tags":{ + "shape":"Tags", + "documentation":"

The tags you want to add to the project.

" + } + } + }, + "TagProjectResult":{ + "type":"structure", + "members":{ + "tags":{ + "shape":"Tags", + "documentation":"

The tags for the project.
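A hedged sketch of adding a tag with the generated AWS SDK for Java 2.x client (the CodeStarClient name and builder methods are assumed from the usual codegen conventions; the project ID and the tag key/value are placeholders):

```java
import java.util.Collections;
import software.amazon.awssdk.services.codestar.CodeStarClient;
import software.amazon.awssdk.services.codestar.model.TagProjectRequest;
import software.amazon.awssdk.services.codestar.model.TagProjectResponse;

public class TagProjectSketch {
    public static void main(String[] args) {
        try (CodeStarClient codeStar = CodeStarClient.create()) {
            TagProjectResponse response = codeStar.tagProject(
                    TagProjectRequest.builder()
                            .id("my-project-id")                                    // placeholder project ID
                            .tags(Collections.singletonMap("Environment", "Test")) // placeholder tag key/value
                            .build());
            System.out.println(response.tags()); // all tags now on the project
        }
    }
}
```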

" + } + } + }, + "TagValue":{ + "type":"string", + "max":256, + "pattern":"^([\\p{L}\\p{Z}\\p{N}_.:/=+\\-@]*)$" + }, + "Tags":{ + "type":"map", + "key":{"shape":"TagKey"}, + "value":{"shape":"TagValue"} + }, "TeamMember":{ "type":"structure", "required":[ @@ -852,7 +978,7 @@ }, "projectRole":{ "shape":"Role", - "documentation":"

The role assigned to the user in the project. Project roles have different levels of access. For more information, see Working with Teams in the AWS CodeStar User Guide.

" + "documentation":"

The role assigned to the user in the project. Project roles have different levels of access. For more information, see Working with Teams in the AWS CodeStar User Guide.

" }, "remoteAccessAllowed":{ "shape":"RemoteAccessAllowed", @@ -880,6 +1006,28 @@ "type":"list", "member":{"shape":"TeamMember"} }, + "UntagProjectRequest":{ + "type":"structure", + "required":[ + "id", + "tags" + ], + "members":{ + "id":{ + "shape":"ProjectId", + "documentation":"

The ID of the project to remove tags from.

" + }, + "tags":{ + "shape":"TagKeys", + "documentation":"

The tags to remove from the project.

" + } + } + }, + "UntagProjectResult":{ + "type":"structure", + "members":{ + } + }, "UpdateProjectRequest":{ "type":"structure", "required":["id"], @@ -920,7 +1068,7 @@ }, "projectRole":{ "shape":"Role", - "documentation":"

The role assigned to the user in the project. Project roles have different levels of access. For more information, see Working with Teams in the AWS CodeStar User Guide.

" + "documentation":"

The role assigned to the user in the project. Project roles have different levels of access. For more information, see Working with Teams in the AWS CodeStar User Guide.

" }, "remoteAccessAllowed":{ "shape":"RemoteAccessAllowed", @@ -1003,7 +1151,7 @@ "type":"string", "max":95, "min":32, - "pattern":"arn:aws:iam::\\d{12}:user\\/[\\w-]+" + "pattern":"^arn:aws:iam::\\d{12}:user(?:(\\u002F)|(\\u002F[\\u0021-\\u007E]+\\u002F))[\\w+=,.@-]+$" }, "UserProfileAlreadyExistsException":{ "type":"structure", @@ -1059,5 +1207,5 @@ "exception":true } }, - "documentation":"AWS CodeStar

This is the API reference for AWS CodeStar. This reference provides descriptions of the operations and data types for the AWS CodeStar API along with usage examples.

You can use the AWS CodeStar API to work with:

Projects and their resources, by calling the following:

Teams and team members, by calling the following:

Users, by calling the following:

" + "documentation":"AWS CodeStar

This is the API reference for AWS CodeStar. This reference provides descriptions of the operations and data types for the AWS CodeStar API along with usage examples.

You can use the AWS CodeStar API to work with:

Projects and their resources, by calling the following:

Teams and team members, by calling the following:

Users, by calling the following:

" } diff --git a/services/cognitoidp/src/main/resources/codegen-resources/service-2.json b/services/cognitoidp/src/main/resources/codegen-resources/service-2.json index 057e253e1561..37e9a0697da2 100644 --- a/services/cognitoidp/src/main/resources/codegen-resources/service-2.json +++ b/services/cognitoidp/src/main/resources/codegen-resources/service-2.json @@ -132,6 +132,25 @@ ], "documentation":"

Deletes the user attributes in a user pool as an administrator. Works on any user.

Requires developer credentials.

" }, + "AdminDisableProviderForUser":{ + "name":"AdminDisableProviderForUser", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"AdminDisableProviderForUserRequest"}, + "output":{"shape":"AdminDisableProviderForUserResponse"}, + "errors":[ + {"shape":"ResourceNotFoundException"}, + {"shape":"InvalidParameterException"}, + {"shape":"TooManyRequestsException"}, + {"shape":"NotAuthorizedException"}, + {"shape":"UserNotFoundException"}, + {"shape":"AliasExistsException"}, + {"shape":"InternalErrorException"} + ], + "documentation":"

Disables the user from signing in with the specified external (SAML or social) identity provider. If the user to disable is a Cognito User Pools native username + password user, they are not permitted to use their password to sign in. If the user to disable is a linked external IdP user, any link between that user and an existing user is removed. The next time the external user (no longer attached to the previously linked DestinationUser) signs in, they must create a new user account. See AdminLinkProviderForUser.

This action is enabled only for admin access and requires developer credentials.

The ProviderName must match the value specified when creating an IdP for the pool.

To disable a native username + password user, the ProviderName value must be Cognito and the ProviderAttributeName must be Cognito_Subject, with the ProviderAttributeValue being the name that is used in the user pool for the user.

The ProviderAttributeName must always be Cognito_Subject for social identity providers. The ProviderAttributeValue must always be the exact subject that was used when the user was originally linked as a source user.

For de-linking a SAML identity, there are two scenarios. If the linked identity has not yet been used to sign-in, the ProviderAttributeName and ProviderAttributeValue must be the same values that were used for the SourceUser when the identities were originally linked in the AdminLinkProviderForUser call. (If the linking was done with ProviderAttributeName set to Cognito_Subject, the same applies here). However, if the user has already signed in, the ProviderAttributeName must be Cognito_Subject and ProviderAttributeValue must be the subject of the SAML assertion.
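As a concrete illustration of the rules above, the following AWS SDK for Java 2.x sketch unlinks a social identity. The class and method names (CognitoIdentityProviderClient, AdminDisableProviderForUserRequest, ProviderUserIdentifierType) are assumed from the generated-SDK naming for this model, and the pool ID and subject value are placeholders.

```java
import software.amazon.awssdk.services.cognitoidentityprovider.CognitoIdentityProviderClient;
import software.amazon.awssdk.services.cognitoidentityprovider.model.AdminDisableProviderForUserRequest;
import software.amazon.awssdk.services.cognitoidentityprovider.model.ProviderUserIdentifierType;

public class AdminDisableProviderSketch {
    public static void main(String[] args) {
        try (CognitoIdentityProviderClient cognito = CognitoIdentityProviderClient.create()) {
            cognito.adminDisableProviderForUser(AdminDisableProviderForUserRequest.builder()
                    .userPoolId("us-east-1_EXAMPLE")                  // hypothetical user pool ID
                    .user(ProviderUserIdentifierType.builder()
                            .providerName("Facebook")                 // must match the IdP name configured on the pool
                            .providerAttributeName("Cognito_Subject") // always Cognito_Subject for social providers
                            .providerAttributeValue("10203040506070") // the exact subject used when the user was linked
                            .build())
                    .build());
        }
    }
}
```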

" + }, "AdminDisableUser":{ "name":"AdminDisableUser", "http":{ @@ -249,6 +268,25 @@ ], "documentation":"

Initiates the authentication flow, as an administrator.

Requires developer credentials.

" }, + "AdminLinkProviderForUser":{ + "name":"AdminLinkProviderForUser", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"AdminLinkProviderForUserRequest"}, + "output":{"shape":"AdminLinkProviderForUserResponse"}, + "errors":[ + {"shape":"ResourceNotFoundException"}, + {"shape":"InvalidParameterException"}, + {"shape":"TooManyRequestsException"}, + {"shape":"NotAuthorizedException"}, + {"shape":"UserNotFoundException"}, + {"shape":"AliasExistsException"}, + {"shape":"InternalErrorException"} + ], + "documentation":"

Links an existing user account in a user pool (DestinationUser) to an identity from an external identity provider (SourceUser) based on a specified attribute name and value from the external identity provider. This allows you to create a link from the existing user account to an external federated user identity that has not yet been used to sign in, so that the federated user identity can be used to sign in as the existing user account.

For example, if there is an existing user with a username and password, this API links that user to a federated user identity, so that when the federated user identity is used, the user signs in as the existing user account.

Because this API allows a user with an external federated identity to sign in as an existing user in the user pool, it is critical that it only be used with external identity providers and provider attributes that have been trusted by the application owner.

See also AdminDisableProviderForUser.

This action is enabled only for admin access and requires developer credentials.
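A companion sketch for the linking side, under the same naming assumptions as the AdminDisableProviderForUser example above; the pool ID, username, and Facebook subject are placeholders.

```java
import software.amazon.awssdk.services.cognitoidentityprovider.CognitoIdentityProviderClient;
import software.amazon.awssdk.services.cognitoidentityprovider.model.AdminLinkProviderForUserRequest;
import software.amazon.awssdk.services.cognitoidentityprovider.model.ProviderUserIdentifierType;

public class AdminLinkProviderSketch {
    public static void main(String[] args) {
        try (CognitoIdentityProviderClient cognito = CognitoIdentityProviderClient.create()) {
            cognito.adminLinkProviderForUser(AdminLinkProviderForUserRequest.builder()
                    .userPoolId("us-east-1_EXAMPLE")
                    .destinationUser(ProviderUserIdentifierType.builder() // the existing user in the pool
                            .providerName("Cognito")
                            .providerAttributeValue("existing-username")  // must match the user pool username; the attribute name is ignored
                            .build())
                    .sourceUser(ProviderUserIdentifierType.builder()      // the federated identity that has not yet signed in
                            .providerName("Facebook")
                            .providerAttributeName("Cognito_Subject")     // required for social identity providers
                            .providerAttributeValue("10203040506070")     // the Facebook id claim for that user
                            .build())
                    .build());
        }
    }
}
```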

" + }, "AdminListDevices":{ "name":"AdminListDevices", "http":{ @@ -320,6 +358,9 @@ {"shape":"TooManyRequestsException"}, {"shape":"LimitExceededException"}, {"shape":"UserNotFoundException"}, + {"shape":"InvalidSmsRoleAccessPolicyException"}, + {"shape":"InvalidEmailRoleAccessPolicyException"}, + {"shape":"InvalidSmsRoleTrustRelationshipException"}, {"shape":"InternalErrorException"} ], "documentation":"

Resets the specified user's password in a user pool as an administrator. Works on any user.

When a developer calls this API, the current password is invalidated, so it must be changed. If a user tries to sign in after the API is called, the app will get a PasswordResetRequiredException exception back and should direct the user down the flow to reset the password, which is the same as the forgot password flow. In addition, if the user pool has phone verification selected and a verified phone number exists for the user, or if email verification is selected and a verified email exists for the user, calling this API will also result in sending a message to the end user with the code to change their password.

Requires developer credentials.

" @@ -571,6 +612,24 @@ ], "documentation":"

Creates an identity provider for a user pool.

" }, + "CreateResourceServer":{ + "name":"CreateResourceServer", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"CreateResourceServerRequest"}, + "output":{"shape":"CreateResourceServerResponse"}, + "errors":[ + {"shape":"InvalidParameterException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"NotAuthorizedException"}, + {"shape":"TooManyRequestsException"}, + {"shape":"LimitExceededException"}, + {"shape":"InternalErrorException"} + ], + "documentation":"

Creates a new OAuth2.0 resource server and defines custom scopes in it.

" + }, "CreateUserImportJob":{ "name":"CreateUserImportJob", "http":{ @@ -680,6 +739,22 @@ ], "documentation":"

Deletes an identity provider for a user pool.

" }, + "DeleteResourceServer":{ + "name":"DeleteResourceServer", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"DeleteResourceServerRequest"}, + "errors":[ + {"shape":"InvalidParameterException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"NotAuthorizedException"}, + {"shape":"TooManyRequestsException"}, + {"shape":"InternalErrorException"} + ], + "documentation":"

Deletes a resource server.

" + }, "DeleteUser":{ "name":"DeleteUser", "http":{ @@ -697,7 +772,7 @@ {"shape":"UserNotConfirmedException"}, {"shape":"InternalErrorException"} ], - "documentation":"

Allows a user to delete one's self.

", + "documentation":"

Allows a user to delete himself or herself.

", "authtype":"none" }, "DeleteUserAttributes":{ @@ -787,6 +862,23 @@ ], "documentation":"

Gets information about a specific identity provider.

" }, + "DescribeResourceServer":{ + "name":"DescribeResourceServer", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"DescribeResourceServerRequest"}, + "output":{"shape":"DescribeResourceServerResponse"}, + "errors":[ + {"shape":"InvalidParameterException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"NotAuthorizedException"}, + {"shape":"TooManyRequestsException"}, + {"shape":"InternalErrorException"} + ], + "documentation":"

Describes a resource server.

" + }, "DescribeUserImportJob":{ "name":"DescribeUserImportJob", "http":{ @@ -975,6 +1067,23 @@ ], "documentation":"

Gets the specified identity provider.

" }, + "GetUICustomization":{ + "name":"GetUICustomization", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"GetUICustomizationRequest"}, + "output":{"shape":"GetUICustomizationResponse"}, + "errors":[ + {"shape":"InvalidParameterException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"NotAuthorizedException"}, + {"shape":"TooManyRequestsException"}, + {"shape":"InternalErrorException"} + ], + "documentation":"

Gets the UI customization information for a particular app client's app UI, if one is set. If nothing is set for the particular client, but there is an existing pool-level customization (the app clientId will be ALL), then that is returned. If nothing is present, then an empty shape is returned.

" + }, "GetUser":{ "name":"GetUser", "http":{ @@ -1123,6 +1232,23 @@ ], "documentation":"

Lists information about all identity providers for a user pool.

" }, + "ListResourceServers":{ + "name":"ListResourceServers", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"ListResourceServersRequest"}, + "output":{"shape":"ListResourceServersResponse"}, + "errors":[ + {"shape":"InvalidParameterException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"NotAuthorizedException"}, + {"shape":"TooManyRequestsException"}, + {"shape":"InternalErrorException"} + ], + "documentation":"

Lists the resource servers for a user pool.

" + }, "ListUserImportJobs":{ "name":"ListUserImportJobs", "http":{ @@ -1265,6 +1391,23 @@ ], "documentation":"

Responds to the authentication challenge.

" }, + "SetUICustomization":{ + "name":"SetUICustomization", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"SetUICustomizationRequest"}, + "output":{"shape":"SetUICustomizationResponse"}, + "errors":[ + {"shape":"InvalidParameterException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"NotAuthorizedException"}, + {"shape":"TooManyRequestsException"}, + {"shape":"InternalErrorException"} + ], + "documentation":"

Sets the UI customization information for a user pool's built-in app UI.

You can specify app UI customization settings for a single client (with a specific clientId) or for all clients (by setting the clientId to ALL). If you specify ALL, the default configuration will be used for every client that has no UI customization set previously. If you specify UI customization settings for a particular client, it will no longer fall back to the ALL configuration.

To use this API, your user pool must have a domain associated with it. Otherwise, there is no place to host the app's pages, and the service will throw an error.

" + }, "SetUserSettings":{ "name":"SetUserSettings", "http":{ @@ -1404,6 +1547,23 @@ ], "documentation":"

Updates identity provider information for a user pool.

" }, + "UpdateResourceServer":{ + "name":"UpdateResourceServer", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"UpdateResourceServerRequest"}, + "output":{"shape":"UpdateResourceServerResponse"}, + "errors":[ + {"shape":"InvalidParameterException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"NotAuthorizedException"}, + {"shape":"TooManyRequestsException"}, + {"shape":"InternalErrorException"} + ], + "documentation":"

Updates the name and scopes of a resource server. All other fields are read-only.

" + }, "UpdateUserAttributes":{ "name":"UpdateUserAttributes", "http":{ @@ -1639,7 +1799,7 @@ "members":{ "User":{ "shape":"UserType", - "documentation":"

The user returned in the request to create a new user.

" + "documentation":"

The newly created user.

" } }, "documentation":"

Represents the response from the server to the request to create the user.

" @@ -1696,6 +1856,28 @@ }, "documentation":"

Represents the request to delete a user as an administrator.

" }, + "AdminDisableProviderForUserRequest":{ + "type":"structure", + "required":[ + "UserPoolId", + "User" + ], + "members":{ + "UserPoolId":{ + "shape":"StringType", + "documentation":"

The user pool ID for the user pool.

" + }, + "User":{ + "shape":"ProviderUserIdentifierType", + "documentation":"

The user to be disabled.

" + } + } + }, + "AdminDisableProviderForUserResponse":{ + "type":"structure", + "members":{ + } + }, "AdminDisableUserRequest":{ "type":"structure", "required":[ @@ -1872,11 +2054,11 @@ }, "AuthFlow":{ "shape":"AuthFlowType", - "documentation":"

The authentication flow for this call to execute. The API action will depend on this value. For example:

Valid values include:

" + "documentation":"

The authentication flow for this call to execute. The API action will depend on this value. For example:

Valid values include:

" }, "AuthParameters":{ "shape":"AuthParametersType", - "documentation":"

The authentication parameters. These are inputs corresponding to the AuthFlow that you are invoking. The required values depend on the value of AuthFlow:

" + "documentation":"

The authentication parameters. These are inputs corresponding to the AuthFlow that you are invoking. The required values depend on the value of AuthFlow:

" }, "ClientMetadata":{ "shape":"ClientMetadataType", @@ -1907,6 +2089,33 @@ }, "documentation":"

Initiates the authentication response, as an administrator.

" }, + "AdminLinkProviderForUserRequest":{ + "type":"structure", + "required":[ + "UserPoolId", + "DestinationUser", + "SourceUser" + ], + "members":{ + "UserPoolId":{ + "shape":"StringType", + "documentation":"

The user pool ID for the user pool.

" + }, + "DestinationUser":{ + "shape":"ProviderUserIdentifierType", + "documentation":"

The existing user in the user pool to be linked to the external identity provider user account. Can be a native (Username + Password) Cognito User Pools user or a federated user (for example, a SAML or Facebook user). If the user doesn't exist, an exception is thrown. This is the user that is returned when the new user (with the linked identity provider attribute) signs in.

The ProviderAttributeValue for the DestinationUser must match the username for the user in the user pool. The ProviderAttributeName will always be ignored.

" + }, + "SourceUser":{ + "shape":"ProviderUserIdentifierType", + "documentation":"

An external identity provider account for a user who does not yet exist in the user pool. This user must be a federated user (for example, a SAML or Facebook user), not another native user.

If the SourceUser is a federated social identity provider user (Facebook, Google, or Login with Amazon), you must set the ProviderAttributeName to Cognito_Subject. For social identity providers, the ProviderName will be Facebook, Google, or LoginWithAmazon, and Cognito will automatically parse the Facebook, Google, and Login with Amazon tokens for id, sub, and user_id, respectively. The ProviderAttributeValue for the user must be the same value as the id, sub, or user_id value found in the social identity provider token.

For SAML, the ProviderAttributeName can be any value that matches a claim in the SAML assertion. If you wish to link SAML users based on the subject of the SAML assertion, you should map the subject to a claim through the SAML identity provider and submit that claim name as the ProviderAttributeName. If you set ProviderAttributeName to Cognito_Subject, Cognito will automatically parse the default unique identifier found in the subject from the SAML token.

" + } + } + }, + "AdminLinkProviderForUserResponse":{ + "type":"structure", + "members":{ + } + }, "AdminListDevicesRequest":{ "type":"structure", "required":[ @@ -2241,9 +2450,14 @@ "type":"list", "member":{"shape":"AttributeType"} }, + "AttributeMappingKeyType":{ + "type":"string", + "max":32, + "min":1 + }, "AttributeMappingType":{ "type":"map", - "key":{"shape":"CustomAttributeNameType"}, + "key":{"shape":"AttributeMappingKeyType"}, "value":{"shape":"StringType"} }, "AttributeNameListType":{ @@ -2322,6 +2536,8 @@ "documentation":"

The result type of the authentication result.

" }, "BooleanType":{"type":"boolean"}, + "CSSType":{"type":"string"}, + "CSSVersionType":{"type":"string"}, "CallbackURLsListType":{ "type":"list", "member":{"shape":"RedirectUrlType"}, @@ -2521,7 +2737,7 @@ "members":{ "ClientId":{ "shape":"ClientIdType", - "documentation":"

The ID of the client associated with the user pool.

" + "documentation":"

The app client ID of the app associated with the user pool.

" }, "SecretHash":{ "shape":"SecretHashType", @@ -2558,7 +2774,7 @@ "members":{ "ClientId":{ "shape":"ClientIdType", - "documentation":"

The ID of the client associated with the user pool.

" + "documentation":"

The ID of the app client associated with the user pool.

" }, "SecretHash":{ "shape":"SecretHashType", @@ -2643,7 +2859,7 @@ "documentation":"

The user pool ID.

" }, "ProviderName":{ - "shape":"ProviderNameType", + "shape":"ProviderNameTypeV1", "documentation":"

The identity provider name.

" }, "ProviderType":{ @@ -2674,6 +2890,42 @@ } } }, + "CreateResourceServerRequest":{ + "type":"structure", + "required":[ + "UserPoolId", + "Identifier", + "Name" + ], + "members":{ + "UserPoolId":{ + "shape":"UserPoolIdType", + "documentation":"

The user pool ID for the user pool.

" + }, + "Identifier":{ + "shape":"ResourceServerIdentifierType", + "documentation":"

A unique resource server identifier for the resource server. This could be an HTTPS endpoint where the resource server is located. For example, https://my-weather-api.example.com.

" + }, + "Name":{ + "shape":"ResourceServerNameType", + "documentation":"

A friendly name for the resource server.

" + }, + "Scopes":{ + "shape":"ResourceServerScopeListType", + "documentation":"

A list of scopes. Each scope is a map, where the keys are name and description.

" + } + } + }, + "CreateResourceServerResponse":{ + "type":"structure", + "required":["ResourceServer"], + "members":{ + "ResourceServer":{ + "shape":"ResourceServerType", + "documentation":"

The newly created resource server.
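A minimal AWS SDK for Java 2.x sketch of creating a resource server with one custom scope, reusing the example identifier above. The class names (CreateResourceServerRequest, ResourceServerScopeType) and varargs scopes setter are assumed from the generated-SDK conventions; the pool ID, name, and scope are placeholders.

```java
import software.amazon.awssdk.services.cognitoidentityprovider.CognitoIdentityProviderClient;
import software.amazon.awssdk.services.cognitoidentityprovider.model.CreateResourceServerRequest;
import software.amazon.awssdk.services.cognitoidentityprovider.model.ResourceServerScopeType;

public class CreateResourceServerSketch {
    public static void main(String[] args) {
        try (CognitoIdentityProviderClient cognito = CognitoIdentityProviderClient.create()) {
            cognito.createResourceServer(CreateResourceServerRequest.builder()
                    .userPoolId("us-east-1_EXAMPLE")                  // hypothetical user pool ID
                    .identifier("https://my-weather-api.example.com") // the identifier from the example above
                    .name("Weather API")
                    .scopes(ResourceServerScopeType.builder()         // up to 25 scopes per resource server
                            .scopeName("weather.read")
                            .scopeDescription("Read-only access to forecasts")
                            .build())
                    .build());
        }
    }
}
```

An app client that later requests this scope refers to it as identifier/scope name, for example https://my-weather-api.example.com/weather.read.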

" + } + } + }, "CreateUserImportJobRequest":{ "type":"structure", "required":[ @@ -2829,6 +3081,10 @@ "shape":"AliasAttributesListType", "documentation":"

Attributes supported as an alias for this user pool. Possible values: phone_number, email, or preferred_username.

" }, + "UsernameAttributes":{ + "shape":"UsernameAttributesListType", + "documentation":"

Specifies whether email addresses or phone numbers can be specified as usernames when a user signs up.

" + }, "SmsVerificationMessage":{ "shape":"SmsVerificationMessageType", "documentation":"

A string representing the SMS verification message.

" @@ -2841,6 +3097,10 @@ "shape":"EmailVerificationSubjectType", "documentation":"

A string representing the email verification subject.

" }, + "VerificationMessageTemplate":{ + "shape":"VerificationMessageTemplateType", + "documentation":"

The template for the verification message that the user sees when the app requests permission to access the user's information.
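To show how the new UsernameAttributes and VerificationMessageTemplate members combine on pool creation, here is a request-building sketch in AWS SDK for Java 2.x style; the class and enum names (CreateUserPoolRequest, UsernameAttributeType, VerificationMessageTemplateType, DefaultEmailOptionType) are assumed from this model, and the pool name and message text are placeholders.

```java
import software.amazon.awssdk.services.cognitoidentityprovider.model.CreateUserPoolRequest;
import software.amazon.awssdk.services.cognitoidentityprovider.model.DefaultEmailOptionType;
import software.amazon.awssdk.services.cognitoidentityprovider.model.UsernameAttributeType;
import software.amazon.awssdk.services.cognitoidentityprovider.model.VerificationMessageTemplateType;

public class CreateUserPoolSketch {
    public static CreateUserPoolRequest emailUserPoolRequest() {
        return CreateUserPoolRequest.builder()
                .poolName("example-pool")                         // PoolName is required by the full model
                .usernameAttributes(UsernameAttributeType.EMAIL)  // users sign up with an email address as the username
                .verificationMessageTemplate(VerificationMessageTemplateType.builder()
                        .defaultEmailOption(DefaultEmailOptionType.CONFIRM_WITH_CODE)
                        .emailSubject("Your verification code")
                        .emailMessage("Your verification code is {####}") // {####} placeholder required by the message pattern
                        .build())
                .build();
    }
}
```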

" + }, "SmsAuthenticationMessage":{ "shape":"SmsVerificationMessageType", "documentation":"

A string representing the SMS authentication message.

" @@ -2899,6 +3159,13 @@ "min":1 }, "DateType":{"type":"timestamp"}, + "DefaultEmailOptionType":{ + "type":"string", + "enum":[ + "CONFIRM_WITH_LINK", + "CONFIRM_WITH_CODE" + ] + }, "DeleteGroupRequest":{ "type":"structure", "required":[ @@ -2933,6 +3200,23 @@ } } }, + "DeleteResourceServerRequest":{ + "type":"structure", + "required":[ + "UserPoolId", + "Identifier" + ], + "members":{ + "UserPoolId":{ + "shape":"UserPoolIdType", + "documentation":"

The user pool ID for the user pool that hosts the resource server.

" + }, + "Identifier":{ + "shape":"ResourceServerIdentifierType", + "documentation":"

The identifier for the resource server.

" + } + } + }, "DeleteUserAttributesRequest":{ "type":"structure", "required":[ @@ -2970,7 +3254,7 @@ }, "ClientId":{ "shape":"ClientIdType", - "documentation":"

The ID of the client associated with the user pool.

" + "documentation":"

The app client ID of the app associated with the user pool.

" } }, "documentation":"

Represents the request to delete a user pool client.

" @@ -3057,6 +3341,33 @@ } } }, + "DescribeResourceServerRequest":{ + "type":"structure", + "required":[ + "UserPoolId", + "Identifier" + ], + "members":{ + "UserPoolId":{ + "shape":"UserPoolIdType", + "documentation":"

The user pool ID for the user pool that hosts the resource server.

" + }, + "Identifier":{ + "shape":"ResourceServerIdentifierType", + "documentation":"

The identifier for the resource server.

" + } + } + }, + "DescribeResourceServerResponse":{ + "type":"structure", + "required":["ResourceServer"], + "members":{ + "ResourceServer":{ + "shape":"ResourceServerType", + "documentation":"

The resource server.

" + } + } + }, "DescribeUserImportJobRequest":{ "type":"structure", "required":[ @@ -3098,7 +3409,7 @@ }, "ClientId":{ "shape":"ClientIdType", - "documentation":"

The ID of the client associated with the user pool.

" + "documentation":"

The app client ID of the app associated with the user pool.

" } }, "documentation":"

Represents the request to describe a user pool client.

" @@ -3273,13 +3584,15 @@ "CREATING", "DELETING", "UPDATING", - "ACTIVE" + "ACTIVE", + "FAILED" ] }, "DomainType":{ "type":"string", - "max":1024, - "min":1 + "max":63, + "min":1, + "pattern":"^[a-z0-9](?:[a-z0-9\\-]{0,61}[a-z0-9])?$" }, "DomainVersionType":{ "type":"string", @@ -3312,12 +3625,24 @@ }, "documentation":"

The email configuration type.

" }, + "EmailVerificationMessageByLinkType":{ + "type":"string", + "max":20000, + "min":6, + "pattern":"[\\p{L}\\p{M}\\p{S}\\p{N}\\p{P}\\s*]*\\{##[\\p{L}\\p{M}\\p{S}\\p{N}\\p{P}\\s*]*##\\}[\\p{L}\\p{M}\\p{S}\\p{N}\\p{P}\\s*]*" + }, "EmailVerificationMessageType":{ "type":"string", "max":20000, "min":6, "pattern":"[\\p{L}\\p{M}\\p{S}\\p{N}\\p{P}\\s*]*\\{####\\}[\\p{L}\\p{M}\\p{S}\\p{N}\\p{P}\\s*]*" }, + "EmailVerificationSubjectByLinkType":{ + "type":"string", + "max":140, + "min":1, + "pattern":"[\\p{L}\\p{M}\\p{S}\\p{N}\\p{P}\\s]+" + }, "EmailVerificationSubjectType":{ "type":"string", "max":140, @@ -3499,6 +3824,30 @@ } } }, + "GetUICustomizationRequest":{ + "type":"structure", + "required":["UserPoolId"], + "members":{ + "UserPoolId":{ + "shape":"UserPoolIdType", + "documentation":"

The user pool ID for the user pool.

" + }, + "ClientId":{ + "shape":"ClientIdType", + "documentation":"

The client ID for the client app.

" + } + } + }, + "GetUICustomizationResponse":{ + "type":"structure", + "required":["UICustomization"], + "members":{ + "UICustomization":{ + "shape":"UICustomizationType", + "documentation":"

The UI customization information.

" + } + } + }, "GetUserAttributeVerificationCodeRequest":{ "type":"structure", "required":[ @@ -3669,7 +4018,12 @@ }, "IdentityProviderTypeType":{ "type":"string", - "enum":["SAML"] + "enum":[ + "SAML", + "Facebook", + "Google", + "LoginWithAmazon" + ] }, "IdpIdentifierType":{ "type":"string", @@ -3683,6 +4037,8 @@ "max":50, "min":0 }, + "ImageFileType":{"type":"blob"}, + "ImageUrlType":{"type":"string"}, "InitiateAuthRequest":{ "type":"structure", "required":[ @@ -3692,11 +4048,11 @@ "members":{ "AuthFlow":{ "shape":"AuthFlowType", - "documentation":"

The authentication flow for this call to execute. The API action will depend on this value. For example:

Valid values include:

ADMIN_NO_SRP_AUTH is not a valid value.

" + "documentation":"

The authentication flow for this call to execute. The API action will depend on this value. For example:

Valid values include:

ADMIN_NO_SRP_AUTH is not a valid value.

" }, "AuthParameters":{ "shape":"AuthParametersType", - "documentation":"

The authentication parameters. These are inputs corresponding to the AuthFlow that you are invoking. The required values depend on the value of AuthFlow:

" + "documentation":"

The authentication parameters. These are inputs corresponding to the AuthFlow that you are invoking. The required values depend on the value of AuthFlow:

" }, "ClientMetadata":{ "shape":"ClientMetadataType", @@ -3983,6 +4339,43 @@ "max":60, "min":1 }, + "ListResourceServersLimitType":{ + "type":"integer", + "max":50, + "min":1 + }, + "ListResourceServersRequest":{ + "type":"structure", + "required":["UserPoolId"], + "members":{ + "UserPoolId":{ + "shape":"UserPoolIdType", + "documentation":"

The user pool ID for the user pool.

" + }, + "MaxResults":{ + "shape":"ListResourceServersLimitType", + "documentation":"

The maximum number of resource servers to return.

" + }, + "NextToken":{ + "shape":"PaginationKeyType", + "documentation":"

A pagination token.

" + } + } + }, + "ListResourceServersResponse":{ + "type":"structure", + "required":["ResourceServers"], + "members":{ + "ResourceServers":{ + "shape":"ResourceServersListType", + "documentation":"

The resource servers.

" + }, + "NextToken":{ + "shape":"PaginationKeyType", + "documentation":"

A pagination token.

" + } + } + }, "ListUserImportJobsRequest":{ "type":"structure", "required":[ @@ -4392,6 +4785,30 @@ "min":1, "pattern":"[\\p{L}\\p{M}\\p{S}\\p{N}\\p{P}]+" }, + "ProviderNameTypeV1":{ + "type":"string", + "max":32, + "min":1, + "pattern":"[^_][\\p{L}\\p{M}\\p{S}\\p{N}\\p{P}][^_]+" + }, + "ProviderUserIdentifierType":{ + "type":"structure", + "members":{ + "ProviderName":{ + "shape":"ProviderNameType", + "documentation":"

The name of the provider, for example, Facebook, Google, or Login with Amazon.

" + }, + "ProviderAttributeName":{ + "shape":"StringType", + "documentation":"

The name of the provider attribute to link to, for example, NameID.

" + }, + "ProviderAttributeValue":{ + "shape":"StringType", + "documentation":"

The value of the provider attribute to link to, for example, xxxxx_account.

" + } + }, + "documentation":"

A container for information about an identity provider for a user pool.

" + }, "ProvidersListType":{ "type":"list", "member":{"shape":"ProviderDescription"}, @@ -4462,6 +4879,78 @@ "documentation":"

This exception is thrown when the Amazon Cognito service cannot find the requested resource.

", "exception":true }, + "ResourceServerIdentifierType":{ + "type":"string", + "max":256, + "min":1, + "pattern":"[\\x21\\x23-\\x5B\\x5D-\\x7E]+" + }, + "ResourceServerNameType":{ + "type":"string", + "max":256, + "min":1, + "pattern":"[\\w\\s+=,.@-]+" + }, + "ResourceServerScopeDescriptionType":{ + "type":"string", + "max":256, + "min":1 + }, + "ResourceServerScopeListType":{ + "type":"list", + "member":{"shape":"ResourceServerScopeType"}, + "max":25 + }, + "ResourceServerScopeNameType":{ + "type":"string", + "max":256, + "min":1, + "pattern":"[\\x21\\x23-\\x2E\\x30-\\x5B\\x5D-\\x7E]+" + }, + "ResourceServerScopeType":{ + "type":"structure", + "required":[ + "ScopeName", + "ScopeDescription" + ], + "members":{ + "ScopeName":{ + "shape":"ResourceServerScopeNameType", + "documentation":"

The name of the scope.

" + }, + "ScopeDescription":{ + "shape":"ResourceServerScopeDescriptionType", + "documentation":"

A description of the scope.

" + } + }, + "documentation":"

A resource server scope.

" + }, + "ResourceServerType":{ + "type":"structure", + "members":{ + "UserPoolId":{ + "shape":"UserPoolIdType", + "documentation":"

The user pool ID for the user pool that hosts the resource server.

" + }, + "Identifier":{ + "shape":"ResourceServerIdentifierType", + "documentation":"

The identifier for the resource server.

" + }, + "Name":{ + "shape":"ResourceServerNameType", + "documentation":"

The name of the resource server.

" + }, + "Scopes":{ + "shape":"ResourceServerScopeListType", + "documentation":"

A list of scopes that are defined for the resource server.

" + } + }, + "documentation":"

A container for information about a resource server for a user pool.

" + }, + "ResourceServersListType":{ + "type":"list", + "member":{"shape":"ResourceServerType"} + }, "RespondToAuthChallengeRequest":{ "type":"structure", "required":[ @@ -4569,7 +5058,8 @@ }, "ScopeListType":{ "type":"list", - "member":{"shape":"ScopeType"} + "member":{"shape":"ScopeType"}, + "max":25 }, "ScopeType":{ "type":"string", @@ -4598,6 +5088,38 @@ "max":2048, "min":20 }, + "SetUICustomizationRequest":{ + "type":"structure", + "required":["UserPoolId"], + "members":{ + "UserPoolId":{ + "shape":"UserPoolIdType", + "documentation":"

The user pool ID for the user pool.

" + }, + "ClientId":{ + "shape":"ClientIdType", + "documentation":"

The client ID for the client app.

" + }, + "CSS":{ + "shape":"CSSType", + "documentation":"

The CSS values in the UI customization.

" + }, + "ImageFile":{ + "shape":"ImageFileType", + "documentation":"

The uploaded logo image for the UI customization.

" + } + } + }, + "SetUICustomizationResponse":{ + "type":"structure", + "required":["UICustomization"], + "members":{ + "UICustomization":{ + "shape":"UICustomizationType", + "documentation":"

The UI customization information.

" + } + } + }, "SetUserSettingsRequest":{ "type":"structure", "required":[ @@ -4809,6 +5331,40 @@ "documentation":"

This exception is thrown when the user has made too many requests for a given operation.

", "exception":true }, + "UICustomizationType":{ + "type":"structure", + "members":{ + "UserPoolId":{ + "shape":"UserPoolIdType", + "documentation":"

The user pool ID for the user pool.

" + }, + "ClientId":{ + "shape":"ClientIdType", + "documentation":"

The client ID for the client app.

" + }, + "ImageUrl":{ + "shape":"ImageUrlType", + "documentation":"

The logo image for the UI customization.

" + }, + "CSS":{ + "shape":"CSSType", + "documentation":"

The CSS values in the UI customization.

" + }, + "CSSVersion":{ + "shape":"CSSVersionType", + "documentation":"

The CSS version number.

" + }, + "LastModifiedDate":{ + "shape":"DateType", + "documentation":"

The last-modified date for the UI customization.

" + }, + "CreationDate":{ + "shape":"DateType", + "documentation":"

The creation date for the UI customization.

" + } + }, + "documentation":"

A container for the UI customization information for a user pool's built-in app UI.

" + }, "UnexpectedLambdaException":{ "type":"structure", "members":{ @@ -4944,6 +5500,42 @@ } } }, + "UpdateResourceServerRequest":{ + "type":"structure", + "required":[ + "UserPoolId", + "Identifier", + "Name" + ], + "members":{ + "UserPoolId":{ + "shape":"UserPoolIdType", + "documentation":"

The user pool ID for the user pool.

" + }, + "Identifier":{ + "shape":"ResourceServerIdentifierType", + "documentation":"

The identifier for the resource server.

" + }, + "Name":{ + "shape":"ResourceServerNameType", + "documentation":"

The name of the resource server.

" + }, + "Scopes":{ + "shape":"ResourceServerScopeListType", + "documentation":"

The scope values to be set for the resource server.

" + } + } + }, + "UpdateResourceServerResponse":{ + "type":"structure", + "required":["ResourceServer"], + "members":{ + "ResourceServer":{ + "shape":"ResourceServerType", + "documentation":"

The resource server.

" + } + } + }, "UpdateUserAttributesRequest":{ "type":"structure", "required":[ @@ -5017,7 +5609,7 @@ }, "LogoutURLs":{ "shape":"LogoutURLsListType", - "documentation":"

A list ofallowed logout URLs for the identity providers.

" + "documentation":"

A list of allowed logout URLs for the identity providers.

" }, "DefaultRedirectURI":{ "shape":"RedirectUrlType", @@ -5080,6 +5672,10 @@ "shape":"EmailVerificationSubjectType", "documentation":"

The subject of the email verification message.

" }, + "VerificationMessageTemplate":{ + "shape":"VerificationMessageTemplateType", + "documentation":"

The template for verification messages.

" + }, "SmsAuthenticationMessage":{ "shape":"SmsVerificationMessageType", "documentation":"

The contents of the SMS authentication message.

" @@ -5329,7 +5925,7 @@ }, "LogoutURLs":{ "shape":"LogoutURLsListType", - "documentation":"

A list ofallowed logout URLs for the identity providers.

" + "documentation":"

A list of allowed logout URLs for the identity providers.

" }, "DefaultRedirectURI":{ "shape":"RedirectUrlType", @@ -5349,7 +5945,7 @@ "box":true } }, - "documentation":"

A user pool of the client type.

" + "documentation":"

Contains information about a user pool client.

" }, "UserPoolDescriptionType":{ "type":"structure", @@ -5471,6 +6067,10 @@ "shape":"AliasAttributesListType", "documentation":"

Specifies the attributes that are aliased in a user pool.

" }, + "UsernameAttributes":{ + "shape":"UsernameAttributesListType", + "documentation":"

Specifies whether email addresses or phone numbers can be specified as usernames when a user signs up.

" + }, "SmsVerificationMessage":{ "shape":"SmsVerificationMessageType", "documentation":"

The contents of the SMS verification message.

" @@ -5483,6 +6083,10 @@ "shape":"EmailVerificationSubjectType", "documentation":"

The subject of the email verification message.

" }, + "VerificationMessageTemplate":{ + "shape":"VerificationMessageTemplateType", + "documentation":"

The template for verification messages.

" + }, "SmsAuthenticationMessage":{ "shape":"SmsVerificationMessageType", "documentation":"

The contents of the SMS authentication message.

" @@ -5572,6 +6176,17 @@ }, "documentation":"

The user type.

" }, + "UsernameAttributeType":{ + "type":"string", + "enum":[ + "phone_number", + "email" + ] + }, + "UsernameAttributesListType":{ + "type":"list", + "member":{"shape":"UsernameAttributeType"} + }, "UsernameExistsException":{ "type":"structure", "members":{ @@ -5594,6 +6209,36 @@ "type":"list", "member":{"shape":"UserType"} }, + "VerificationMessageTemplateType":{ + "type":"structure", + "members":{ + "SmsMessage":{ + "shape":"SmsVerificationMessageType", + "documentation":"

The SMS message template.

" + }, + "EmailMessage":{ + "shape":"EmailVerificationMessageType", + "documentation":"

The email message template.

" + }, + "EmailSubject":{ + "shape":"EmailVerificationSubjectType", + "documentation":"

The subject line for the email message template.

" + }, + "EmailMessageByLink":{ + "shape":"EmailVerificationMessageByLinkType", + "documentation":"

The email message template for sending a confirmation link to the user.

" + }, + "EmailSubjectByLink":{ + "shape":"EmailVerificationSubjectByLinkType", + "documentation":"

The subject line for the email message template for sending a confirmation link to the user.

" + }, + "DefaultEmailOption":{ + "shape":"DefaultEmailOptionType", + "documentation":"

The default email option.

" + } + }, + "documentation":"

The template for verification messages.
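For the link-based confirmation flow, a small sketch of this shape under the same assumed class names as the earlier Cognito examples; the subject and message text are placeholders, and the reading of the {##...##} marker as the link text is an assumption based on the EmailVerificationMessageByLinkType pattern.

```java
import software.amazon.awssdk.services.cognitoidentityprovider.model.DefaultEmailOptionType;
import software.amazon.awssdk.services.cognitoidentityprovider.model.VerificationMessageTemplateType;

public class LinkVerificationTemplateSketch {
    public static VerificationMessageTemplateType linkTemplate() {
        return VerificationMessageTemplateType.builder()
                .defaultEmailOption(DefaultEmailOptionType.CONFIRM_WITH_LINK)
                .emailSubjectByLink("Verify your email address")
                // The {##...##} placeholder is required by the ByLink message pattern; the enclosed
                // text presumably becomes the clickable verification link.
                .emailMessageByLink("Please {##click here##} to verify your email address.")
                .build();
    }
}
```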

" + }, "VerifiedAttributeType":{ "type":"string", "enum":[ diff --git a/services/config/src/main/resources/codegen-resources/service-2.json b/services/config/src/main/resources/codegen-resources/service-2.json index 3258b4071c55..e951debb4228 100644 --- a/services/config/src/main/resources/codegen-resources/service-2.json +++ b/services/config/src/main/resources/codegen-resources/service-2.json @@ -239,6 +239,21 @@ ], "documentation":"

Returns the number of resources that are compliant and the number that are noncompliant. You can specify one or more resource types to get these numbers for each resource type. The maximum number returned is 100.

" }, + "GetDiscoveredResourceCounts":{ + "name":"GetDiscoveredResourceCounts", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"GetDiscoveredResourceCountsRequest"}, + "output":{"shape":"GetDiscoveredResourceCountsResponse"}, + "errors":[ + {"shape":"ValidationException"}, + {"shape":"InvalidLimitException"}, + {"shape":"InvalidNextTokenException"} + ], + "documentation":"

Returns the resource types, the number of each resource type, and the total number of resources that AWS Config is recording in this region for your AWS account.

Example

  1. AWS Config is recording three resource types in the US East (Ohio) Region for your account: 25 EC2 instances, 20 IAM users, and 15 S3 buckets.

  2. You make a call to the GetDiscoveredResourceCounts action and specify that you want all resource types.

  3. AWS Config returns the following:

The response is paginated. By default, AWS Config lists 100 ResourceCount objects on each page. You can customize this number with the limit parameter. The response includes a nextToken string. To get the next page of results, run the request again and specify the string for the nextToken parameter.

If you make a call to the GetDiscoveredResourceCounts action, you may not immediately receive resource counts in the following situations:

It may take a few minutes for AWS Config to record and count your resources. Wait a few minutes and then retry the GetDiscoveredResourceCounts action.
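A paginated call to this operation might look like the following AWS SDK for Java 2.x sketch; the ConfigClient and model class names (including the Response suffix) are assumed from the generated-SDK conventions, and the resource types shown are just examples.

```java
import software.amazon.awssdk.services.config.ConfigClient;
import software.amazon.awssdk.services.config.model.GetDiscoveredResourceCountsRequest;
import software.amazon.awssdk.services.config.model.GetDiscoveredResourceCountsResponse;
import software.amazon.awssdk.services.config.model.ResourceCount;

public class DiscoveredResourceCountsSketch {
    public static void main(String[] args) {
        try (ConfigClient config = ConfigClient.create()) {
            String nextToken = null;
            do {
                GetDiscoveredResourceCountsResponse page = config.getDiscoveredResourceCounts(
                        GetDiscoveredResourceCountsRequest.builder()
                                .resourceTypes("AWS::EC2::Instance", "AWS::S3::Bucket") // omit to count every recorded type
                                .limit(100)                                             // page size; 100 is the default and the maximum
                                .nextToken(nextToken)
                                .build());
                System.out.println("Total discovered: " + page.totalDiscoveredResources());
                for (ResourceCount count : page.resourceCounts()) {
                    System.out.println(count.resourceType() + " = " + count.count());
                }
                nextToken = page.nextToken();
            } while (nextToken != null);
        }
    }
}
```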

" + }, "GetResourceConfigHistory":{ "name":"GetResourceConfigHistory", "http":{ @@ -255,7 +270,7 @@ {"shape":"NoAvailableConfigurationRecorderException"}, {"shape":"ResourceNotDiscoveredException"} ], - "documentation":"

Returns a list of configuration items for the specified resource. The list contains details about each state of the resource during the specified time interval.

The response is paginated, and by default, AWS Config returns a limit of 10 configuration items per page. You can customize this number with the limit parameter. The response includes a nextToken string, and to get the next page of results, run the request again and enter this string for the nextToken parameter.

Each call to the API is limited to span a duration of seven days. It is likely that the number of records returned is smaller than the specified limit. In such cases, you can make another call, using the nextToken.

" + "documentation":"

Returns a list of configuration items for the specified resource. The list contains details about each state of the resource during the specified time interval.

The response is paginated. By default, AWS Config returns a limit of 10 configuration items per page. You can customize this number with the limit parameter. The response includes a nextToken string. To get the next page of results, run the request again and specify the string for the nextToken parameter.

Each call to the API is limited to span a duration of seven days. It is likely that the number of records returned is smaller than the specified limit. In such cases, you can make another call, using the nextToken.

" }, "ListDiscoveredResources":{ "name":"ListDiscoveredResources", @@ -271,7 +286,7 @@ {"shape":"InvalidNextTokenException"}, {"shape":"NoAvailableConfigurationRecorderException"} ], - "documentation":"

Accepts a resource type and returns a list of resource identifiers for the resources of that type. A resource identifier includes the resource type, ID, and (if available) the custom resource name. The results consist of resources that AWS Config has discovered, including those that AWS Config is not currently recording. You can narrow the results to include only resources that have specific resource IDs or a resource name.

You can specify either resource IDs or a resource name but not both in the same request.

The response is paginated, and by default AWS Config lists 100 resource identifiers on each page. You can customize this number with the limit parameter. The response includes a nextToken string, and to get the next page of results, run the request again and enter this string for the nextToken parameter.

" + "documentation":"

Accepts a resource type and returns a list of resource identifiers for the resources of that type. A resource identifier includes the resource type, ID, and (if available) the custom resource name. The results consist of resources that AWS Config has discovered, including those that AWS Config is not currently recording. You can narrow the results to include only resources that have specific resource IDs or a resource name.

You can specify either resource IDs or a resource name but not both in the same request.

The response is paginated. By default, AWS Config lists 100 resource identifiers on each page. You can customize this number with the limit parameter. The response includes a nextToken string. To get the next page of results, run the request again and specify the string for the nextToken parameter.

" }, "PutConfigRule":{ "name":"PutConfigRule", @@ -385,6 +400,11 @@ "AllSupported":{"type":"boolean"}, "AvailabilityZone":{"type":"string"}, "AwsRegion":{"type":"string"}, + "BaseResourceId":{ + "type":"string", + "max":768, + "min":1 + }, "Boolean":{"type":"boolean"}, "ChannelName":{ "type":"string", @@ -438,7 +458,7 @@ "documentation":"

The type of the AWS resource that was evaluated.

" }, "ResourceId":{ - "shape":"StringWithCharLimit256", + "shape":"BaseResourceId", "documentation":"

The ID of the AWS resource that was evaluated.

" }, "Compliance":{ @@ -551,7 +571,7 @@ "documentation":"

The time that the next delivery occurs.

" } }, - "documentation":"

A list that contains the status of the delivery of either the snapshot or the configuration history to the specified Amazon S3 bucket.

" + "documentation":"

Provides status of the delivery of the snapshot or the configuration history to the specified Amazon S3 bucket. Also provides the status of notifications about the Amazon S3 delivery to the specified Amazon SNS topic.

" }, "ConfigRule":{ "type":"structure", @@ -1044,7 +1064,7 @@ "documentation":"

The types of AWS resources for which you want compliance information; for example, AWS::EC2::Instance. For this action, you can specify that the resource type is an AWS account by specifying AWS::::Account.

" }, "ResourceId":{ - "shape":"StringWithCharLimit256", + "shape":"BaseResourceId", "documentation":"

The ID of the AWS resource for which you want compliance information. You can specify only one resource ID. If you specify a resource ID, you must also specify a type for ResourceType.

" }, "ComplianceTypes":{ @@ -1236,7 +1256,7 @@ "documentation":"

The type of AWS resource that was evaluated.

" }, "ComplianceResourceId":{ - "shape":"StringWithCharLimit256", + "shape":"BaseResourceId", "documentation":"

The ID of the AWS resource that was evaluated.

" }, "ComplianceType":{ @@ -1310,7 +1330,7 @@ "documentation":"

The type of AWS resource that was evaluated.

" }, "ResourceId":{ - "shape":"StringWithCharLimit256", + "shape":"BaseResourceId", "documentation":"

The ID of the evaluated AWS resource.

" } }, @@ -1379,7 +1399,7 @@ "documentation":"

The type of the AWS resource for which you want compliance information.

" }, "ResourceId":{ - "shape":"StringWithCharLimit256", + "shape":"BaseResourceId", "documentation":"

The ID of the AWS resource for which you want compliance information.

" }, "ComplianceTypes":{ @@ -1437,6 +1457,40 @@ }, "documentation":"

" }, + "GetDiscoveredResourceCountsRequest":{ + "type":"structure", + "members":{ + "resourceTypes":{ + "shape":"ResourceTypes", + "documentation":"

The comma-separated list that specifies the resource types that you want AWS Config to return. For example, (\"AWS::EC2::Instance\", \"AWS::IAM::User\").

If a value for resourceTypes is not specified, AWS Config returns all resource types that AWS Config is recording in the region for your account.

If the configuration recorder is turned off, AWS Config returns an empty list of ResourceCount objects. If the configuration recorder is not recording a specific resource type (for example, S3 buckets), that resource type is not returned in the list of ResourceCount objects.

" + }, + "limit":{ + "shape":"Limit", + "documentation":"

The maximum number of ResourceCount objects returned on each page. The default is 100. You cannot specify a limit greater than 100. If you specify 0, AWS Config uses the default.

" + }, + "nextToken":{ + "shape":"NextToken", + "documentation":"

The nextToken string returned on a previous page that you use to get the next page of results in a paginated response.

" + } + } + }, + "GetDiscoveredResourceCountsResponse":{ + "type":"structure", + "members":{ + "totalDiscoveredResources":{ + "shape":"Long", + "documentation":"

The total number of resources that AWS Config is recording in the region for your account. If you specify resource types in the request, AWS Config returns only the total number of resources for those resource types.

Example

  1. AWS Config is recording three resource types in the US East (Ohio) Region for your account: 25 EC2 instances, 20 IAM users, and 15 S3 buckets, for a total of 60 resources.

  2. You make a call to the GetDiscoveredResourceCounts action and specify the resource type, \"AWS::EC2::Instance\", in the request.

  3. AWS Config returns 25 for totalDiscoveredResources.

" + }, + "resourceCounts":{ + "shape":"ResourceCounts", + "documentation":"

The list of ResourceCount objects. Each object is listed in descending order by the number of resources.

" + }, + "nextToken":{ + "shape":"NextToken", + "documentation":"

The string that you use in a subsequent request to get the next page of results in a paginated response.

" + } + } + }, "GetResourceConfigHistoryRequest":{ "type":"structure", "required":[ @@ -1647,6 +1701,7 @@ }, "documentation":"

" }, + "Long":{"type":"long"}, "MaxNumberOfConfigRulesExceededException":{ "type":"structure", "members":{ @@ -1876,6 +1931,24 @@ "member":{"shape":"Relationship"} }, "RelationshipName":{"type":"string"}, + "ResourceCount":{ + "type":"structure", + "members":{ + "resourceType":{ + "shape":"ResourceType", + "documentation":"

The resource type, for example \"AWS::EC2::Instance\".

" + }, + "count":{ + "shape":"Long", + "documentation":"

The number of resources.

" + } + }, + "documentation":"

An object that contains the resource type and the number of resources.

" + }, + "ResourceCounts":{ + "type":"list", + "member":{"shape":"ResourceCount"} + }, "ResourceCreationTime":{"type":"timestamp"}, "ResourceDeletionTime":{"type":"timestamp"}, "ResourceId":{"type":"string"}, @@ -1961,7 +2034,14 @@ "AWS::Redshift::ClusterSecurityGroup", "AWS::Redshift::ClusterSubnetGroup", "AWS::Redshift::EventSubscription", - "AWS::CloudWatch::Alarm" + "AWS::CloudWatch::Alarm", + "AWS::CloudFormation::Stack", + "AWS::DynamoDB::Table", + "AWS::AutoScaling::AutoScalingGroup", + "AWS::AutoScaling::LaunchConfiguration", + "AWS::AutoScaling::ScalingPolicy", + "AWS::AutoScaling::ScheduledAction", + "AWS::CodeBuild::Project" ] }, "ResourceTypeList":{ @@ -1988,14 +2068,14 @@ }, "TagKey":{ "shape":"StringWithCharLimit128", - "documentation":"

The tag key that is applied to only those AWS resources that you want you want to trigger an evaluation for the rule.

" + "documentation":"

The tag key that is applied to only those AWS resources that you want to trigger an evaluation for the rule.

" }, "TagValue":{ "shape":"StringWithCharLimit256", "documentation":"

The tag value applied to only those AWS resources that you want to trigger an evaluation for the rule. If you specify a value for TagValue, you must also specify a value for TagKey.

" }, "ComplianceResourceId":{ - "shape":"StringWithCharLimit256", + "shape":"BaseResourceId", "documentation":"

The IDs of the only AWS resource that you want to trigger an evaluation for the rule. If you specify a resource ID, you must specify one resource type for ComplianceResourceTypes.

" } }, @@ -2014,7 +2094,7 @@ }, "SourceIdentifier":{ "shape":"StringWithCharLimit256", - "documentation":"

For AWS Config managed rules, a predefined identifier from a list. For example, IAM_PASSWORD_POLICY is a managed rule. To reference a managed rule, see Using AWS Managed Config Rules.

For custom rules, the identifier is the Amazon Resource Name (ARN) of the rule's AWS Lambda function, such as arn:aws:lambda:us-east-1:123456789012:function:custom_rule_name.

" + "documentation":"

For AWS Config managed rules, a predefined identifier from a list. For example, IAM_PASSWORD_POLICY is a managed rule. To reference a managed rule, see Using AWS Managed Config Rules.

For custom rules, the identifier is the Amazon Resource Name (ARN) of the rule's AWS Lambda function, such as arn:aws:lambda:us-east-2:123456789012:function:custom_rule_name.

" }, "SourceDetails":{ "shape":"SourceDetails", diff --git a/services/devicefarm/src/main/resources/codegen-resources/service-2.json b/services/devicefarm/src/main/resources/codegen-resources/service-2.json index 3ff50b8e0774..7a7f13186148 100644 --- a/services/devicefarm/src/main/resources/codegen-resources/service-2.json +++ b/services/devicefarm/src/main/resources/codegen-resources/service-2.json @@ -852,6 +852,10 @@ "type":"list", "member":{"shape":"AmazonResourceName"} }, + "AndroidPaths":{ + "type":"list", + "member":{"shape":"String"} + }, "AppPackagesCleanup":{"type":"boolean"}, "ArgumentException":{ "type":"structure", @@ -925,7 +929,9 @@ "EXPLORER_SUMMARY_LOG", "APPLICATION_CRASH_REPORT", "XCTEST_LOG", - "VIDEO" + "VIDEO", + "CUSTOMER_ARTIFACT", + "CUSTOMER_ARTIFACT_LOG" ] }, "Artifacts":{ @@ -958,6 +964,11 @@ }, "documentation":"

Represents the amount of CPU that an app is using on a physical device.

Note that this does not represent system-wide CPU usage.

" }, + "ClientId":{ + "type":"string", + "max":64, + "min":0 + }, "ContentType":{ "type":"string", "max":64, @@ -1150,10 +1161,22 @@ "shape":"AmazonResourceName", "documentation":"

The Amazon Resource Name (ARN) of the device for which you want to create a remote access session.

" }, + "sshPublicKey":{ + "shape":"SshPublicKey", + "documentation":"

The public key of the SSH key pair that you want to use for connecting to remote devices in your remote debugging session. This is required only if remoteDebugEnabled is set to true.

" + }, + "remoteDebugEnabled":{ + "shape":"Boolean", + "documentation":"

Set to true if you want to access devices remotely for debugging in your remote access session.

" + }, "name":{ "shape":"Name", "documentation":"

The name of the remote access session that you wish to create.

" }, + "clientId":{ + "shape":"ClientId", + "documentation":"

Unique identifier for the client. If you want access to multiple devices on the same client, you should pass the same clientId value in each call to CreateRemoteAccessSession. This is required only if remoteDebugEnabled is set to true.

" + }, "configuration":{ "shape":"CreateRemoteAccessSessionConfiguration", "documentation":"

The configuration information for the remote access session request.
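The following AWS SDK for Java 2.x sketch shows the new remote-debugging members in context; the DeviceFarmClient and model class names, the projectArn member, and the remoteAccessSession accessor on the response are assumed from the generated-SDK conventions and the full Device Farm API, and all ARNs, names, and key material are placeholders.

```java
import software.amazon.awssdk.services.devicefarm.DeviceFarmClient;
import software.amazon.awssdk.services.devicefarm.model.CreateRemoteAccessSessionRequest;
import software.amazon.awssdk.services.devicefarm.model.RemoteAccessSession;

public class RemoteDebugSessionSketch {
    public static void main(String[] args) {
        String sshPublicKey = "ssh-rsa AAAA...";                  // hypothetical public key material
        try (DeviceFarmClient deviceFarm = DeviceFarmClient.create()) {
            RemoteAccessSession session = deviceFarm.createRemoteAccessSession(
                    CreateRemoteAccessSessionRequest.builder()
                            .projectArn("arn:aws:devicefarm:us-west-2:123456789012:project:EXAMPLE") // hypothetical ARNs
                            .deviceArn("arn:aws:devicefarm:us-west-2::device:EXAMPLE")
                            .name("remote-debug-session")
                            .remoteDebugEnabled(true)
                            .sshPublicKey(sshPublicKey)           // required only when remoteDebugEnabled is true
                            .clientId("my-workstation")           // reuse the same value to reach several devices from one client
                            .build())
                    .remoteAccessSession();
            // hostAddress and deviceUdid are populated only when remote debugging is enabled.
            System.out.println(session.hostAddress() + " / " + session.deviceUdid());
        }
    }
}
```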

" @@ -1212,6 +1235,24 @@ "type":"string", "enum":["USD"] }, + "CustomerArtifactPaths":{ + "type":"structure", + "members":{ + "iosPaths":{ + "shape":"IosPaths", + "documentation":"

Comma-separated list of paths on the iOS device where the artifacts generated by the customer's tests will be pulled from.

" + }, + "androidPaths":{ + "shape":"AndroidPaths", + "documentation":"

Comma-separated list of paths on the Android device where the artifacts generated by the customer's tests will be pulled from.

" + }, + "deviceHostPaths":{ + "shape":"DeviceHostPaths", + "documentation":"

Comma-separated list of paths in the test execution environment where the artifacts generated by the customer's tests will be pulled from.

" + } + }, + "documentation":"

A JSON object specifying the paths where the artifacts generated by the customer's tests, on the device or in the test environment, will be pulled from.

Specify deviceHostPaths and optionally specify either iosPaths or androidPaths.

For web app tests, you can specify both iosPaths and androidPaths.
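A sketch of how this shape plugs into a scheduled run configuration, under the same naming assumptions as the other Device Farm example; the paths are placeholders.

```java
import software.amazon.awssdk.services.devicefarm.model.CustomerArtifactPaths;
import software.amazon.awssdk.services.devicefarm.model.ScheduleRunConfiguration;

public class CustomerArtifactPathsSketch {
    public static ScheduleRunConfiguration configurationWithArtifactPaths() {
        CustomerArtifactPaths paths = CustomerArtifactPaths.builder()
                .deviceHostPaths("/tmp/test-output")        // pulled from the test host environment
                .androidPaths("/sdcard/test-screenshots")   // pulled from the Android device (use iosPaths for iOS devices)
                .build();
        return ScheduleRunConfiguration.builder()
                .customerArtifactPaths(paths)               // new input member for the scheduled run configuration
                .build();
    }
}
```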

" + }, "DateTime":{"type":"timestamp"}, "DeleteDevicePoolRequest":{ "type":"structure", @@ -1376,6 +1417,10 @@ "shape":"Boolean", "documentation":"

Specifies whether remote access has been enabled for the specified device.

" }, + "remoteDebugEnabled":{ + "shape":"Boolean", + "documentation":"

This flag is set to true if remote debugging is enabled for the device.

" + }, "fleetType":{ "shape":"String", "documentation":"

The type of fleet to which this device belongs. Possible values for fleet type are PRIVATE and PUBLIC.

" @@ -1395,6 +1440,7 @@ "FORM_FACTOR", "MANUFACTURER", "REMOTE_ACCESS_ENABLED", + "REMOTE_DEBUG_ENABLED", "APPIUM_VERSION" ] }, @@ -1405,6 +1451,10 @@ "TABLET" ] }, + "DeviceHostPaths":{ + "type":"list", + "member":{"shape":"String"} + }, "DeviceMinutes":{ "type":"structure", "members":{ @@ -1524,6 +1574,10 @@ "STOPPED" ] }, + "ExecutionResultCode":{ + "type":"string", + "enum":["PARSING_FAILED"] + }, "ExecutionStatus":{ "type":"string", "enum":[ @@ -1832,6 +1886,10 @@ }, "documentation":"

Represents the result of a get upload request.

" }, + "HostAddress":{ + "type":"string", + "max":1024 + }, "IdempotencyException":{ "type":"structure", "members":{ @@ -1890,6 +1948,10 @@ "documentation":"

Represents the response from the server after AWS Device Farm makes a request to install to a remote access session.

" }, "Integer":{"type":"integer"}, + "IosPaths":{ + "type":"list", + "member":{"shape":"String"} + }, "Job":{ "type":"structure", "members":{ @@ -2887,6 +2949,18 @@ "shape":"Device", "documentation":"

The device (phone or tablet) used in the remote access session.

" }, + "remoteDebugEnabled":{ + "shape":"Boolean", + "documentation":"

This flag is set to true if remote debugging is enabled for the remote access session.

" + }, + "hostAddress":{ + "shape":"HostAddress", + "documentation":"

IP address of the EC2 host where you need to connect to remotely debug devices. Only returned if remote debugging is enabled for the remote access session.

" + }, + "clientId":{ + "shape":"ClientId", + "documentation":"

Unique identifier of your client for the remote access session. Only returned if remote debugging is enabled for the remote access session.

" + }, "billingMethod":{ "shape":"BillingMethod", "documentation":"

The billing method of the remote access session. Possible values include METERED or UNMETERED. For more information about metered devices, see AWS Device Farm terminology.

" @@ -2898,6 +2972,10 @@ "endpoint":{ "shape":"String", "documentation":"

The endpoint for the remote access session.

" + }, + "deviceUdid":{ + "shape":"String", + "documentation":"

Unique device identifier for the remote device. Only returned if remote debugging is enabled for the remote access session.

" } }, "documentation":"

Represents information about the remote access session.

" @@ -3043,9 +3121,21 @@ "networkProfile":{ "shape":"NetworkProfile", "documentation":"

The network profile being used for a test run.

" + }, + "parsingResultUrl":{ + "shape":"String", + "documentation":"

Read-only URL for an object in an S3 bucket where you can get the parsing results of the test package. If the test package doesn't parse, the reason why it doesn't parse appears in the file that this URL points to.

" + }, + "resultCode":{ + "shape":"ExecutionResultCode", + "documentation":"

Supporting field for the result field. Set only if result is SKIPPED. PARSING_FAILED if the result is skipped because of a test package parsing failure.

" + }, + "customerArtifactPaths":{ + "shape":"CustomerArtifactPaths", + "documentation":"

Output CustomerArtifactPaths object for the test run.

" } }, - "documentation":"

Represents an app on a set of devices with a specific test and configuration.

" + "documentation":"

Represents a test run on a set of devices with a given app package, test parameters, etc.

" }, "Runs":{ "type":"list", @@ -3114,6 +3204,10 @@ "shape":"Location", "documentation":"

Information about the location that is used for the run.

" }, + "customerArtifactPaths":{ + "shape":"CustomerArtifactPaths", + "documentation":"

Input CustomerArtifactPaths object for the scheduled run configuration.

" + }, "radios":{ "shape":"Radios", "documentation":"

Information about the radio states for the run.

" @@ -3212,6 +3306,11 @@ "documentation":"

There was a problem with the service account.

", "exception":true }, + "SshPublicKey":{ + "type":"string", + "max":8192, + "min":0 + }, "StopRemoteAccessSessionRequest":{ "type":"structure", "required":["arn"], diff --git a/services/directconnect/src/main/resources/codegen-resources/service-2.json b/services/directconnect/src/main/resources/codegen-resources/service-2.json index 81bf2a111754..dd91dc19742a 100644 --- a/services/directconnect/src/main/resources/codegen-resources/service-2.json +++ b/services/directconnect/src/main/resources/codegen-resources/service-2.json @@ -108,7 +108,7 @@ {"shape":"DirectConnectServerException"}, {"shape":"DirectConnectClientException"} ], - "documentation":"

Associates a virtual interface with a specified link aggregation group (LAG) or connection. Connectivity to AWS is temporarily interrupted as the virtual interface is being migrated. If the target connection or LAG has an associated virtual interface with a conflicting VLAN number or a conflicting IP address, the operation fails.

Virtual interfaces associated with a hosted connection cannot be associated with a LAG; hosted connections must be migrated along with their virtual interfaces using AssociateHostedConnection.

Hosted virtual interfaces (an interface for which the owner of the connection is not the owner of physical connection) can only be reassociated by the owner of the physical connection.

" + "documentation":"

Associates a virtual interface with a specified link aggregation group (LAG) or connection. Connectivity to AWS is temporarily interrupted as the virtual interface is being migrated. If the target connection or LAG has an associated virtual interface with a conflicting VLAN number or a conflicting IP address, the operation fails.

Virtual interfaces associated with a hosted connection cannot be associated with a LAG; hosted connections must be migrated along with their virtual interfaces using AssociateHostedConnection.

In order to reassociate a virtual interface to a new connection or LAG, the requester must own either the virtual interface itself or the connection to which the virtual interface is currently associated. Additionally, the requester must own the connection or LAG to which the virtual interface will be newly associated.

" }, "ConfirmConnection":{ "name":"ConfirmConnection", @@ -136,7 +136,7 @@ {"shape":"DirectConnectServerException"}, {"shape":"DirectConnectClientException"} ], - "documentation":"

Accept ownership of a private virtual interface created by another customer.

After the virtual interface owner calls this function, the virtual interface will be created and attached to the given virtual private gateway, and will be available for handling traffic.

" + "documentation":"

Accepts ownership of a private virtual interface created by another customer.

After the virtual interface owner calls this function, the virtual interface will be created and attached to the given virtual private gateway or direct connect gateway, and will be available for handling traffic.

" }, "ConfirmPublicVirtualInterface":{ "name":"ConfirmPublicVirtualInterface", @@ -178,7 +178,35 @@ {"shape":"DirectConnectServerException"}, {"shape":"DirectConnectClientException"} ], - "documentation":"

Creates a new connection between the customer network and a specific AWS Direct Connect location.

A connection links your internal network to an AWS Direct Connect location over a standard 1 gigabit or 10 gigabit Ethernet fiber-optic cable. One end of the cable is connected to your router, the other to an AWS Direct Connect router. An AWS Direct Connect location provides access to Amazon Web Services in the region it is associated with. You can establish connections with AWS Direct Connect locations in multiple regions, but a connection in one region does not provide connectivity to other regions.

You can automatically add the new connection to a link aggregation group (LAG) by specifying a LAG ID in the request. This ensures that the new connection is allocated on the same AWS Direct Connect endpoint that hosts the specified LAG. If there are no available ports on the endpoint, the request fails and no connection will be created.

" + "documentation":"

Creates a new connection between the customer network and a specific AWS Direct Connect location.

A connection links your internal network to an AWS Direct Connect location over a standard 1 gigabit or 10 gigabit Ethernet fiber-optic cable. One end of the cable is connected to your router, the other to an AWS Direct Connect router. An AWS Direct Connect location provides access to Amazon Web Services in the region it is associated with. You can establish connections with AWS Direct Connect locations in multiple regions, but a connection in one region does not provide connectivity to other regions.

To find the locations for your region, use DescribeLocations.

You can automatically add the new connection to a link aggregation group (LAG) by specifying a LAG ID in the request. This ensures that the new connection is allocated on the same AWS Direct Connect endpoint that hosts the specified LAG. If there are no available ports on the endpoint, the request fails and no connection will be created.

" + }, + "CreateDirectConnectGateway":{ + "name":"CreateDirectConnectGateway", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"CreateDirectConnectGatewayRequest"}, + "output":{"shape":"CreateDirectConnectGatewayResult"}, + "errors":[ + {"shape":"DirectConnectServerException"}, + {"shape":"DirectConnectClientException"} + ], + "documentation":"

Creates a new direct connect gateway. A direct connect gateway is an intermediate object that enables you to connect a set of virtual interfaces and virtual private gateways. Direct connect gateways are global and visible in any AWS region after they are created. The virtual interfaces and virtual private gateways that are connected through a direct connect gateway can be in different regions. This enables you to connect to a VPC in any region, regardless of the region in which the virtual interfaces are located, and pass traffic between them.

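A minimal sketch of calling the new CreateDirectConnectGateway operation follows. It assumes the generated v2-style client and model names (DirectConnectClient, CreateDirectConnectGatewayRequest) follow the SDK's usual codegen conventions; the gateway name and ASN mirror the example values in the request shape documentation later in this diff.

```java
// Hedged sketch: class names are assumed from standard codegen conventions.
import software.amazon.awssdk.services.directconnect.DirectConnectClient;
import software.amazon.awssdk.services.directconnect.model.CreateDirectConnectGatewayRequest;
import software.amazon.awssdk.services.directconnect.model.CreateDirectConnectGatewayResponse;

public class CreateDxGatewayExample {
    public static void main(String[] args) {
        try (DirectConnectClient dx = DirectConnectClient.builder().build()) {
            // amazonSideAsn is optional; the model documents a default of 64512.
            CreateDirectConnectGatewayResponse response = dx.createDirectConnectGateway(
                    CreateDirectConnectGatewayRequest.builder()
                            .directConnectGatewayName("My direct connect gateway")
                            .amazonSideAsn(65200L)
                            .build());
            System.out.println(response.directConnectGateway().directConnectGatewayId());
        }
    }
}
```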
" + }, + "CreateDirectConnectGatewayAssociation":{ + "name":"CreateDirectConnectGatewayAssociation", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"CreateDirectConnectGatewayAssociationRequest"}, + "output":{"shape":"CreateDirectConnectGatewayAssociationResult"}, + "errors":[ + {"shape":"DirectConnectServerException"}, + {"shape":"DirectConnectClientException"} + ], + "documentation":"

Creates an association between a direct connect gateway and a virtual private gateway (VGW). The VGW must be attached to a VPC and must not be associated with another direct connect gateway.

" }, "CreateInterconnect":{ "name":"CreateInterconnect", @@ -264,6 +292,34 @@ ], "documentation":"

Deletes the connection.

Deleting a connection only stops the AWS Direct Connect port hour and data transfer charges. You need to cancel separately with the providers any services or charges for cross-connects or network circuits that connect you to the AWS Direct Connect location.

" }, + "DeleteDirectConnectGateway":{ + "name":"DeleteDirectConnectGateway", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"DeleteDirectConnectGatewayRequest"}, + "output":{"shape":"DeleteDirectConnectGatewayResult"}, + "errors":[ + {"shape":"DirectConnectServerException"}, + {"shape":"DirectConnectClientException"} + ], + "documentation":"

Deletes a direct connect gateway. You must first delete all virtual interfaces that are attached to the direct connect gateway and disassociate all virtual private gateways that are associated with the direct connect gateway.

" + }, + "DeleteDirectConnectGatewayAssociation":{ + "name":"DeleteDirectConnectGatewayAssociation", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"DeleteDirectConnectGatewayAssociationRequest"}, + "output":{"shape":"DeleteDirectConnectGatewayAssociationResult"}, + "errors":[ + {"shape":"DirectConnectServerException"}, + {"shape":"DirectConnectClientException"} + ], + "documentation":"

Deletes the association between a direct connect gateway and a virtual private gateway.

" + }, "DeleteInterconnect":{ "name":"DeleteInterconnect", "http":{ @@ -350,6 +406,48 @@ "documentation":"

Deprecated in favor of DescribeHostedConnections.

Returns a list of connections that have been provisioned on the given interconnect.

This is intended for use by AWS Direct Connect partners only.

", "deprecated":true }, + "DescribeDirectConnectGatewayAssociations":{ + "name":"DescribeDirectConnectGatewayAssociations", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"DescribeDirectConnectGatewayAssociationsRequest"}, + "output":{"shape":"DescribeDirectConnectGatewayAssociationsResult"}, + "errors":[ + {"shape":"DirectConnectServerException"}, + {"shape":"DirectConnectClientException"} + ], + "documentation":"

Returns a list of all direct connect gateway and virtual private gateway (VGW) associations. Either a direct connect gateway ID or a VGW ID must be provided in the request. If a direct connect gateway ID is provided, the response returns all VGWs associated with the direct connect gateway. If a VGW ID is provided, the response returns all direct connect gateways associated with the VGW. If both are provided, the response only returns the association that matches both the direct connect gateway and the VGW.

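A hedged sketch of paging through the associations with maxResults and nextToken, again assuming standard generated names; the gateway ID reuses the placeholder format from the shape documentation later in this diff.

```java
// Hedged sketch: pagination loop over DescribeDirectConnectGatewayAssociations.
import software.amazon.awssdk.services.directconnect.DirectConnectClient;
import software.amazon.awssdk.services.directconnect.model.DescribeDirectConnectGatewayAssociationsRequest;
import software.amazon.awssdk.services.directconnect.model.DescribeDirectConnectGatewayAssociationsResponse;

public class ListGatewayAssociations {
    public static void main(String[] args) {
        try (DirectConnectClient dx = DirectConnectClient.builder().build()) {
            String nextToken = null;
            do {
                DescribeDirectConnectGatewayAssociationsResponse page =
                        dx.describeDirectConnectGatewayAssociations(
                                DescribeDirectConnectGatewayAssociationsRequest.builder()
                                        .directConnectGatewayId("abcd1234-dcba-5678-be23-cdef9876ab45")
                                        .maxResults(15)
                                        .nextToken(nextToken)
                                        .build());
                page.directConnectGatewayAssociations().forEach(assoc ->
                        System.out.println(assoc.virtualGatewayId() + " -> " + assoc.associationState()));
                nextToken = page.nextToken();
            } while (nextToken != null);
        }
    }
}
```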
" + }, + "DescribeDirectConnectGatewayAttachments":{ + "name":"DescribeDirectConnectGatewayAttachments", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"DescribeDirectConnectGatewayAttachmentsRequest"}, + "output":{"shape":"DescribeDirectConnectGatewayAttachmentsResult"}, + "errors":[ + {"shape":"DirectConnectServerException"}, + {"shape":"DirectConnectClientException"} + ], + "documentation":"

Returns a list of all direct connect gateway and virtual interface (VIF) attachments. Either a direct connect gateway ID or a VIF ID must be provided in the request. If a direct connect gateway ID is provided, the response returns all VIFs attached to the direct connect gateway. If a VIF ID is provided, the response returns all direct connect gateways attached to the VIF. If both are provided, the response only returns the attachment that matches both the direct connect gateway and the VIF.

" + }, + "DescribeDirectConnectGateways":{ + "name":"DescribeDirectConnectGateways", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"DescribeDirectConnectGatewaysRequest"}, + "output":{"shape":"DescribeDirectConnectGatewaysResult"}, + "errors":[ + {"shape":"DirectConnectServerException"}, + {"shape":"DirectConnectClientException"} + ], + "documentation":"

Returns a list of direct connect gateways in your account. Deleted direct connect gateways are not returned. You can provide a direct connect gateway ID in the request to return information about the specific direct connect gateway only. Otherwise, if a direct connect gateway ID is not provided, information about all of your direct connect gateways is returned.

" + }, "DescribeHostedConnections":{ "name":"DescribeHostedConnections", "http":{ @@ -432,7 +530,7 @@ {"shape":"DirectConnectServerException"}, {"shape":"DirectConnectClientException"} ], - "documentation":"

Returns the list of AWS Direct Connect locations in the current AWS region. These are the locations that may be selected when calling CreateConnection or CreateInterconnect.

" + "documentation":"

Returns the list of AWS Direct Connect locations in the current AWS region. These are the locations that may be selected when calling CreateConnection or CreateInterconnect.

" }, "DescribeTags":{ "name":"DescribeTags", @@ -785,15 +883,16 @@ }, "ConfirmPrivateVirtualInterfaceRequest":{ "type":"structure", - "required":[ - "virtualInterfaceId", - "virtualGatewayId" - ], + "required":["virtualInterfaceId"], "members":{ "virtualInterfaceId":{"shape":"VirtualInterfaceId"}, "virtualGatewayId":{ "shape":"VirtualGatewayId", "documentation":"

ID of the virtual private gateway that will be attached to the virtual interface.

A virtual private gateway can be managed via the Amazon Virtual Private Cloud (VPC) console or the EC2 CreateVpnGateway action.

Default: None

" + }, + "directConnectGatewayId":{ + "shape":"DirectConnectGatewayId", + "documentation":"

ID of the direct connect gateway that will be attached to the virtual interface.

A direct connect gateway can be managed via the AWS Direct Connect console or the CreateDirectConnectGateway action.

Default: None

" } }, "documentation":"

Container for the parameters to the ConfirmPrivateVirtualInterface operation.

" @@ -927,6 +1026,59 @@ }, "documentation":"

Container for the parameters to the CreateConnection operation.

" }, + "CreateDirectConnectGatewayAssociationRequest":{ + "type":"structure", + "required":[ + "directConnectGatewayId", + "virtualGatewayId" + ], + "members":{ + "directConnectGatewayId":{ + "shape":"DirectConnectGatewayId", + "documentation":"

The ID of the direct connect gateway.

Example: \"abcd1234-dcba-5678-be23-cdef9876ab45\"

Default: None

" + }, + "virtualGatewayId":{ + "shape":"VirtualGatewayId", + "documentation":"

The ID of the virtual private gateway.

Example: \"vgw-abc123ef\"

Default: None

" + } + }, + "documentation":"

Container for the parameters to the CreateDirectConnectGatewayAssociation operation.

" + }, + "CreateDirectConnectGatewayAssociationResult":{ + "type":"structure", + "members":{ + "directConnectGatewayAssociation":{ + "shape":"DirectConnectGatewayAssociation", + "documentation":"

The direct connect gateway association to be created.

" + } + }, + "documentation":"

Container for the response from the CreateDirectConnectGatewayAssociation API call

" + }, + "CreateDirectConnectGatewayRequest":{ + "type":"structure", + "required":["directConnectGatewayName"], + "members":{ + "directConnectGatewayName":{ + "shape":"DirectConnectGatewayName", + "documentation":"

The name of the direct connect gateway.

Example: \"My direct connect gateway\"

Default: None

" + }, + "amazonSideAsn":{ + "shape":"LongAsn", + "documentation":"

The autonomous system number (ASN) for Border Gateway Protocol (BGP) to be configured on the Amazon side of the connection. The ASN must be in the private range of 64,512 to 65,534 or 4,200,000,000 to 4,294,967,294.

Example: 65200

Default: 64512

" + } + }, + "documentation":"

Container for the parameters to the CreateDirectConnectGateway operation.

" + }, + "CreateDirectConnectGatewayResult":{ + "type":"structure", + "members":{ + "directConnectGateway":{ + "shape":"DirectConnectGateway", + "documentation":"

The direct connect gateway to be created.

" + } + }, + "documentation":"

Container for the response from the CreateDirectConnectGateway API call

" + }, "CreateInterconnectRequest":{ "type":"structure", "required":[ @@ -1044,6 +1196,55 @@ }, "documentation":"

Container for the parameters to the DeleteConnection operation.

" }, + "DeleteDirectConnectGatewayAssociationRequest":{ + "type":"structure", + "required":[ + "directConnectGatewayId", + "virtualGatewayId" + ], + "members":{ + "directConnectGatewayId":{ + "shape":"DirectConnectGatewayId", + "documentation":"

The ID of the direct connect gateway.

Example: \"abcd1234-dcba-5678-be23-cdef9876ab45\"

Default: None

" + }, + "virtualGatewayId":{ + "shape":"VirtualGatewayId", + "documentation":"

The ID of the virtual private gateway.

Example: \"vgw-abc123ef\"

Default: None

" + } + }, + "documentation":"

Container for the parameters to the DeleteDirectConnectGatewayAssociation operation.

" + }, + "DeleteDirectConnectGatewayAssociationResult":{ + "type":"structure", + "members":{ + "directConnectGatewayAssociation":{ + "shape":"DirectConnectGatewayAssociation", + "documentation":"

The direct connect gateway association to be deleted.

" + } + }, + "documentation":"

Container for the response from the DeleteDirectConnectGatewayAssociation API call

" + }, + "DeleteDirectConnectGatewayRequest":{ + "type":"structure", + "required":["directConnectGatewayId"], + "members":{ + "directConnectGatewayId":{ + "shape":"DirectConnectGatewayId", + "documentation":"

The ID of the direct connect gateway.

Example: \"abcd1234-dcba-5678-be23-cdef9876ab45\"

Default: None

" + } + }, + "documentation":"

Container for the parameters to the DeleteDirectConnectGateway operation.

" + }, + "DeleteDirectConnectGatewayResult":{ + "type":"structure", + "members":{ + "directConnectGateway":{ + "shape":"DirectConnectGateway", + "documentation":"

The direct connect gateway to be deleted.

" + } + }, + "documentation":"

Container for the response from the DeleteDirectConnectGateway API call

" + }, "DeleteInterconnectRequest":{ "type":"structure", "required":["interconnectId"], @@ -1123,6 +1324,101 @@ }, "documentation":"

Container for the parameters to the DescribeConnections operation.

" }, + "DescribeDirectConnectGatewayAssociationsRequest":{ + "type":"structure", + "members":{ + "directConnectGatewayId":{ + "shape":"DirectConnectGatewayId", + "documentation":"

The ID of the direct connect gateway.

Example: \"abcd1234-dcba-5678-be23-cdef9876ab45\"

Default: None

" + }, + "virtualGatewayId":{ + "shape":"VirtualGatewayId", + "documentation":"

The ID of the virtual private gateway.

Example: \"vgw-abc123ef\"

Default: None

" + }, + "maxResults":{ + "shape":"MaxResultSetSize", + "documentation":"

The maximum number of direct connect gateway associations to return per page.

Example: 15

Default: None

" + }, + "nextToken":{ + "shape":"PaginationToken", + "documentation":"

The token provided in the previous describe result to retrieve the next page of the result.

Default: None

" + } + }, + "documentation":"

Container for the parameters to the DescribeDirectConnectGatewayAssociations operation.

" + }, + "DescribeDirectConnectGatewayAssociationsResult":{ + "type":"structure", + "members":{ + "directConnectGatewayAssociations":{ + "shape":"DirectConnectGatewayAssociationList", + "documentation":"

Information about the direct connect gateway associations.

" + }, + "nextToken":{"shape":"PaginationToken"} + }, + "documentation":"

Container for the response from the DescribeDirectConnectGatewayAssociations API call

" + }, + "DescribeDirectConnectGatewayAttachmentsRequest":{ + "type":"structure", + "members":{ + "directConnectGatewayId":{ + "shape":"DirectConnectGatewayId", + "documentation":"

The ID of the direct connect gateway.

Example: \"abcd1234-dcba-5678-be23-cdef9876ab45\"

Default: None

" + }, + "virtualInterfaceId":{ + "shape":"VirtualInterfaceId", + "documentation":"

The ID of the virtual interface.

Example: \"dxvif-abc123ef\"

Default: None

" + }, + "maxResults":{ + "shape":"MaxResultSetSize", + "documentation":"

The maximum number of direct connect gateway attachments to return per page.

Example: 15

Default: None

" + }, + "nextToken":{ + "shape":"PaginationToken", + "documentation":"

The token provided in the previous describe result to retrieve the next page of the result.

Default: None

" + } + }, + "documentation":"

Container for the parameters to the DescribeDirectConnectGatewayAttachments operation.

" + }, + "DescribeDirectConnectGatewayAttachmentsResult":{ + "type":"structure", + "members":{ + "directConnectGatewayAttachments":{ + "shape":"DirectConnectGatewayAttachmentList", + "documentation":"

Information about the direct connect gateway attachments.

" + }, + "nextToken":{"shape":"PaginationToken"} + }, + "documentation":"

Container for the response from the DescribeDirectConnectGatewayAttachments API call

" + }, + "DescribeDirectConnectGatewaysRequest":{ + "type":"structure", + "members":{ + "directConnectGatewayId":{ + "shape":"DirectConnectGatewayId", + "documentation":"

The ID of the direct connect gateway.

Example: \"abcd1234-dcba-5678-be23-cdef9876ab45\"

Default: None

" + }, + "maxResults":{ + "shape":"MaxResultSetSize", + "documentation":"

The maximum number of direct connect gateways to return per page.

Example: 15

Default: None

" + }, + "nextToken":{ + "shape":"PaginationToken", + "documentation":"

The token provided in the previous describe result to retrieve the next page of the result.

Default: None

" + } + }, + "documentation":"

Container for the parameters to the DescribeDirectConnectGateways operation.

" + }, + "DescribeDirectConnectGatewaysResult":{ + "type":"structure", + "members":{ + "directConnectGateways":{ + "shape":"DirectConnectGatewayList", + "documentation":"

Information about the direct connect gateways.

" + }, + "nextToken":{"shape":"PaginationToken"} + }, + "documentation":"

Container for the response from the DescribeDirectConnectGateways API call

" + }, "DescribeHostedConnectionsRequest":{ "type":"structure", "required":["connectionId"], @@ -1230,6 +1526,107 @@ "documentation":"

The API was called with invalid parameters. The error message will contain additional details about the cause.

", "exception":true }, + "DirectConnectGateway":{ + "type":"structure", + "members":{ + "directConnectGatewayId":{"shape":"DirectConnectGatewayId"}, + "directConnectGatewayName":{"shape":"DirectConnectGatewayName"}, + "amazonSideAsn":{ + "shape":"LongAsn", + "documentation":"

The autonomous system number (ASN) for the Amazon side of the connection.

" + }, + "ownerAccount":{ + "shape":"OwnerAccount", + "documentation":"

The AWS account ID of the owner of the direct connect gateway.

" + }, + "directConnectGatewayState":{"shape":"DirectConnectGatewayState"}, + "stateChangeError":{"shape":"StateChangeError"} + }, + "documentation":"

A direct connect gateway is an intermediate object that enables you to connect virtual interfaces and virtual private gateways.

" + }, + "DirectConnectGatewayAssociation":{ + "type":"structure", + "members":{ + "directConnectGatewayId":{"shape":"DirectConnectGatewayId"}, + "virtualGatewayId":{"shape":"VirtualGatewayId"}, + "virtualGatewayRegion":{"shape":"VirtualGatewayRegion"}, + "virtualGatewayOwnerAccount":{ + "shape":"OwnerAccount", + "documentation":"

The AWS account ID of the owner of the virtual private gateway.

" + }, + "associationState":{"shape":"DirectConnectGatewayAssociationState"}, + "stateChangeError":{"shape":"StateChangeError"} + }, + "documentation":"

The association between a direct connect gateway and a virtual private gateway.

" + }, + "DirectConnectGatewayAssociationList":{ + "type":"list", + "member":{"shape":"DirectConnectGatewayAssociation"}, + "documentation":"

A list of direct connect gateway associations.

" + }, + "DirectConnectGatewayAssociationState":{ + "type":"string", + "documentation":"

State of the direct connect gateway association.

", + "enum":[ + "associating", + "associated", + "disassociating", + "disassociated" + ] + }, + "DirectConnectGatewayAttachment":{ + "type":"structure", + "members":{ + "directConnectGatewayId":{"shape":"DirectConnectGatewayId"}, + "virtualInterfaceId":{"shape":"VirtualInterfaceId"}, + "virtualInterfaceRegion":{"shape":"VirtualInterfaceRegion"}, + "virtualInterfaceOwnerAccount":{ + "shape":"OwnerAccount", + "documentation":"

The AWS account ID of the owner of the virtual interface.

" + }, + "attachmentState":{"shape":"DirectConnectGatewayAttachmentState"}, + "stateChangeError":{"shape":"StateChangeError"} + }, + "documentation":"

The association between a direct connect gateway and a virtual interface.

" + }, + "DirectConnectGatewayAttachmentList":{ + "type":"list", + "member":{"shape":"DirectConnectGatewayAttachment"}, + "documentation":"

A list of direct connect gateway attachments.

" + }, + "DirectConnectGatewayAttachmentState":{ + "type":"string", + "documentation":"

State of the direct connect gateway attachment.

", + "enum":[ + "attaching", + "attached", + "detaching", + "detached" + ] + }, + "DirectConnectGatewayId":{ + "type":"string", + "documentation":"

The ID of the direct connect gateway.

Example: \"abcd1234-dcba-5678-be23-cdef9876ab45\"

" + }, + "DirectConnectGatewayList":{ + "type":"list", + "member":{"shape":"DirectConnectGateway"}, + "documentation":"

A list of direct connect gateways.

" + }, + "DirectConnectGatewayName":{ + "type":"string", + "documentation":"

The name of the direct connect gateway.

Example: \"My direct connect gateway\"

Default: None

" + }, + "DirectConnectGatewayState":{ + "type":"string", + "documentation":"

State of the direct connect gateway.

", + "enum":[ + "pending", + "available", + "deleting", + "deleted" + ] + }, "DirectConnectServerException":{ "type":"structure", "members":{ @@ -1448,6 +1845,12 @@ }, "documentation":"

A location is a network facility where AWS Direct Connect routers are available to be connected. Generally, these are colocation hubs where many network providers have equipment, and where cross connects can be delivered. Locations include a name and facility code, and must be provided when creating a connection.

" }, + "LongAsn":{"type":"long"}, + "MaxResultSetSize":{ + "type":"integer", + "documentation":"

Maximum number of objects to return per page.

", + "box":true + }, "NewBGPPeer":{ "type":"structure", "members":{ @@ -1464,8 +1867,7 @@ "required":[ "virtualInterfaceName", "vlan", - "asn", - "virtualGatewayId" + "asn" ], "members":{ "virtualInterfaceName":{"shape":"VirtualInterfaceName"}, @@ -1475,7 +1877,8 @@ "amazonAddress":{"shape":"AmazonAddress"}, "customerAddress":{"shape":"CustomerAddress"}, "addressFamily":{"shape":"AddressFamily"}, - "virtualGatewayId":{"shape":"VirtualGatewayId"} + "virtualGatewayId":{"shape":"VirtualGatewayId"}, + "directConnectGatewayId":{"shape":"DirectConnectGatewayId"} }, "documentation":"

A structure containing information about a new private virtual interface.

" }, @@ -1536,6 +1939,10 @@ "documentation":"

A structure containing information about a public virtual interface that will be provisioned on a connection.

" }, "OwnerAccount":{"type":"string"}, + "PaginationToken":{ + "type":"string", + "documentation":"

Token to retrieve the next page of the result.

" + }, "PartnerName":{"type":"string"}, "ProviderName":{"type":"string"}, "Region":{ @@ -1581,6 +1988,10 @@ "documentation":"

A list of routes to be advertised to the AWS network in this region (public virtual interface).

" }, "RouterConfig":{"type":"string"}, + "StateChangeError":{ + "type":"string", + "documentation":"

Error message when the state of an object fails to advance.

" + }, "Tag":{ "type":"structure", "required":["key"], @@ -1712,6 +2123,10 @@ "member":{"shape":"VirtualGateway"}, "documentation":"

A list of virtual private gateways.

" }, + "VirtualGatewayRegion":{ + "type":"string", + "documentation":"

The region in which the virtual private gateway is located.

Example: us-east-1

" + }, "VirtualGatewayState":{ "type":"string", "documentation":"

State of the virtual private gateway.

" @@ -1740,6 +2155,10 @@ "virtualInterfaceName":{"shape":"VirtualInterfaceName"}, "vlan":{"shape":"VLAN"}, "asn":{"shape":"ASN"}, + "amazonSideAsn":{ + "shape":"LongAsn", + "documentation":"

The autonomous system number (ASN) for the Amazon side of the connection.

" + }, "authKey":{"shape":"BGPAuthKey"}, "amazonAddress":{"shape":"AmazonAddress"}, "customerAddress":{"shape":"CustomerAddress"}, @@ -1750,6 +2169,7 @@ "documentation":"

Information for generating the customer router configuration.

" }, "virtualGatewayId":{"shape":"VirtualGatewayId"}, + "directConnectGatewayId":{"shape":"DirectConnectGatewayId"}, "routeFilterPrefixes":{"shape":"RouteFilterPrefixList"}, "bgpPeers":{"shape":"BGPPeerList"} }, @@ -1768,6 +2188,10 @@ "type":"string", "documentation":"

The name of the virtual interface assigned by the customer.

Example: \"My VPC\"

" }, + "VirtualInterfaceRegion":{ + "type":"string", + "documentation":"

The region in which the virtual interface is located.

Example: us-east-1

" + }, "VirtualInterfaceState":{ "type":"string", "documentation":"

State of the virtual interface.

", diff --git a/services/directory/src/main/resources/codegen-resources/service-2.json b/services/directory/src/main/resources/codegen-resources/service-2.json index 0a364af5afd0..6446bcdebb50 100644 --- a/services/directory/src/main/resources/codegen-resources/service-2.json +++ b/services/directory/src/main/resources/codegen-resources/service-2.json @@ -320,6 +320,24 @@ ], "documentation":"

Obtains information about the directories that belong to this account.

You can retrieve information about specific directories by passing the directory identifiers in the DirectoryIds parameter. Otherwise, all directories that belong to the current account are returned.

This operation supports pagination with the use of the NextToken request and response parameters. If more results are available, the DescribeDirectoriesResult.NextToken member contains a token that you pass in the next call to DescribeDirectories to retrieve the next set of items.

You can also specify a maximum number of return results with the Limit parameter.

" }, + "DescribeDomainControllers":{ + "name":"DescribeDomainControllers", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"DescribeDomainControllersRequest"}, + "output":{"shape":"DescribeDomainControllersResult"}, + "errors":[ + {"shape":"EntityDoesNotExistException"}, + {"shape":"InvalidNextTokenException"}, + {"shape":"InvalidParameterException"}, + {"shape":"ClientException"}, + {"shape":"ServiceException"}, + {"shape":"UnsupportedOperationException"} + ], + "documentation":"

Provides information about any domain controllers in your directory.

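A sketch of listing a directory's domain controllers with NextToken pagination. It assumes the Directory Service client generated for this model is named DirectoryClient and follows the usual v2 builder conventions; the directory ID is a placeholder.

```java
// Hedged sketch: client and model names assumed from codegen conventions.
import software.amazon.awssdk.services.directory.DirectoryClient;
import software.amazon.awssdk.services.directory.model.DescribeDomainControllersRequest;
import software.amazon.awssdk.services.directory.model.DescribeDomainControllersResponse;

public class ListDomainControllers {
    public static void main(String[] args) {
        try (DirectoryClient ds = DirectoryClient.builder().build()) {
            String nextToken = null;
            do {
                DescribeDomainControllersResponse page = ds.describeDomainControllers(
                        DescribeDomainControllersRequest.builder()
                                .directoryId("d-1234567890")   // placeholder directory ID
                                .limit(10)
                                .nextToken(nextToken)
                                .build());
                page.domainControllers().forEach(dc ->
                        System.out.println(dc.domainControllerId() + " " + dc.dnsIpAddr() + " " + dc.status()));
                nextToken = page.nextToken();
            } while (nextToken != null);
        }
    }
}
```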
" + }, "DescribeEventTopics":{ "name":"DescribeEventTopics", "http":{ @@ -618,6 +636,25 @@ ], "documentation":"

Updates a conditional forwarder that has been set up for your AWS directory.

" }, + "UpdateNumberOfDomainControllers":{ + "name":"UpdateNumberOfDomainControllers", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"UpdateNumberOfDomainControllersRequest"}, + "output":{"shape":"UpdateNumberOfDomainControllersResult"}, + "errors":[ + {"shape":"EntityDoesNotExistException"}, + {"shape":"DirectoryUnavailableException"}, + {"shape":"DomainControllerLimitExceededException"}, + {"shape":"InvalidParameterException"}, + {"shape":"UnsupportedOperationException"}, + {"shape":"ClientException"}, + {"shape":"ServiceException"} + ], + "documentation":"

Adds or removes domain controllers to or from the directory. Based on the difference between the current value and the new value (provided through this API call), domain controllers will be added or removed. It may take up to 45 minutes for any new domain controllers to become fully active once the requested number of domain controllers is updated. During this time, you cannot make another update request.

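A minimal sketch of scaling the domain controller count, under the same naming assumptions as the previous example; the directory ID is a placeholder, and the DesiredNumberOfDomainControllers shape later in this diff enforces a minimum of 2.

```java
// Hedged sketch: scale a Microsoft AD directory to four domain controllers.
import software.amazon.awssdk.services.directory.DirectoryClient;
import software.amazon.awssdk.services.directory.model.UpdateNumberOfDomainControllersRequest;

public class ScaleDomainControllers {
    public static void main(String[] args) {
        try (DirectoryClient ds = DirectoryClient.builder().build()) {
            // DesiredNumber must be at least 2 per the model's minimum.
            ds.updateNumberOfDomainControllers(UpdateNumberOfDomainControllersRequest.builder()
                    .directoryId("d-1234567890")   // placeholder directory ID
                    .desiredNumber(4)
                    .build());
        }
    }
}
```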
" + }, "UpdateRadius":{ "name":"UpdateRadius", "http":{ @@ -1072,7 +1109,10 @@ "shape":"Description", "documentation":"

A textual description for the directory. This label will appear on the AWS console Directory Details page after the directory is created.

" }, - "VpcSettings":{"shape":"DirectoryVpcSettings"} + "VpcSettings":{ + "shape":"DirectoryVpcSettings", + "documentation":"

Contains VPC information for the CreateDirectory or CreateMicrosoftAD operation.

" + } }, "documentation":"

Creates a Microsoft AD in the AWS cloud.

" }, @@ -1332,6 +1372,41 @@ }, "documentation":"

Contains the results of the DescribeDirectories operation.

" }, + "DescribeDomainControllersRequest":{ + "type":"structure", + "required":["DirectoryId"], + "members":{ + "DirectoryId":{ + "shape":"DirectoryId", + "documentation":"

Identifier of the directory for which to retrieve the domain controller information.

" + }, + "DomainControllerIds":{ + "shape":"DomainControllerIds", + "documentation":"

A list of identifiers for the domain controllers whose information will be provided.

" + }, + "NextToken":{ + "shape":"NextToken", + "documentation":"

The DescribeDomainControllers.NextToken value from a previous call to DescribeDomainControllers. Pass null if this is the first call.

" + }, + "Limit":{ + "shape":"Limit", + "documentation":"

The maximum number of items to return.

" + } + } + }, + "DescribeDomainControllersResult":{ + "type":"structure", + "members":{ + "DomainControllers":{ + "shape":"DomainControllers", + "documentation":"

List of the DomainController objects that were retrieved.

" + }, + "NextToken":{ + "shape":"NextToken", + "documentation":"

If not null, more results are available. Pass this value for the NextToken parameter in a subsequent call to DescribeDomainControllers to retrieve the next set of items.

" + } + } + }, "DescribeEventTopicsRequest":{ "type":"structure", "members":{ @@ -1434,6 +1509,10 @@ "min":0, "pattern":"^([a-zA-Z0-9_])[\\\\a-zA-Z0-9_@#%*+=:?./!\\s-]*$" }, + "DesiredNumberOfDomainControllers":{ + "type":"integer", + "min":2 + }, "DirectoryConnectSettings":{ "type":"structure", "required":[ @@ -1566,6 +1645,10 @@ "SsoEnabled":{ "shape":"SsoEnabled", "documentation":"

Indicates if single-sign on is enabled for the directory. For more information, see EnableSso and DisableSso.

" + }, + "DesiredNumberOfDomainControllers":{ + "shape":"DesiredNumberOfDomainControllers", + "documentation":"

The desired number of domain controllers in the directory if the directory is Microsoft AD.

" } }, "documentation":"

Contains information about an AWS Directory Service directory.

" @@ -1769,6 +1852,86 @@ "type":"list", "member":{"shape":"IpAddr"} }, + "DomainController":{ + "type":"structure", + "members":{ + "DirectoryId":{ + "shape":"DirectoryId", + "documentation":"

Identifier of the directory where the domain controller resides.

" + }, + "DomainControllerId":{ + "shape":"DomainControllerId", + "documentation":"

Identifies a specific domain controller in the directory.

" + }, + "DnsIpAddr":{ + "shape":"IpAddr", + "documentation":"

The IP address of the domain controller.

" + }, + "VpcId":{ + "shape":"VpcId", + "documentation":"

The identifier of the VPC that contains the domain controller.

" + }, + "SubnetId":{ + "shape":"SubnetId", + "documentation":"

Identifier of the subnet in the VPC that contains the domain controller.

" + }, + "AvailabilityZone":{ + "shape":"AvailabilityZone", + "documentation":"

The Availability Zone where the domain controller is located.

" + }, + "Status":{ + "shape":"DomainControllerStatus", + "documentation":"

The status of the domain controller.

" + }, + "StatusReason":{ + "shape":"DomainControllerStatusReason", + "documentation":"

A description of the domain controller state.

" + }, + "LaunchTime":{ + "shape":"LaunchTime", + "documentation":"

Specifies when the domain controller was created.

" + }, + "StatusLastUpdatedDateTime":{ + "shape":"LastUpdatedDateTime", + "documentation":"

The date and time that the status was last updated.

" + } + }, + "documentation":"

Contains information about the domain controllers for a specified directory.

" + }, + "DomainControllerId":{ + "type":"string", + "pattern":"^dc-[0-9a-f]{10}$" + }, + "DomainControllerIds":{ + "type":"list", + "member":{"shape":"DomainControllerId"} + }, + "DomainControllerLimitExceededException":{ + "type":"structure", + "members":{ + "Message":{"shape":"ExceptionMessage"}, + "RequestId":{"shape":"RequestId"} + }, + "documentation":"

The maximum allowed number of domain controllers per directory was exceeded. The default limit per directory is 20 domain controllers.

", + "exception":true + }, + "DomainControllerStatus":{ + "type":"string", + "enum":[ + "Creating", + "Active", + "Impaired", + "Restoring", + "Deleting", + "Deleted", + "Failed" + ] + }, + "DomainControllerStatusReason":{"type":"string"}, + "DomainControllers":{ + "type":"list", + "member":{"shape":"DomainController"} + }, "EnableRadiusRequest":{ "type":"structure", "required":[ @@ -2741,6 +2904,28 @@ }, "documentation":"

The result of an UpdateConditionalForwarder request.

" }, + "UpdateNumberOfDomainControllersRequest":{ + "type":"structure", + "required":[ + "DirectoryId", + "DesiredNumber" + ], + "members":{ + "DirectoryId":{ + "shape":"DirectoryId", + "documentation":"

Identifier of the directory to which the domain controllers will be added or removed.

" + }, + "DesiredNumber":{ + "shape":"DesiredNumberOfDomainControllers", + "documentation":"

The number of domain controllers desired in the directory.

" + } + } + }, + "UpdateNumberOfDomainControllersResult":{ + "type":"structure", + "members":{ + } + }, "UpdateRadiusRequest":{ "type":"structure", "required":[ diff --git a/services/discovery/src/main/resources/codegen-resources/service-2.json b/services/discovery/src/main/resources/codegen-resources/service-2.json index 9c721e0650db..7723b9c831a3 100644 --- a/services/discovery/src/main/resources/codegen-resources/service-2.json +++ b/services/discovery/src/main/resources/codegen-resources/service-2.json @@ -289,7 +289,7 @@ {"shape":"ServerInternalErrorException"}, {"shape":"OperationNotPermittedException"} ], - "documentation":"

Export the configuration data about discovered configuration items and relationships to an S3 bucket in a specified format.

" + "documentation":"

Begins the export of discovered data to an S3 bucket.

If you specify agentId in a filter, the task exports up to 72 hours of detailed data collected by the identified Application Discovery Agent, including network, process, and performance details. A time range for exported agent data may be set by using startTime and endTime. Export of detailed agent data is limited to five concurrently running exports.

If you do not include an agentId filter, summary data is exported that includes both AWS Agentless Discovery Connector data and summary data from AWS Discovery Agents. Export of summary data is limited to two exports per day.

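A non-authoritative sketch of the filtered agent export described above, assuming the generated client is named ApplicationDiscoveryClient and the ExportFilter/StartExportTask members map to the usual builder methods; the agent ID reuses the placeholder format from the ExportFilter documentation later in this diff.

```java
// Hedged sketch: export up to the last six hours of data for one agent.
import java.time.Instant;
import software.amazon.awssdk.services.applicationdiscovery.ApplicationDiscoveryClient;
import software.amazon.awssdk.services.applicationdiscovery.model.ExportFilter;
import software.amazon.awssdk.services.applicationdiscovery.model.StartExportTaskRequest;

public class ExportAgentData {
    public static void main(String[] args) {
        try (ApplicationDiscoveryClient ads = ApplicationDiscoveryClient.builder().build()) {
            String exportId = ads.startExportTask(StartExportTaskRequest.builder()
                    // A single agentId filter limits the export to detailed data from one agent.
                    .filters(ExportFilter.builder()
                            .name("agentId")
                            .values("o-0123456789abcdef0")   // placeholder agent ID
                            .condition("EQUALS")
                            .build())
                    // Optional time range; detailed agent exports cover up to 72 hours.
                    .startTime(Instant.now().minusSeconds(6 * 60 * 60))
                    .endTime(Instant.now())
                    .build())
                    .exportId();
            System.out.println("Export started: " + exportId);
        }
    }
}
```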
" }, "StopDataCollectionByAgentIds":{ "name":"StopDataCollectionByAgentIds", @@ -794,6 +794,10 @@ "shape":"ExportIds", "documentation":"

One or more unique identifiers used to query the status of an export request.

" }, + "filters":{ + "shape":"ExportFilters", + "documentation":"

One or more filters.

" + }, "maxResults":{ "shape":"Integer", "documentation":"

The maximum number of results returned by DescribeExportTasks in paginated output. When this parameter is used, DescribeExportTasks only returns maxResults results in a single page along with a nextToken response element.

" @@ -889,6 +893,33 @@ "type":"list", "member":{"shape":"ExportDataFormat"} }, + "ExportFilter":{ + "type":"structure", + "required":[ + "name", + "values", + "condition" + ], + "members":{ + "name":{ + "shape":"FilterName", + "documentation":"

A single ExportFilter name. Supported filters: agentId.

" + }, + "values":{ + "shape":"FilterValues", + "documentation":"

A single agentId for a Discovery Agent. An agentId can be found using the DescribeAgents action. Typically an ADS agentId is in the form o-0123456789abcdef0.

" + }, + "condition":{ + "shape":"Condition", + "documentation":"

Supported condition: EQUALS

" + } + }, + "documentation":"

Used to select which agent's data is to be exported. A single agent ID may be selected for export using the StartExportTask action.

" + }, + "ExportFilters":{ + "type":"list", + "member":{"shape":"ExportFilter"} + }, "ExportIds":{ "type":"list", "member":{"shape":"ConfigurationsExportId"} @@ -904,26 +935,38 @@ "members":{ "exportId":{ "shape":"ConfigurationsExportId", - "documentation":"

A unique identifier that you can use to query the export.

" + "documentation":"

A unique identifier used to query an export.

" }, "exportStatus":{ "shape":"ExportStatus", - "documentation":"

The status of the configuration data export. The status can succeed, fail, or be in-progress.

" + "documentation":"

The status of the data export job.

" }, "statusMessage":{ "shape":"ExportStatusMessage", - "documentation":"

Helpful status messages for API callers. For example: Too many exports in the last 6 hours. Export in progress. Export was successful.

" + "documentation":"

A status message provided for API callers.

" }, "configurationsDownloadUrl":{ "shape":"ConfigurationsDownloadUrl", - "documentation":"

A URL for an Amazon S3 bucket where you can review the configuration data. The URL is displayed only if the export succeeded.

" + "documentation":"

A URL for an Amazon S3 bucket where you can review the exported data. The URL is displayed only if the export succeeded.

" }, "exportRequestTime":{ "shape":"ExportRequestTime", - "documentation":"

The time that the configuration data export was initiated.

" + "documentation":"

The time that the data export was initiated.

" + }, + "isTruncated":{ + "shape":"Boolean", + "documentation":"

If true, the export of agent information exceeded the size limit for a single export and the exported data is incomplete for the requested time range. To address this, select a smaller time range for the export by using startTime and endTime.

" + }, + "requestedStartTime":{ + "shape":"TimeStamp", + "documentation":"

The value of startTime parameter in the StartExportTask request. If no startTime was requested, this result does not appear in ExportInfo.

" + }, + "requestedEndTime":{ + "shape":"TimeStamp", + "documentation":"

The endTime used in the StartExportTask request. If no endTime was requested, this result does not appear in ExportInfo.

" } }, - "documentation":"

Information regarding the export status of the discovered data. The value is an array of objects.

" + "documentation":"

Information regarding the export status of discovered data. The value is an array of objects.

" }, "ExportRequestTime":{"type":"timestamp"}, "ExportStatus":{ @@ -1216,6 +1259,18 @@ "exportDataFormat":{ "shape":"ExportDataFormats", "documentation":"

The file format for the returned export data. Default value is CSV.

" + }, + "filters":{ + "shape":"ExportFilters", + "documentation":"

If a filter is present, it selects the single agentId of the Application Discovery Agent for which data is exported. The agentId can be found in the results of the DescribeAgents API or CLI. If no filter is present, startTime and endTime are ignored and exported data includes both Agentless Discovery Connector data and summary data from Application Discovery agents.

" + }, + "startTime":{ + "shape":"TimeStamp", + "documentation":"

The start timestamp for exported data from the single Application Discovery Agent selected in the filters. If no value is specified, data is exported starting from the first data collected by the agent.

" + }, + "endTime":{ + "shape":"TimeStamp", + "documentation":"

The end timestamp for exported data from the single Application Discovery Agent selected in the filters. If no value is specified, exported data includes the most recent data collected by the agent.

" } } }, @@ -1224,7 +1279,7 @@ "members":{ "exportId":{ "shape":"ConfigurationsExportId", - "documentation":"

A unique identifier used to query the status of an export request.

" + "documentation":"

A unique identifier used to query the status of an export request.

" } } }, diff --git a/services/dms/src/main/resources/codegen-resources/service-2.json b/services/dms/src/main/resources/codegen-resources/service-2.json index 537f52dc494f..44468aecb354 100644 --- a/services/dms/src/main/resources/codegen-resources/service-2.json +++ b/services/dms/src/main/resources/codegen-resources/service-2.json @@ -6,6 +6,7 @@ "jsonVersion":"1.1", "protocol":"json", "serviceFullName":"AWS Database Migration Service", + "serviceId":"Database Migration Service", "signatureVersion":"v4", "targetPrefix":"AmazonDMSv20160101", "uid":"dms-2016-01-01" @@ -343,6 +344,19 @@ ], "documentation":"

Returns information about the replication subnet groups.

" }, + "DescribeReplicationTaskAssessmentResults":{ + "name":"DescribeReplicationTaskAssessmentResults", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"DescribeReplicationTaskAssessmentResultsMessage"}, + "output":{"shape":"DescribeReplicationTaskAssessmentResultsResponse"}, + "errors":[ + {"shape":"ResourceNotFoundFault"} + ], + "documentation":"

Returns the task assessment results from Amazon S3. This action always returns the latest results.

" + }, "DescribeReplicationTasks":{ "name":"DescribeReplicationTasks", "http":{ @@ -382,7 +396,7 @@ {"shape":"ResourceNotFoundFault"}, {"shape":"InvalidResourceStateFault"} ], - "documentation":"

Returns table statistics on the database migration task, including table name, rows inserted, rows updated, and rows deleted.

" + "documentation":"

Returns table statistics on the database migration task, including table name, rows inserted, rows updated, and rows deleted.

Note that the \"last updated\" column the DMS console only indicates the time that AWS DMS last updated the table statistics record for a table. It does not indicate the time of the last update to the table.

" }, "ImportCertificate":{ "name":"ImportCertificate", @@ -553,6 +567,20 @@ ], "documentation":"

Starts the replication task.

For more information about AWS DMS tasks, see the AWS DMS user guide at Working with Migration Tasks

" }, + "StartReplicationTaskAssessment":{ + "name":"StartReplicationTaskAssessment", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"StartReplicationTaskAssessmentMessage"}, + "output":{"shape":"StartReplicationTaskAssessmentResponse"}, + "errors":[ + {"shape":"InvalidResourceStateFault"}, + {"shape":"ResourceNotFoundFault"} + ], + "documentation":"

Starts the replication task assessment for unsupported data types in the source database.

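A hedged sketch that starts a task assessment and then reads the latest results via DescribeReplicationTaskAssessmentResults, assuming the generated DMS client is named DatabaseMigrationClient; the task ARN is a placeholder.

```java
// Hedged sketch: run an assessment, then fetch the latest results report.
import software.amazon.awssdk.services.databasemigration.DatabaseMigrationClient;
import software.amazon.awssdk.services.databasemigration.model.DescribeReplicationTaskAssessmentResultsRequest;
import software.amazon.awssdk.services.databasemigration.model.StartReplicationTaskAssessmentRequest;

public class TaskAssessmentExample {
    public static void main(String[] args) {
        String taskArn = "arn:aws:dms:us-east-1:123456789012:task:EXAMPLE";   // placeholder
        try (DatabaseMigrationClient dms = DatabaseMigrationClient.builder().build()) {
            dms.startReplicationTaskAssessment(StartReplicationTaskAssessmentRequest.builder()
                    .replicationTaskArn(taskArn)
                    .build());

            // Call later; this action always returns the most recent assessment results.
            dms.describeReplicationTaskAssessmentResults(
                    DescribeReplicationTaskAssessmentResultsRequest.builder()
                            .replicationTaskArn(taskArn)
                            .build())
                    .replicationTaskAssessmentResults()
                    .forEach(r -> System.out.println(r.assessmentStatus() + " -> " + r.s3ObjectUrl()));
        }
    }
}
```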
" + }, "StopReplicationTask":{ "name":"StopReplicationTask", "http":{ @@ -616,10 +644,7 @@ }, "AccountQuotaList":{ "type":"list", - "member":{ - "shape":"AccountQuota", - "locationName":"AccountQuota" - } + "member":{"shape":"AccountQuota"} }, "AddTagsToResourceMessage":{ "type":"structure", @@ -720,10 +745,7 @@ }, "CertificateList":{ "type":"list", - "member":{ - "shape":"Certificate", - "locationName":"Certificate" - } + "member":{"shape":"Certificate"} }, "CertificateWallet":{"type":"blob"}, "CompressionTypeValue":{ @@ -765,10 +787,7 @@ }, "ConnectionList":{ "type":"list", - "member":{ - "shape":"Connection", - "locationName":"Connection" - } + "member":{"shape":"Connection"} }, "CreateEndpointMessage":{ "type":"structure", @@ -824,7 +843,7 @@ }, "CertificateArn":{ "shape":"String", - "documentation":"

The Amazon Resource Number (ARN) for the certificate.

" + "documentation":"

The Amazon Resource Name (ARN) for the certificate.

" }, "SslMode":{ "shape":"DmsSslModeValue", @@ -1563,6 +1582,42 @@ }, "documentation":"

" }, + "DescribeReplicationTaskAssessmentResultsMessage":{ + "type":"structure", + "members":{ + "ReplicationTaskArn":{ + "shape":"String", + "documentation":"

The Amazon Resource Name (ARN) string that uniquely identifies the task. When this input parameter is specified, the API returns only one result and ignores the values of the MaxRecords and Marker parameters.

" + }, + "MaxRecords":{ + "shape":"IntegerOptional", + "documentation":"

The maximum number of records to include in the response. If more records exist than the specified MaxRecords value, a pagination token called a marker is included in the response so that the remaining results can be retrieved.

Default: 100

Constraints: Minimum 20, maximum 100.

" + }, + "Marker":{ + "shape":"String", + "documentation":"

An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by MaxRecords.

" + } + }, + "documentation":"

" + }, + "DescribeReplicationTaskAssessmentResultsResponse":{ + "type":"structure", + "members":{ + "Marker":{ + "shape":"String", + "documentation":"

An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by MaxRecords.

" + }, + "BucketName":{ + "shape":"String", + "documentation":"

The Amazon S3 bucket where the task assessment report is located.

" + }, + "ReplicationTaskAssessmentResults":{ + "shape":"ReplicationTaskAssessmentResultList", + "documentation":"

The task assessment report.

" + } + }, + "documentation":"

" + }, "DescribeReplicationTasksMessage":{ "type":"structure", "members":{ @@ -1638,11 +1693,15 @@ }, "MaxRecords":{ "shape":"IntegerOptional", - "documentation":"

The maximum number of records to include in the response. If more records exist than the specified MaxRecords value, a pagination token called a marker is included in the response so that the remaining results can be retrieved.

Default: 100

Constraints: Minimum 20, maximum 100.

" + "documentation":"

The maximum number of records to include in the response. If more records exist than the specified MaxRecords value, a pagination token called a marker is included in the response so that the remaining results can be retrieved.

Default: 100

Constraints: Minimum 20, maximum 500.

" }, "Marker":{ "shape":"String", "documentation":"

An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by MaxRecords.

" + }, + "Filters":{ + "shape":"FilterList", + "documentation":"

Filters applied to the describe table statistics action.

Valid filter names: schema-name | table-name | table-state

A combination of filters creates an AND condition where each record matches all specified filters.

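A sketch of the new table-statistics filters, assuming standard generated names for DescribeTableStatisticsRequest and Filter; the task ARN is a placeholder and the two filters illustrate the AND semantics described above.

```java
// Hedged sketch: filter table statistics by schema name and table state.
import software.amazon.awssdk.services.databasemigration.DatabaseMigrationClient;
import software.amazon.awssdk.services.databasemigration.model.DescribeTableStatisticsRequest;
import software.amazon.awssdk.services.databasemigration.model.Filter;

public class FilteredTableStatistics {
    public static void main(String[] args) {
        try (DatabaseMigrationClient dms = DatabaseMigrationClient.builder().build()) {
            dms.describeTableStatistics(DescribeTableStatisticsRequest.builder()
                    .replicationTaskArn("arn:aws:dms:us-east-1:123456789012:task:EXAMPLE")   // placeholder
                    // Filters are combined with AND: schema-name AND table-state here.
                    .filters(Filter.builder().name("schema-name").values("hr").build(),
                             Filter.builder().name("table-state").values("Table completed").build())
                    .maxRecords(100)
                    .build())
                    .tableStatistics()
                    .forEach(ts -> System.out.println(ts.tableName() + ": " + ts.tableState()));
        }
    }
}
```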
" } }, "documentation":"

" @@ -1761,10 +1820,7 @@ }, "EndpointList":{ "type":"list", - "member":{ - "shape":"Endpoint", - "locationName":"Endpoint" - } + "member":{"shape":"Endpoint"} }, "Event":{ "type":"structure", @@ -1794,10 +1850,7 @@ }, "EventCategoriesList":{ "type":"list", - "member":{ - "shape":"String", - "locationName":"EventCategory" - } + "member":{"shape":"String"} }, "EventCategoryGroup":{ "type":"structure", @@ -1815,17 +1868,11 @@ }, "EventCategoryGroupList":{ "type":"list", - "member":{ - "shape":"EventCategoryGroup", - "locationName":"EventCategoryGroup" - } + "member":{"shape":"EventCategoryGroup"} }, "EventList":{ "type":"list", - "member":{ - "shape":"Event", - "locationName":"Event" - } + "member":{"shape":"Event"} }, "EventSubscription":{ "type":"structure", @@ -1871,10 +1918,7 @@ }, "EventSubscriptionsList":{ "type":"list", - "member":{ - "shape":"EventSubscription", - "locationName":"EventSubscription" - } + "member":{"shape":"EventSubscription"} }, "ExceptionMessage":{"type":"string"}, "Filter":{ @@ -1897,17 +1941,11 @@ }, "FilterList":{ "type":"list", - "member":{ - "shape":"Filter", - "locationName":"Filter" - } + "member":{"shape":"Filter"} }, "FilterValueList":{ "type":"list", - "member":{ - "shape":"String", - "locationName":"Value" - } + "member":{"shape":"String"} }, "ImportCertificateMessage":{ "type":"structure", @@ -2070,7 +2108,7 @@ }, "ExtraConnectionAttributes":{ "shape":"String", - "documentation":"

Additional attributes associated with the connection.

" + "documentation":"

Additional attributes associated with the connection. To reset this parameter, pass the empty string (\"\") as an argument.

" }, "CertificateArn":{ "shape":"String", @@ -2369,10 +2407,7 @@ }, "OrderableReplicationInstanceList":{ "type":"list", - "member":{ - "shape":"OrderableReplicationInstance", - "locationName":"OrderableReplicationInstance" - } + "member":{"shape":"OrderableReplicationInstance"} }, "RefreshSchemasMessage":{ "type":"structure", @@ -2587,10 +2622,7 @@ }, "ReplicationInstanceList":{ "type":"list", - "member":{ - "shape":"ReplicationInstance", - "locationName":"ReplicationInstance" - } + "member":{"shape":"ReplicationInstance"} }, "ReplicationInstancePrivateIpAddressList":{ "type":"list", @@ -2661,10 +2693,7 @@ }, "ReplicationSubnetGroups":{ "type":"list", - "member":{ - "shape":"ReplicationSubnetGroup", - "locationName":"ReplicationSubnetGroup" - } + "member":{"shape":"ReplicationSubnetGroup"} }, "ReplicationTask":{ "type":"structure", @@ -2728,12 +2757,47 @@ }, "documentation":"

" }, + "ReplicationTaskAssessmentResult":{ + "type":"structure", + "members":{ + "ReplicationTaskIdentifier":{ + "shape":"String", + "documentation":"

The replication task identifier of the task on which the task assessment was run.

" + }, + "ReplicationTaskArn":{ + "shape":"String", + "documentation":"

The Amazon Resource Name (ARN) of the replication task.

" + }, + "ReplicationTaskLastAssessmentDate":{ + "shape":"TStamp", + "documentation":"

The date the task assessment was completed.

" + }, + "AssessmentStatus":{ + "shape":"String", + "documentation":"

The status of the task assessment.

" + }, + "AssessmentResultsFile":{ + "shape":"String", + "documentation":"

The file containing the results of the task assessment.

" + }, + "AssessmentResults":{ + "shape":"String", + "documentation":"

The task assessment results in JSON format.

" + }, + "S3ObjectUrl":{ + "shape":"String", + "documentation":"

The URL of the S3 object containing the task assessment results.

" + } + }, + "documentation":"

The task assessment report in JSON format.

" + }, + "ReplicationTaskAssessmentResultList":{ + "type":"list", + "member":{"shape":"ReplicationTaskAssessmentResult"} + }, "ReplicationTaskList":{ "type":"list", - "member":{ - "shape":"ReplicationTask", - "locationName":"ReplicationTask" - } + "member":{"shape":"ReplicationTask"} }, "ReplicationTaskStats":{ "type":"structure", @@ -2864,15 +2928,33 @@ }, "SourceIdsList":{ "type":"list", - "member":{ - "shape":"String", - "locationName":"SourceId" - } + "member":{"shape":"String"} }, "SourceType":{ "type":"string", "enum":["replication-instance"] }, + "StartReplicationTaskAssessmentMessage":{ + "type":"structure", + "required":["ReplicationTaskArn"], + "members":{ + "ReplicationTaskArn":{ + "shape":"String", + "documentation":"

The Amazon Resource Name (ARN) of the replication task.

" + } + }, + "documentation":"

" + }, + "StartReplicationTaskAssessmentResponse":{ + "type":"structure", + "members":{ + "ReplicationTask":{ + "shape":"ReplicationTask", + "documentation":"

The assessed replication task.

" + } + }, + "documentation":"

" + }, "StartReplicationTaskMessage":{ "type":"structure", "required":[ @@ -2882,7 +2964,7 @@ "members":{ "ReplicationTaskArn":{ "shape":"String", - "documentation":"

The Amazon Resource Number (ARN) of the replication task to be started.

" + "documentation":"

The Amazon Resource Name (ARN) of the replication task to be started.

" }, "StartReplicationTaskType":{ "shape":"StartReplicationTaskTypeValue", @@ -2919,7 +3001,7 @@ "members":{ "ReplicationTaskArn":{ "shape":"String", - "documentation":"

The Amazon Resource Number(ARN) of the replication task to be stopped.

" + "documentation":"

The Amazon Resource Name (ARN) of the replication task to be stopped.

" } }, "documentation":"

" @@ -2977,17 +3059,11 @@ }, "SubnetIdentifierList":{ "type":"list", - "member":{ - "shape":"String", - "locationName":"SubnetIdentifier" - } + "member":{"shape":"String"} }, "SubnetList":{ "type":"list", - "member":{ - "shape":"Subnet", - "locationName":"Subnet" - } + "member":{"shape":"Subnet"} }, "SupportedEndpointType":{ "type":"structure", @@ -3009,10 +3085,7 @@ }, "SupportedEndpointTypeList":{ "type":"list", - "member":{ - "shape":"SupportedEndpointType", - "locationName":"SupportedEndpointType" - } + "member":{"shape":"SupportedEndpointType"} }, "TStamp":{"type":"timestamp"}, "TableListToReload":{ @@ -3064,7 +3137,23 @@ }, "TableState":{ "shape":"String", - "documentation":"

The state of the table.

" + "documentation":"

The state of the tables described.

Valid states: Table does not exist | Before load | Full load | Table completed | Table cancelled | Table error | Table all | Table updates | Table is being reloaded

" + }, + "ValidationPendingRecords":{ + "shape":"Long", + "documentation":"

The number of records that have yet to be validated.

" + }, + "ValidationFailedRecords":{ + "shape":"Long", + "documentation":"

The number of records that failed validation.

" + }, + "ValidationSuspendedRecords":{ + "shape":"Long", + "documentation":"

The number of records that could not be validated.

" + }, + "ValidationState":{ + "shape":"String", + "documentation":"

The validation state of the table.

The parameter can have the following values

" } }, "documentation":"

" @@ -3103,10 +3192,7 @@ }, "TagList":{ "type":"list", - "member":{ - "shape":"Tag", - "locationName":"Tag" - } + "member":{"shape":"Tag"} }, "TestConnectionMessage":{ "type":"structure", @@ -3149,10 +3235,7 @@ }, "VpcSecurityGroupIdList":{ "type":"list", - "member":{ - "shape":"String", - "locationName":"VpcSecurityGroupId" - } + "member":{"shape":"String"} }, "VpcSecurityGroupMembership":{ "type":"structure", @@ -3170,10 +3253,7 @@ }, "VpcSecurityGroupMembershipList":{ "type":"list", - "member":{ - "shape":"VpcSecurityGroupMembership", - "locationName":"VpcSecurityGroupMembership" - } + "member":{"shape":"VpcSecurityGroupMembership"} } }, "documentation":"AWS Database Migration Service

AWS Database Migration Service (AWS DMS) can migrate your data to and from the most widely used commercial and open-source databases such as Oracle, PostgreSQL, Microsoft SQL Server, Amazon Redshift, MariaDB, Amazon Aurora, MySQL, and SAP Adaptive Server Enterprise (ASE). The service supports homogeneous migrations such as Oracle to Oracle, as well as heterogeneous migrations between different database platforms, such as Oracle to MySQL or SQL Server to PostgreSQL.

For more information about AWS DMS, see the AWS DMS user guide at What Is AWS Database Migration Service?

" diff --git a/services/dynamodb/src/main/resources/codegen-resources/dynamodb/examples-1.json b/services/dynamodb/src/main/resources/codegen-resources/dynamodb/examples-1.json index e66e704b0529..5b6ad0f624ee 100644 --- a/services/dynamodb/src/main/resources/codegen-resources/dynamodb/examples-1.json +++ b/services/dynamodb/src/main/resources/codegen-resources/dynamodb/examples-1.json @@ -548,13 +548,16 @@ "output": { "Attributes": { "AlbumTitle": { - "S": "Songs About Life" + "S": "Louder Than Ever" }, "Artist": { "S": "Acme Band" }, "SongTitle": { "S": "Happy Day" + }, + "Year": { + "N": "2015" } } }, diff --git a/services/dynamodb/src/main/resources/codegen-resources/dynamodb/service-2.json b/services/dynamodb/src/main/resources/codegen-resources/dynamodb/service-2.json index eeeac573a3c7..60a87e48a075 100644 --- a/services/dynamodb/src/main/resources/codegen-resources/dynamodb/service-2.json +++ b/services/dynamodb/src/main/resources/codegen-resources/dynamodb/service-2.json @@ -189,7 +189,7 @@ {"shape":"ItemCollectionSizeLimitExceededException"}, {"shape":"InternalServerError"} ], - "documentation":"

Creates a new item, or replaces an old item with a new item. If an item that has the same primary key as the new item already exists in the specified table, the new item completely replaces the existing item. You can perform a conditional put operation (add a new item if one with the specified primary key doesn't exist), or replace an existing item if it has certain attribute values.

In addition to putting an item, you can also return the item's attribute values in the same operation, using the ReturnValues parameter.

When you add an item, the primary key attribute(s) are the only required attributes. Attribute values cannot be null. String and Binary type attributes must have lengths greater than zero. Set type attributes cannot be empty. Requests with empty values will be rejected with a ValidationException exception.

To prevent a new item from replacing an existing item, use a conditional expression that contains the attribute_not_exists function with the name of the attribute being used as the partition key for the table. Since every record must contain that attribute, the attribute_not_exists function will only succeed if no matching item exists.

For more information about PutItem, see Working with Items in the Amazon DynamoDB Developer Guide.

" + "documentation":"

Creates a new item, or replaces an old item with a new item. If an item that has the same primary key as the new item already exists in the specified table, the new item completely replaces the existing item. You can perform a conditional put operation (add a new item if one with the specified primary key doesn't exist), or replace an existing item if it has certain attribute values. You can return the item's attribute values in the same operation, using the ReturnValues parameter.

This topic provides general information about the PutItem API.

For information on how to call the PutItem API using the AWS SDK in specific languages, see the following:

When you add an item, the primary key attribute(s) are the only required attributes. Attribute values cannot be null. String and Binary type attributes must have lengths greater than zero. Set type attributes cannot be empty. Requests with empty values will be rejected with a ValidationException exception.

To prevent a new item from replacing an existing item, use a conditional expression that contains the attribute_not_exists function with the name of the attribute being used as the partition key for the table. Since every record must contain that attribute, the attribute_not_exists function will only succeed if no matching item exists.

For more information about PutItem, see Working with Items in the Amazon DynamoDB Developer Guide.
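For illustration only (not part of the service model): a minimal sketch of the conditional put described above, using the generated AWS SDK for Java v2 DynamoDB client. The table and attribute names are hypothetical.

```java
import java.util.Map;

import software.amazon.awssdk.services.dynamodb.DynamoDbClient;
import software.amazon.awssdk.services.dynamodb.model.AttributeValue;
import software.amazon.awssdk.services.dynamodb.model.ConditionalCheckFailedException;
import software.amazon.awssdk.services.dynamodb.model.PutItemRequest;

public class ConditionalPutSketch {
    public static void main(String[] args) {
        try (DynamoDbClient ddb = DynamoDbClient.create()) {
            Map<String, AttributeValue> item = Map.of(
                    "Artist", AttributeValue.builder().s("Acme Band").build(),
                    "SongTitle", AttributeValue.builder().s("Happy Day").build(),
                    "AlbumTitle", AttributeValue.builder().s("Louder Than Ever").build());
            try {
                // attribute_not_exists on the partition key rejects the put if an item
                // with this key already exists, instead of silently replacing it.
                ddb.putItem(PutItemRequest.builder()
                        .tableName("Music")                       // hypothetical table
                        .item(item)
                        .conditionExpression("attribute_not_exists(Artist)")
                        .build());
            } catch (ConditionalCheckFailedException e) {
                System.out.println("Item already exists; not replaced.");
            }
        }
    }
}
```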

" }, "Query":{ "name":"Query", @@ -204,7 +204,7 @@ {"shape":"ResourceNotFoundException"}, {"shape":"InternalServerError"} ], - "documentation":"

A Query operation uses the primary key of a table or a secondary index to directly access items from that table or index.

Use the KeyConditionExpression parameter to provide a specific value for the partition key. The Query operation will return all of the items from the table or index with that partition key value. You can optionally narrow the scope of the Query operation by specifying a sort key value and a comparison operator in KeyConditionExpression. You can use the ScanIndexForward parameter to get results in forward or reverse order, by sort key.

Queries that do not return results consume the minimum number of read capacity units for that type of read operation.

If the total number of items meeting the query criteria exceeds the result set size limit of 1 MB, the query stops and results are returned to the user with the LastEvaluatedKey element to continue the query in a subsequent operation. Unlike a Scan operation, a Query operation never returns both an empty result set and a LastEvaluatedKey value. LastEvaluatedKey is only provided if you have used the Limit parameter, or if the result set exceeds 1 MB (prior to applying a filter).

You can query a table, a local secondary index, or a global secondary index. For a query on a table or on a local secondary index, you can set the ConsistentRead parameter to true and obtain a strongly consistent result. Global secondary indexes support eventually consistent reads only, so do not specify ConsistentRead when querying a global secondary index.

" + "documentation":"

The Query operation finds items based on primary key values. You can query any table or secondary index that has a composite primary key (a partition key and a sort key).

Use the KeyConditionExpression parameter to provide a specific value for the partition key. The Query operation will return all of the items from the table or index with that partition key value. You can optionally narrow the scope of the Query operation by specifying a sort key value and a comparison operator in KeyConditionExpression. To further refine the Query results, you can optionally provide a FilterExpression. A FilterExpression determines which items within the results should be returned to you. All of the other results are discarded.

A Query operation always returns a result set. If no matching items are found, the result set will be empty. Queries that do not return results consume the minimum number of read capacity units for that type of read operation.

DynamoDB calculates the number of read capacity units consumed based on item size, not on the amount of data that is returned to an application. The number of capacity units consumed will be the same whether you request all of the attributes (the default behavior) or just some of them (using a projection expression). The number will also be the same whether or not you use a FilterExpression.

Query results are always sorted by the sort key value. If the data type of the sort key is Number, the results are returned in numeric order; otherwise, the results are returned in order of UTF-8 bytes. By default, the sort order is ascending. To reverse the order, set the ScanIndexForward parameter to false.

A single Query operation will read up to the maximum number of items set (if using the Limit parameter) or a maximum of 1 MB of data and then apply any filtering to the results using FilterExpression. If LastEvaluatedKey is present in the response, you will need to paginate the result set. For more information, see Paginating the Results in the Amazon DynamoDB Developer Guide.

FilterExpression is applied after a Query finishes, but before the results are returned. A FilterExpression cannot contain partition key or sort key attributes. You need to specify those attributes in the KeyConditionExpression.

A Query operation can return an empty result set and a LastEvaluatedKey if all the items read for the page of results are filtered out.

You can query a table, a local secondary index, or a global secondary index. For a query on a table or on a local secondary index, you can set the ConsistentRead parameter to true and obtain a strongly consistent result. Global secondary indexes support eventually consistent reads only, so do not specify ConsistentRead when querying a global secondary index.
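As an illustrative sketch only (not part of the service model): a query that combines KeyConditionExpression, a FilterExpression applied after the read, descending sort order, and pagination through LastEvaluatedKey, using the generated AWS SDK for Java v2 client. The Music table and its Artist, SongTitle, and Year attributes are hypothetical.

```java
import java.util.Map;

import software.amazon.awssdk.services.dynamodb.DynamoDbClient;
import software.amazon.awssdk.services.dynamodb.model.AttributeValue;
import software.amazon.awssdk.services.dynamodb.model.QueryRequest;
import software.amazon.awssdk.services.dynamodb.model.QueryResponse;

public class QuerySketch {
    public static void main(String[] args) {
        try (DynamoDbClient ddb = DynamoDbClient.create()) {
            Map<String, AttributeValue> startKey = null;
            do {
                QueryRequest.Builder builder = QueryRequest.builder()
                        .tableName("Music")                                  // hypothetical table
                        .keyConditionExpression("Artist = :artist")          // partition key condition
                        .filterExpression("#year >= :year")                  // applied after the read
                        .expressionAttributeNames(Map.of("#year", "Year"))   // Year is a reserved word
                        .expressionAttributeValues(Map.of(
                                ":artist", AttributeValue.builder().s("Acme Band").build(),
                                ":year", AttributeValue.builder().n("2015").build()))
                        .scanIndexForward(false);                            // descending sort key order
                if (startKey != null && !startKey.isEmpty()) {
                    builder.exclusiveStartKey(startKey);                     // continue past a 1 MB page
                }
                QueryResponse page = ddb.query(builder.build());
                page.items().forEach(System.out::println);
                startKey = page.lastEvaluatedKey();
            } while (startKey != null && !startKey.isEmpty());
        }
    }
}
```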

" }, "Scan":{ "name":"Scan", @@ -219,7 +219,7 @@ {"shape":"ResourceNotFoundException"}, {"shape":"InternalServerError"} ], - "documentation":"

The Scan operation returns one or more items and item attributes by accessing every item in a table or a secondary index. To have DynamoDB return fewer items, you can provide a FilterExpression operation.

If the total number of scanned items exceeds the maximum data set size limit of 1 MB, the scan stops and results are returned to the user as a LastEvaluatedKey value to continue the scan in a subsequent operation. The results also include the number of items exceeding the limit. A scan can result in no table data meeting the filter criteria.

By default, Scan operations proceed sequentially; however, for faster performance on a large table or secondary index, applications can request a parallel Scan operation by providing the Segment and TotalSegments parameters. For more information, see Parallel Scan in the Amazon DynamoDB Developer Guide.

By default, Scan uses eventually consistent reads when accessing the data in a table; therefore, the result set might not include the changes to data in the table immediately before the operation began. If you need a consistent copy of the data, as of the time that the Scan begins, you can set the ConsistentRead parameter to true.

" + "documentation":"

The Scan operation returns one or more items and item attributes by accessing every item in a table or a secondary index. To have DynamoDB return fewer items, you can provide a FilterExpression operation.

If the total number of scanned items exceeds the maximum data set size limit of 1 MB, the scan stops and results are returned to the user as a LastEvaluatedKey value to continue the scan in a subsequent operation. The results also include the number of items exceeding the limit. A scan can result in no table data meeting the filter criteria.

A single Scan operation will read up to the maximum number of items set (if using the Limit parameter) or a maximum of 1 MB of data and then apply any filtering to the results using FilterExpression. If LastEvaluatedKey is present in the response, you will need to paginate the result set. For more information, see Paginating the Results in the Amazon DynamoDB Developer Guide.

Scan operations proceed sequentially; however, for faster performance on a large table or secondary index, applications can request a parallel Scan operation by providing the Segment and TotalSegments parameters. For more information, see Parallel Scan in the Amazon DynamoDB Developer Guide.

Scan uses eventually consistent reads when accessing the data in a table; therefore, the result set might not include the changes to data in the table immediately before the operation began. If you need a consistent copy of the data, as of the time that the Scan begins, you can set the ConsistentRead parameter to true.
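For illustration only (not part of the service model): a minimal parallel Scan sketch with the generated AWS SDK for Java v2 client, in which each worker reads one disjoint segment by passing Segment and TotalSegments. The table name is hypothetical.

```java
import software.amazon.awssdk.services.dynamodb.DynamoDbClient;
import software.amazon.awssdk.services.dynamodb.model.ScanRequest;
import software.amazon.awssdk.services.dynamodb.model.ScanResponse;

public class ParallelScanSketch {
    public static void main(String[] args) throws InterruptedException {
        int totalSegments = 4;
        try (DynamoDbClient ddb = DynamoDbClient.create()) {
            Thread[] workers = new Thread[totalSegments];
            for (int segment = 0; segment < totalSegments; segment++) {
                final int seg = segment;
                workers[seg] = new Thread(() -> {
                    // Each worker scans only its own segment of the table.
                    ScanResponse response = ddb.scan(ScanRequest.builder()
                            .tableName("Music")            // hypothetical table
                            .segment(seg)
                            .totalSegments(totalSegments)
                            .build());
                    System.out.println("Segment " + seg + " read " + response.count() + " items");
                });
                workers[seg].start();
            }
            for (Thread t : workers) {
                t.join();
            }
        }
    }
}
```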

" }, "TagResource":{ "name":"TagResource", @@ -298,7 +298,7 @@ {"shape":"LimitExceededException"}, {"shape":"InternalServerError"} ], - "documentation":"

Specify the lifetime of individual table items. The database automatically removes the item at the expiration of the item. The UpdateTimeToLive method will enable or disable TTL for the specified table. A successful UpdateTimeToLive call returns the current TimeToLiveSpecification; it may take up to one hour for the change to fully process.

TTL compares the current time in epoch time format to the time stored in the TTL attribute of an item. If the epoch time value stored in the attribute is less than the current time, the item is marked as expired and subsequently deleted.

The epoch time format is the number of seconds elapsed since 12:00:00 AM January 1st, 1970 UTC.

DynamoDB deletes expired items on a best-effort basis to ensure availability of throughput for other data operations.

DynamoDB typically deletes expired items within two days of expiration. The exact duration within which an item gets deleted after expiration is specific to the nature of the workload. Items that have expired and not been deleted will still show up in reads, queries, and scans.

As items are deleted, they are removed from any Local Secondary Index and Global Secondary Index immediately in the same eventually consistent way as a standard delete operation.

For more information, see Time To Live in the Amazon DynamoDB Developer Guide.

" + "documentation":"

The UpdateTimeToLive method will enable or disable TTL for the specified table. A successful UpdateTimeToLive call returns the current TimeToLiveSpecification; it may take up to one hour for the change to fully process. Any additional UpdateTimeToLive calls for the same table during this one hour duration result in a ValidationException.

TTL compares the current time in epoch time format to the time stored in the TTL attribute of an item. If the epoch time value stored in the attribute is less than the current time, the item is marked as expired and subsequently deleted.

The epoch time format is the number of seconds elapsed since 12:00:00 AM January 1st, 1970 UTC.

DynamoDB deletes expired items on a best-effort basis to ensure availability of throughput for other data operations.

DynamoDB typically deletes expired items within two days of expiration. The exact duration within which an item gets deleted after expiration is specific to the nature of the workload. Items that have expired and not been deleted will still show up in reads, queries, and scans.

As items are deleted, they are removed from any Local Secondary Index and Global Secondary Index immediately in the same eventually consistent way as a standard delete operation.

For more information, see Time To Live in the Amazon DynamoDB Developer Guide.
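For illustration only (not part of the service model): a minimal sketch of enabling TTL on a table with the generated AWS SDK for Java v2 client. The table name and the attribute that holds the epoch-seconds expiration time are hypothetical.

```java
import software.amazon.awssdk.services.dynamodb.DynamoDbClient;
import software.amazon.awssdk.services.dynamodb.model.TimeToLiveSpecification;
import software.amazon.awssdk.services.dynamodb.model.UpdateTimeToLiveRequest;
import software.amazon.awssdk.services.dynamodb.model.UpdateTimeToLiveResponse;

public class EnableTtlSketch {
    public static void main(String[] args) {
        try (DynamoDbClient ddb = DynamoDbClient.create()) {
            UpdateTimeToLiveResponse response = ddb.updateTimeToLive(UpdateTimeToLiveRequest.builder()
                    .tableName("SessionData")                      // hypothetical table
                    .timeToLiveSpecification(TimeToLiveSpecification.builder()
                            .attributeName("ExpirationTime")       // attribute holding epoch seconds
                            .enabled(true)
                            .build())
                    .build());
            // The call returns the specification that is now in effect;
            // the change itself can take up to an hour to fully propagate.
            System.out.println(response.timeToLiveSpecification());
        }
    }
}
```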

" } }, "shapes":{ @@ -464,7 +464,7 @@ "members":{ "RequestItems":{ "shape":"BatchWriteItemRequestMap", - "documentation":"

A map of one or more table names and, for each table, a list of operations to be performed (DeleteRequest or PutRequest). Each element in the map consists of the following:

" + "documentation":"

A map of one or more table names and, for each table, a list of operations to be performed (DeleteRequest or PutRequest). Each element in the map consists of the following:

" }, "ReturnConsumedCapacity":{"shape":"ReturnConsumedCapacity"}, "ReturnItemCollectionMetrics":{ @@ -707,7 +707,7 @@ }, "Expected":{ "shape":"ExpectedAttributeMap", - "documentation":"

This is a legacy parameter. Use ConditionExpresssion instead. For more information, see Expected in the Amazon DynamoDB Developer Guide.

" + "documentation":"

This is a legacy parameter. Use ConditionExpression instead. For more information, see Expected in the Amazon DynamoDB Developer Guide.

" }, "ConditionalOperator":{ "shape":"ConditionalOperator", @@ -1468,7 +1468,7 @@ }, "Expected":{ "shape":"ExpectedAttributeMap", - "documentation":"

This is a legacy parameter. Use ConditionExpresssion instead. For more information, see Expected in the Amazon DynamoDB Developer Guide.

" + "documentation":"

This is a legacy parameter. Use ConditionExpression instead. For more information, see Expected in the Amazon DynamoDB Developer Guide.

" }, "ReturnValues":{ "shape":"ReturnValue", @@ -2082,7 +2082,7 @@ }, "Expected":{ "shape":"ExpectedAttributeMap", - "documentation":"

This is a legacy parameter. Use ConditionExpresssion instead. For more information, see Expected in the Amazon DynamoDB Developer Guide.

" + "documentation":"

This is a legacy parameter. Use ConditionExpression instead. For more information, see Expected in the Amazon DynamoDB Developer Guide.

" }, "ConditionalOperator":{ "shape":"ConditionalOperator", @@ -2090,7 +2090,7 @@ }, "ReturnValues":{ "shape":"ReturnValue", - "documentation":"

Use ReturnValues if you want to get the item attributes as they appeared either before or after they were updated. For UpdateItem, the valid values are:

There is no additional cost associated with requesting a return value aside from the small network and processing overhead of receiving a larger response. No Read Capacity Units are consumed.

Values returned are strongly consistent

" + "documentation":"

Use ReturnValues if you want to get the item attributes as they appear before or after they are updated. For UpdateItem, the valid values are:

There is no additional cost associated with requesting a return value aside from the small network and processing overhead of receiving a larger response. No read capacity units are consumed.

The values returned are strongly consistent.
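As an illustrative sketch only (not part of the service model): an UpdateItem call that asks for the pre-update attribute values via ReturnValues, using the generated AWS SDK for Java v2 client. The table, key, and attribute names are hypothetical.

```java
import java.util.Map;

import software.amazon.awssdk.services.dynamodb.DynamoDbClient;
import software.amazon.awssdk.services.dynamodb.model.AttributeValue;
import software.amazon.awssdk.services.dynamodb.model.ReturnValue;
import software.amazon.awssdk.services.dynamodb.model.UpdateItemRequest;
import software.amazon.awssdk.services.dynamodb.model.UpdateItemResponse;

public class UpdateWithReturnValuesSketch {
    public static void main(String[] args) {
        try (DynamoDbClient ddb = DynamoDbClient.create()) {
            UpdateItemResponse response = ddb.updateItem(UpdateItemRequest.builder()
                    .tableName("Music")                            // hypothetical table
                    .key(Map.of(
                            "Artist", AttributeValue.builder().s("Acme Band").build(),
                            "SongTitle", AttributeValue.builder().s("Happy Day").build()))
                    .updateExpression("SET AlbumTitle = :title")
                    .expressionAttributeValues(Map.of(
                            ":title", AttributeValue.builder().s("Louder Than Ever").build()))
                    .returnValues(ReturnValue.ALL_OLD)             // attributes as they were before the update
                    .build());
            // Attributes is only populated because ReturnValues was not NONE.
            System.out.println(response.attributes());
        }
    }
}
```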

" }, "ReturnConsumedCapacity":{"shape":"ReturnConsumedCapacity"}, "ReturnItemCollectionMetrics":{ @@ -2121,7 +2121,7 @@ "members":{ "Attributes":{ "shape":"AttributeMap", - "documentation":"

A map of attribute values as they appeared before the UpdateItem operation. This map only appears if ReturnValues was specified as something other than NONE in the request. Each element represents one attribute.

" + "documentation":"

A map of attribute values as they appear before or after the UpdateItem operation, as determined by the ReturnValues parameter.

The Attributes map is only present if ReturnValues was specified as something other than NONE in the request. Each element represents one attribute.

" }, "ConsumedCapacity":{ "shape":"ConsumedCapacity", diff --git a/services/ec2/src/main/resources/codegen-resources/customization.config b/services/ec2/src/main/resources/codegen-resources/customization.config index 22e77537995d..3a7e76e8c4eb 100644 --- a/services/ec2/src/main/resources/codegen-resources/customization.config +++ b/services/ec2/src/main/resources/codegen-resources/customization.config @@ -625,6 +625,51 @@ } ] }, + "DeleteFpgaImageResult": { + "modify": [ + { + "Return": { + "emitPropertyName": "ReturnValue" + } + } + ] + }, + "DeleteNetworkInterfacePermissionResult": { + "modify": [ + { + "Return": { + "emitPropertyName": "ReturnValue" + } + } + ] + }, + "ResetFpgaImageAttributeResult": { + "modify": [ + { + "Return": { + "emitPropertyName": "ReturnValue" + } + } + ] + }, + "UpdateSecurityGroupRuleDescriptionsEgressResult": { + "modify": [ + { + "Return": { + "emitPropertyName": "ReturnValue" + } + } + ] + }, + "UpdateSecurityGroupRuleDescriptionsIngressResult": { + "modify": [ + { + "Return": { + "emitPropertyName": "ReturnValue" + } + } + ] + }, "Image": { "modify": [ { @@ -633,8 +678,16 @@ } } ] + }, + "FpgaImage": { + "modify": [ + { + "Public": { + "emitPropertyName": "isPublic" + } + } + ] } - }, "blacklistedSimpleMethods" : [ "acceptVpcPeeringConnection", @@ -657,6 +710,7 @@ "deleteSpotDatafeedSubscription", "describeFpgaImages", "describeReservedInstancesListings", - "describeSpotDatafeedSubscription" + "describeSpotDatafeedSubscription", + "createDefaultVpc" ] } diff --git a/services/ec2/src/main/resources/codegen-resources/service-2.json b/services/ec2/src/main/resources/codegen-resources/service-2.json index 7c30780aa639..cdae56de9dc0 100644 --- a/services/ec2/src/main/resources/codegen-resources/service-2.json +++ b/services/ec2/src/main/resources/codegen-resources/service-2.json @@ -39,7 +39,7 @@ }, "input":{"shape":"AllocateAddressRequest"}, "output":{"shape":"AllocateAddressResult"}, - "documentation":"

Acquires an Elastic IP address.

An Elastic IP address is for use either in the EC2-Classic platform or in a VPC. For more information, see Elastic IP Addresses in the Amazon Elastic Compute Cloud User Guide.

" + "documentation":"

Allocates an Elastic IP address.

An Elastic IP address is for use either in the EC2-Classic platform or in a VPC. By default, you can allocate 5 Elastic IP addresses for EC2-Classic per region and 5 Elastic IP addresses for EC2-VPC per region.

If you release an Elastic IP address for use in a VPC, you might be able to recover it. To recover an Elastic IP address that you released, specify it in the Address parameter. Note that you cannot recover an Elastic IP address that you released after it is allocated to another AWS account.

For more information, see Elastic IP Addresses in the Amazon Elastic Compute Cloud User Guide.
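For illustration only (not part of the service model): a minimal sketch of allocating a VPC Elastic IP address, and of attempting to recover a previously released address through the Address parameter, using the generated AWS SDK for Java v2 client. The IP address shown is a documentation placeholder.

```java
import software.amazon.awssdk.services.ec2.Ec2Client;
import software.amazon.awssdk.services.ec2.model.AllocateAddressRequest;
import software.amazon.awssdk.services.ec2.model.AllocateAddressResponse;
import software.amazon.awssdk.services.ec2.model.DomainType;

public class AllocateAddressSketch {
    public static void main(String[] args) {
        try (Ec2Client ec2 = Ec2Client.create()) {
            // Allocate a new address for use in a VPC.
            AllocateAddressResponse allocated = ec2.allocateAddress(
                    AllocateAddressRequest.builder().domain(DomainType.VPC).build());
            System.out.println(allocated.publicIp() + " / " + allocated.allocationId());

            // To try to recover a previously released VPC address, pass it in the Address parameter.
            ec2.allocateAddress(AllocateAddressRequest.builder()
                    .domain(DomainType.VPC)
                    .address("203.0.113.25")      // placeholder address you released earlier
                    .build());
        }
    }
}
```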

" }, "AllocateHosts":{ "name":"AllocateHosts", @@ -127,7 +127,7 @@ }, "input":{"shape":"AssociateVpcCidrBlockRequest"}, "output":{"shape":"AssociateVpcCidrBlockResult"}, - "documentation":"

Associates a CIDR block with your VPC. You can only associate a single Amazon-provided IPv6 CIDR block with your VPC. The IPv6 CIDR block size is fixed at /56.

" + "documentation":"

Associates a CIDR block with your VPC. You can associate a secondary IPv4 CIDR block, or you can associate an Amazon-provided IPv6 CIDR block. The IPv6 CIDR block size is fixed at /56.

For more information about associating CIDR blocks with your VPC and applicable restrictions, see VPC and Subnet Sizing in the Amazon Virtual Private Cloud User Guide.
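As an illustrative sketch only (not part of the service model): associating either a secondary IPv4 CIDR block or an Amazon-provided IPv6 block with a VPC, using the generated AWS SDK for Java v2 client. The VPC ID and CIDR range are placeholders.

```java
import software.amazon.awssdk.services.ec2.Ec2Client;
import software.amazon.awssdk.services.ec2.model.AssociateVpcCidrBlockRequest;

public class AssociateCidrSketch {
    public static void main(String[] args) {
        try (Ec2Client ec2 = Ec2Client.create()) {
            String vpcId = "vpc-0abc1234567890def";   // placeholder VPC ID

            // Associate a secondary IPv4 CIDR block with the VPC.
            ec2.associateVpcCidrBlock(AssociateVpcCidrBlockRequest.builder()
                    .vpcId(vpcId)
                    .cidrBlock("10.1.0.0/16")
                    .build());

            // Or request an Amazon-provided IPv6 CIDR block (fixed at /56) instead.
            ec2.associateVpcCidrBlock(AssociateVpcCidrBlockRequest.builder()
                    .vpcId(vpcId)
                    .amazonProvidedIpv6CidrBlock(true)
                    .build());
        }
    }
}
```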

" }, "AttachClassicLinkVpc":{ "name":"AttachClassicLinkVpc", @@ -176,7 +176,7 @@ }, "input":{"shape":"AttachVpnGatewayRequest"}, "output":{"shape":"AttachVpnGatewayResult"}, - "documentation":"

Attaches a virtual private gateway to a VPC. You can attach one virtual private gateway to one VPC at a time.

For more information, see Adding a Hardware Virtual Private Gateway to Your VPC in the Amazon Virtual Private Cloud User Guide.

" + "documentation":"

Attaches a virtual private gateway to a VPC. You can attach one virtual private gateway to one VPC at a time.

For more information, see AWS Managed VPN Connections in the Amazon Virtual Private Cloud User Guide.

" }, "AuthorizeSecurityGroupEgress":{ "name":"AuthorizeSecurityGroupEgress", @@ -185,7 +185,7 @@ "requestUri":"/" }, "input":{"shape":"AuthorizeSecurityGroupEgressRequest"}, - "documentation":"

[EC2-VPC only] Adds one or more egress rules to a security group for use with a VPC. Specifically, this action permits instances to send traffic to one or more destination IPv4 or IPv6 CIDR address ranges, or to one or more destination security groups for the same VPC. This action doesn't apply to security groups for use in EC2-Classic. For more information, see Security Groups for Your VPC in the Amazon Virtual Private Cloud User Guide. For more information about security group limits, see Amazon VPC Limits.

Each rule consists of the protocol (for example, TCP), plus either a CIDR range or a source group. For the TCP and UDP protocols, you must also specify the destination port or port range. For the ICMP protocol, you must also specify the ICMP type and code. You can use -1 for the type or code to mean all types or all codes.

Rule changes are propagated to affected instances as quickly as possible. However, a small delay might occur.

" + "documentation":"

[EC2-VPC only] Adds one or more egress rules to a security group for use with a VPC. Specifically, this action permits instances to send traffic to one or more destination IPv4 or IPv6 CIDR address ranges, or to one or more destination security groups for the same VPC. This action doesn't apply to security groups for use in EC2-Classic. For more information, see Security Groups for Your VPC in the Amazon Virtual Private Cloud User Guide. For more information about security group limits, see Amazon VPC Limits.

Each rule consists of the protocol (for example, TCP), plus either a CIDR range or a source group. For the TCP and UDP protocols, you must also specify the destination port or port range. For the ICMP protocol, you must also specify the ICMP type and code. You can use -1 for the type or code to mean all types or all codes. You can optionally specify a description for the rule.

Rule changes are propagated to affected instances as quickly as possible. However, a small delay might occur.

" }, "AuthorizeSecurityGroupIngress":{ "name":"AuthorizeSecurityGroupIngress", @@ -194,7 +194,7 @@ "requestUri":"/" }, "input":{"shape":"AuthorizeSecurityGroupIngressRequest"}, - "documentation":"

Adds one or more ingress rules to a security group.

Rule changes are propagated to instances within the security group as quickly as possible. However, a small delay might occur.

[EC2-Classic] This action gives one or more IPv4 CIDR address ranges permission to access a security group in your account, or gives one or more security groups (called the source groups) permission to access a security group for your account. A source group can be for your own AWS account, or another. You can have up to 100 rules per group.

[EC2-VPC] This action gives one or more IPv4 or IPv6 CIDR address ranges permission to access a security group in your VPC, or gives one or more other security groups (called the source groups) permission to access a security group for your VPC. The security groups must all be for the same VPC or a peer VPC in a VPC peering connection. For more information about VPC security group limits, see Amazon VPC Limits.

" + "documentation":"

Adds one or more ingress rules to a security group.

Rule changes are propagated to instances within the security group as quickly as possible. However, a small delay might occur.

[EC2-Classic] This action gives one or more IPv4 CIDR address ranges permission to access a security group in your account, or gives one or more security groups (called the source groups) permission to access a security group for your account. A source group can be for your own AWS account, or another. You can have up to 100 rules per group.

[EC2-VPC] This action gives one or more IPv4 or IPv6 CIDR address ranges permission to access a security group in your VPC, or gives one or more other security groups (called the source groups) permission to access a security group for your VPC. The security groups must all be for the same VPC or a peer VPC in a VPC peering connection. For more information about VPC security group limits, see Amazon VPC Limits.

You can optionally specify a description for the security group rule.
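For illustration only (not part of the service model): adding a single TCP ingress rule with a rule description, using the generated AWS SDK for Java v2 client. The security group ID and source CIDR range are placeholders.

```java
import software.amazon.awssdk.services.ec2.Ec2Client;
import software.amazon.awssdk.services.ec2.model.AuthorizeSecurityGroupIngressRequest;
import software.amazon.awssdk.services.ec2.model.IpPermission;
import software.amazon.awssdk.services.ec2.model.IpRange;

public class IngressRuleSketch {
    public static void main(String[] args) {
        try (Ec2Client ec2 = Ec2Client.create()) {
            ec2.authorizeSecurityGroupIngress(AuthorizeSecurityGroupIngressRequest.builder()
                    .groupId("sg-0abc1234567890def")               // placeholder security group ID
                    .ipPermissions(IpPermission.builder()
                            .ipProtocol("tcp")
                            .fromPort(22)
                            .toPort(22)
                            .ipRanges(IpRange.builder()
                                    .cidrIp("203.0.113.0/24")      // placeholder source range
                                    .description("SSH from the office network")
                                    .build())
                            .build())
                    .build());
        }
    }
}
```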

" }, "BundleInstance":{ "name":"BundleInstance", @@ -282,7 +282,17 @@ }, "input":{"shape":"ConfirmProductInstanceRequest"}, "output":{"shape":"ConfirmProductInstanceResult"}, - "documentation":"

Determines whether a product code is associated with an instance. This action can only be used by the owner of the product code. It is useful when a product code owner needs to verify whether another user's instance is eligible for support.

" + "documentation":"

Determines whether a product code is associated with an instance. This action can only be used by the owner of the product code. It is useful when a product code owner must verify whether another user's instance is eligible for support.

" + }, + "CopyFpgaImage":{ + "name":"CopyFpgaImage", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"CopyFpgaImageRequest"}, + "output":{"shape":"CopyFpgaImageResult"}, + "documentation":"

Copies the specified Amazon FPGA Image (AFI) to the current region.

" }, "CopyImage":{ "name":"CopyImage", @@ -312,7 +322,27 @@ }, "input":{"shape":"CreateCustomerGatewayRequest"}, "output":{"shape":"CreateCustomerGatewayResult"}, - "documentation":"

Provides information to AWS about your VPN customer gateway device. The customer gateway is the appliance at your end of the VPN connection. (The device on the AWS side of the VPN connection is the virtual private gateway.) You must provide the Internet-routable IP address of the customer gateway's external interface. The IP address must be static and may be behind a device performing network address translation (NAT).

For devices that use Border Gateway Protocol (BGP), you can also provide the device's BGP Autonomous System Number (ASN). You can use an existing ASN assigned to your network. If you don't have an ASN already, you can use a private ASN (in the 64512 - 65534 range).

Amazon EC2 supports all 2-byte ASN numbers in the range of 1 - 65534, with the exception of 7224, which is reserved in the us-east-1 region, and 9059, which is reserved in the eu-west-1 region.

For more information about VPN customer gateways, see Adding a Hardware Virtual Private Gateway to Your VPC in the Amazon Virtual Private Cloud User Guide.

You cannot create more than one customer gateway with the same VPN type, IP address, and BGP ASN parameter values. If you run an identical request more than one time, the first request creates the customer gateway, and subsequent requests return information about the existing customer gateway. The subsequent requests do not create new customer gateway resources.

" + "documentation":"

Provides information to AWS about your VPN customer gateway device. The customer gateway is the appliance at your end of the VPN connection. (The device on the AWS side of the VPN connection is the virtual private gateway.) You must provide the Internet-routable IP address of the customer gateway's external interface. The IP address must be static and may be behind a device performing network address translation (NAT).

For devices that use Border Gateway Protocol (BGP), you can also provide the device's BGP Autonomous System Number (ASN). You can use an existing ASN assigned to your network. If you don't have an ASN already, you can use a private ASN (in the 64512 - 65534 range).

Amazon EC2 supports all 2-byte ASN numbers in the range of 1 - 65534, with the exception of 7224, which is reserved in the us-east-1 region, and 9059, which is reserved in the eu-west-1 region.

For more information about VPN customer gateways, see AWS Managed VPN Connections in the Amazon Virtual Private Cloud User Guide.

You cannot create more than one customer gateway with the same VPN type, IP address, and BGP ASN parameter values. If you run an identical request more than one time, the first request creates the customer gateway, and subsequent requests return information about the existing customer gateway. The subsequent requests do not create new customer gateway resources.
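As an illustrative sketch only (not part of the service model): creating a customer gateway with the ipsec.1 type, a static public IP, and a private BGP ASN, using the generated AWS SDK for Java v2 client. The IP address and ASN are placeholders.

```java
import software.amazon.awssdk.services.ec2.Ec2Client;
import software.amazon.awssdk.services.ec2.model.CreateCustomerGatewayRequest;
import software.amazon.awssdk.services.ec2.model.CreateCustomerGatewayResponse;
import software.amazon.awssdk.services.ec2.model.GatewayType;

public class CustomerGatewaySketch {
    public static void main(String[] args) {
        try (Ec2Client ec2 = Ec2Client.create()) {
            CreateCustomerGatewayResponse response = ec2.createCustomerGateway(
                    CreateCustomerGatewayRequest.builder()
                            .type(GatewayType.IPSEC_1)          // the only supported VPN type
                            .publicIp("203.0.113.12")           // placeholder static, Internet-routable address
                            .bgpAsn(65010)                      // private ASN in the 64512 - 65534 range
                            .build());
            System.out.println(response.customerGateway().customerGatewayId());
        }
    }
}
```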

" + }, + "CreateDefaultSubnet":{ + "name":"CreateDefaultSubnet", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"CreateDefaultSubnetRequest"}, + "output":{"shape":"CreateDefaultSubnetResult"}, + "documentation":"

Creates a default subnet with a size /20 IPv4 CIDR block in the specified Availability Zone in your default VPC. You can have only one default subnet per Availability Zone. For more information, see Creating a Default Subnet in the Amazon Virtual Private Cloud User Guide.

" + }, + "CreateDefaultVpc":{ + "name":"CreateDefaultVpc", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"CreateDefaultVpcRequest"}, + "output":{"shape":"CreateDefaultVpcResult"}, + "documentation":"

Creates a default VPC with a size /16 IPv4 CIDR block and a default subnet in each Availability Zone. For more information about the components of a default VPC, see Default VPC and Default Subnets in the Amazon Virtual Private Cloud User Guide. You cannot specify the components of the default VPC yourself.

You can create a default VPC if you deleted your previous default VPC. You cannot have more than one default VPC per region.

If your account supports EC2-Classic, you cannot use this action to create a default VPC in a region that supports EC2-Classic. If you want a default VPC in a region that supports EC2-Classic, see \"I really want a default VPC for my existing EC2 account. Is that possible?\" in the Default VPCs FAQ.

" }, "CreateDhcpOptions":{ "name":"CreateDhcpOptions", @@ -433,6 +463,16 @@ "output":{"shape":"CreateNetworkInterfaceResult"}, "documentation":"

Creates a network interface in the specified subnet.

For more information about network interfaces, see Elastic Network Interfaces in the Amazon Virtual Private Cloud User Guide.

" }, + "CreateNetworkInterfacePermission":{ + "name":"CreateNetworkInterfacePermission", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"CreateNetworkInterfacePermissionRequest"}, + "output":{"shape":"CreateNetworkInterfacePermissionResult"}, + "documentation":"

Grants an AWS authorized partner account permission to attach the specified network interface to an instance in their account.

You can grant permission to a single AWS account only, and only one account at a time.
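For illustration only (not part of the service model): granting a single partner account permission to attach a network interface, using the generated AWS SDK for Java v2 client. The ENI ID and account ID are placeholders, and the InterfacePermissionType enum name is an assumption about the generated v2 surface.

```java
import software.amazon.awssdk.services.ec2.Ec2Client;
import software.amazon.awssdk.services.ec2.model.CreateNetworkInterfacePermissionRequest;
import software.amazon.awssdk.services.ec2.model.CreateNetworkInterfacePermissionResponse;
import software.amazon.awssdk.services.ec2.model.InterfacePermissionType;

public class NetworkInterfacePermissionSketch {
    public static void main(String[] args) {
        try (Ec2Client ec2 = Ec2Client.create()) {
            CreateNetworkInterfacePermissionResponse response = ec2.createNetworkInterfacePermission(
                    CreateNetworkInterfacePermissionRequest.builder()
                            .networkInterfaceId("eni-0abc1234567890def")   // placeholder ENI ID
                            .awsAccountId("123456789012")                  // the single account being granted
                            .permission(InterfacePermissionType.INSTANCE_ATTACH)
                            .build());
            System.out.println(response.interfacePermission().networkInterfacePermissionId());
        }
    }
}
```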

" + }, "CreatePlacementGroup":{ "name":"CreatePlacementGroup", "http":{ @@ -440,7 +480,7 @@ "requestUri":"/" }, "input":{"shape":"CreatePlacementGroupRequest"}, - "documentation":"

Creates a placement group that you launch cluster instances into. You must give the group a name that's unique within the scope of your account.

For more information about placement groups and cluster instances, see Cluster Instances in the Amazon Elastic Compute Cloud User Guide.

" + "documentation":"

Creates a placement group that you launch cluster instances into. Give the group a name that's unique within the scope of your account.

For more information about placement groups and cluster instances, see Cluster Instances in the Amazon Elastic Compute Cloud User Guide.

" }, "CreateReservedInstancesListing":{ "name":"CreateReservedInstancesListing", @@ -510,7 +550,7 @@ }, "input":{"shape":"CreateSubnetRequest"}, "output":{"shape":"CreateSubnetResult"}, - "documentation":"

Creates a subnet in an existing VPC.

When you create each subnet, you provide the VPC ID and the CIDR block you want for the subnet. After you create a subnet, you can't change its CIDR block. The subnet's IPv4 CIDR block can be the same as the VPC's IPv4 CIDR block (assuming you want only a single subnet in the VPC), or a subset of the VPC's IPv4 CIDR block. If you create more than one subnet in a VPC, the subnets' CIDR blocks must not overlap. The smallest IPv4 subnet (and VPC) you can create uses a /28 netmask (16 IPv4 addresses), and the largest uses a /16 netmask (65,536 IPv4 addresses).

If you've associated an IPv6 CIDR block with your VPC, you can create a subnet with an IPv6 CIDR block that uses a /64 prefix length.

AWS reserves both the first four and the last IPv4 address in each subnet's CIDR block. They're not available for use.

If you add more than one subnet to a VPC, they're set up in a star topology with a logical router in the middle.

If you launch an instance in a VPC using an Amazon EBS-backed AMI, the IP address doesn't change if you stop and restart the instance (unlike a similar instance launched outside a VPC, which gets a new IP address when restarted). It's therefore possible to have a subnet with no running instances (they're all stopped), but no remaining IP addresses available.

For more information about subnets, see Your VPC and Subnets in the Amazon Virtual Private Cloud User Guide.

" + "documentation":"

Creates a subnet in an existing VPC.

When you create each subnet, you provide the VPC ID and the IPv4 CIDR block you want for the subnet. After you create a subnet, you can't change its CIDR block. The size of the subnet's IPv4 CIDR block can be the same as a VPC's IPv4 CIDR block, or a subset of a VPC's IPv4 CIDR block. If you create more than one subnet in a VPC, the subnets' CIDR blocks must not overlap. The smallest IPv4 subnet (and VPC) you can create uses a /28 netmask (16 IPv4 addresses), and the largest uses a /16 netmask (65,536 IPv4 addresses).

If you've associated an IPv6 CIDR block with your VPC, you can create a subnet with an IPv6 CIDR block that uses a /64 prefix length.

AWS reserves both the first four and the last IPv4 address in each subnet's CIDR block. They're not available for use.

If you add more than one subnet to a VPC, they're set up in a star topology with a logical router in the middle.

If you launch an instance in a VPC using an Amazon EBS-backed AMI, the IP address doesn't change if you stop and restart the instance (unlike a similar instance launched outside a VPC, which gets a new IP address when restarted). It's therefore possible to have a subnet with no running instances (they're all stopped), but no remaining IP addresses available.

For more information about subnets, see Your VPC and Subnets in the Amazon Virtual Private Cloud User Guide.
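As an illustrative sketch only (not part of the service model): creating a subnet from part of a VPC's IPv4 CIDR block with the generated AWS SDK for Java v2 client. The VPC ID, CIDR range, and Availability Zone are placeholders.

```java
import software.amazon.awssdk.services.ec2.Ec2Client;
import software.amazon.awssdk.services.ec2.model.CreateSubnetRequest;
import software.amazon.awssdk.services.ec2.model.CreateSubnetResponse;

public class CreateSubnetSketch {
    public static void main(String[] args) {
        try (Ec2Client ec2 = Ec2Client.create()) {
            // The /24 below must fall inside (or equal) one of the VPC's IPv4 CIDR blocks
            // and must not overlap any existing subnet in that VPC.
            CreateSubnetResponse response = ec2.createSubnet(CreateSubnetRequest.builder()
                    .vpcId("vpc-0abc1234567890def")    // placeholder VPC ID
                    .cidrBlock("10.0.1.0/24")
                    .availabilityZone("us-east-1a")
                    .build());
            System.out.println(response.subnet().subnetId());
        }
    }
}
```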

" }, "CreateTags":{ "name":"CreateTags", @@ -549,7 +589,7 @@ }, "input":{"shape":"CreateVpcEndpointRequest"}, "output":{"shape":"CreateVpcEndpointResult"}, - "documentation":"

Creates a VPC endpoint for a specified AWS service. An endpoint enables you to create a private connection between your VPC and another AWS service in your account. You can specify an endpoint policy to attach to the endpoint that will control access to the service from your VPC. You can also specify the VPC route tables that use the endpoint.

Use DescribeVpcEndpointServices to get a list of supported AWS services.

" + "documentation":"

Creates a VPC endpoint for a specified AWS service. An endpoint enables you to create a private connection between your VPC and another AWS service in your account. You can create a gateway endpoint or an interface endpoint.

A gateway endpoint serves as a target for a route in your route table for traffic destined for the AWS service. You can specify the VPC route tables that use the endpoint, and you can optionally specify an endpoint policy to attach to the endpoint that will control access to the service from your VPC.

An interface endpoint is a network interface in your subnet with a private IP address that serves as an entry point for traffic destined to the AWS service. You can specify the subnets in which to create an endpoint, and the security groups to associate with the network interface.
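For illustration only (not part of the service model): creating one gateway endpoint and one interface endpoint with the generated AWS SDK for Java v2 client. The VPC, route table, subnet, security group, and interface service names are placeholders.

```java
import software.amazon.awssdk.services.ec2.Ec2Client;
import software.amazon.awssdk.services.ec2.model.CreateVpcEndpointRequest;
import software.amazon.awssdk.services.ec2.model.VpcEndpointType;

public class VpcEndpointSketch {
    public static void main(String[] args) {
        try (Ec2Client ec2 = Ec2Client.create()) {
            String vpcId = "vpc-0abc1234567890def";    // placeholder VPC ID

            // Gateway endpoint: reached through routes in the specified route tables.
            ec2.createVpcEndpoint(CreateVpcEndpointRequest.builder()
                    .vpcId(vpcId)
                    .serviceName("com.amazonaws.us-east-1.s3")
                    .routeTableIds("rtb-0abc1234567890def")        // placeholder route table ID
                    .build());

            // Interface endpoint: a network interface with a private IP in the chosen subnets.
            ec2.createVpcEndpoint(CreateVpcEndpointRequest.builder()
                    .vpcId(vpcId)
                    .vpcEndpointType(VpcEndpointType.INTERFACE)
                    .serviceName("com.amazonaws.us-east-1.ec2")    // placeholder interface service name
                    .subnetIds("subnet-0abc1234567890def")
                    .securityGroupIds("sg-0abc1234567890def")
                    .build());
        }
    }
}
```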

" }, "CreateVpcPeeringConnection":{ "name":"CreateVpcPeeringConnection", @@ -569,7 +609,7 @@ }, "input":{"shape":"CreateVpnConnectionRequest"}, "output":{"shape":"CreateVpnConnectionResult"}, - "documentation":"

Creates a VPN connection between an existing virtual private gateway and a VPN customer gateway. The only supported connection type is ipsec.1.

The response includes information that you need to give to your network administrator to configure your customer gateway.

We strongly recommend that you use HTTPS when calling this operation because the response contains sensitive cryptographic information for configuring your customer gateway.

If you decide to shut down your VPN connection for any reason and later create a new VPN connection, you must reconfigure your customer gateway with the new information returned from this call.

This is an idempotent operation. If you perform the operation more than once, Amazon EC2 doesn't return an error.

For more information about VPN connections, see Adding a Hardware Virtual Private Gateway to Your VPC in the Amazon Virtual Private Cloud User Guide.

" + "documentation":"

Creates a VPN connection between an existing virtual private gateway and a VPN customer gateway. The only supported connection type is ipsec.1.

The response includes information that you need to give to your network administrator to configure your customer gateway.

We strongly recommend that you use HTTPS when calling this operation because the response contains sensitive cryptographic information for configuring your customer gateway.

If you decide to shut down your VPN connection for any reason and later create a new VPN connection, you must reconfigure your customer gateway with the new information returned from this call.

This is an idempotent operation. If you perform the operation more than once, Amazon EC2 doesn't return an error.

For more information, see AWS Managed VPN Connections in the Amazon Virtual Private Cloud User Guide.

" }, "CreateVpnConnectionRoute":{ "name":"CreateVpnConnectionRoute", @@ -578,7 +618,7 @@ "requestUri":"/" }, "input":{"shape":"CreateVpnConnectionRouteRequest"}, - "documentation":"

Creates a static route associated with a VPN connection between an existing virtual private gateway and a VPN customer gateway. The static route allows traffic to be routed from the virtual private gateway to the VPN customer gateway.

For more information about VPN connections, see Adding a Hardware Virtual Private Gateway to Your VPC in the Amazon Virtual Private Cloud User Guide.

" + "documentation":"

Creates a static route associated with a VPN connection between an existing virtual private gateway and a VPN customer gateway. The static route allows traffic to be routed from the virtual private gateway to the VPN customer gateway.

For more information about VPN connections, see AWS Managed VPN Connections in the Amazon Virtual Private Cloud User Guide.

" }, "CreateVpnGateway":{ "name":"CreateVpnGateway", @@ -588,7 +628,7 @@ }, "input":{"shape":"CreateVpnGatewayRequest"}, "output":{"shape":"CreateVpnGatewayResult"}, - "documentation":"

Creates a virtual private gateway. A virtual private gateway is the endpoint on the VPC side of your VPN connection. You can create a virtual private gateway before creating the VPC itself.

For more information about virtual private gateways, see Adding a Hardware Virtual Private Gateway to Your VPC in the Amazon Virtual Private Cloud User Guide.

" + "documentation":"

Creates a virtual private gateway. A virtual private gateway is the endpoint on the VPC side of your VPN connection. You can create a virtual private gateway before creating the VPC itself.

For more information about virtual private gateways, see AWS Managed VPN Connections in the Amazon Virtual Private Cloud User Guide.

" }, "DeleteCustomerGateway":{ "name":"DeleteCustomerGateway", @@ -628,6 +668,16 @@ "output":{"shape":"DeleteFlowLogsResult"}, "documentation":"

Deletes one or more flow logs.

" }, + "DeleteFpgaImage":{ + "name":"DeleteFpgaImage", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"DeleteFpgaImageRequest"}, + "output":{"shape":"DeleteFpgaImageResult"}, + "documentation":"

Deletes the specified Amazon FPGA Image (AFI).

" + }, "DeleteInternetGateway":{ "name":"DeleteInternetGateway", "http":{ @@ -683,6 +733,16 @@ "input":{"shape":"DeleteNetworkInterfaceRequest"}, "documentation":"

Deletes the specified network interface. You must detach the network interface before you can delete it.

" }, + "DeleteNetworkInterfacePermission":{ + "name":"DeleteNetworkInterfacePermission", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"DeleteNetworkInterfacePermissionRequest"}, + "output":{"shape":"DeleteNetworkInterfacePermissionResult"}, + "documentation":"

Deletes a permission for a network interface. By default, you cannot delete the permission if the account for which you're removing the permission has attached the network interface to an instance. However, you can force delete the permission, regardless of any attachment.

" + }, "DeletePlacementGroup":{ "name":"DeletePlacementGroup", "http":{ @@ -753,7 +813,7 @@ "requestUri":"/" }, "input":{"shape":"DeleteTagsRequest"}, - "documentation":"

Deletes the specified set of tags from the specified set of resources. This call is designed to follow a DescribeTags request.

For more information about tags, see Tagging Your Resources in the Amazon Elastic Compute Cloud User Guide.

" + "documentation":"

Deletes the specified set of tags from the specified set of resources.

To list the current tags, use DescribeTags. For more information about tags, see Tagging Your Resources in the Amazon Elastic Compute Cloud User Guide.

" }, "DeleteVolume":{ "name":"DeleteVolume", @@ -781,7 +841,7 @@ }, "input":{"shape":"DeleteVpcEndpointsRequest"}, "output":{"shape":"DeleteVpcEndpointsResult"}, - "documentation":"

Deletes one or more specified VPC endpoints. Deleting the endpoint also deletes the endpoint routes in the route tables that were associated with the endpoint.

" + "documentation":"

Deletes one or more specified VPC endpoints. Deleting a gateway endpoint also deletes the endpoint routes in the route tables that were associated with the endpoint. Deleting an interface endpoint deletes the endpoint network interfaces.

" }, "DeleteVpcPeeringConnection":{ "name":"DeleteVpcPeeringConnection", @@ -827,7 +887,7 @@ "requestUri":"/" }, "input":{"shape":"DeregisterImageRequest"}, - "documentation":"

Deregisters the specified AMI. After you deregister an AMI, it can't be used to launch new instances.

This command does not delete the AMI.

" + "documentation":"

Deregisters the specified AMI. After you deregister an AMI, it can't be used to launch new instances; however, it doesn't affect any instances that you've already launched from the AMI. You'll continue to incur usage costs for those instances until you terminate them.

When you deregister an Amazon EBS-backed AMI, it doesn't affect the snapshot that was created for the root volume of the instance during the AMI creation process. When you deregister an instance store-backed AMI, it doesn't affect the files that you uploaded to Amazon S3 when you created the AMI.

" }, "DescribeAccountAttributes":{ "name":"DescribeAccountAttributes", @@ -897,7 +957,7 @@ }, "input":{"shape":"DescribeCustomerGatewaysRequest"}, "output":{"shape":"DescribeCustomerGatewaysResult"}, - "documentation":"

Describes one or more of your VPN customer gateways.

For more information about VPN customer gateways, see Adding a Hardware Virtual Private Gateway to Your VPC in the Amazon Virtual Private Cloud User Guide.

" + "documentation":"

Describes one or more of your VPN customer gateways.

For more information about VPN customer gateways, see AWS Managed VPN Connections in the Amazon Virtual Private Cloud User Guide.

" }, "DescribeDhcpOptions":{ "name":"DescribeDhcpOptions", @@ -919,6 +979,16 @@ "output":{"shape":"DescribeEgressOnlyInternetGatewaysResult"}, "documentation":"

Describes one or more of your egress-only Internet gateways.

" }, + "DescribeElasticGpus":{ + "name":"DescribeElasticGpus", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"DescribeElasticGpusRequest"}, + "output":{"shape":"DescribeElasticGpusResult"}, + "documentation":"

Describes the Elastic GPUs associated with your instances. For more information about Elastic GPUs, see Amazon EC2 Elastic GPUs.

" + }, "DescribeExportTasks":{ "name":"DescribeExportTasks", "http":{ @@ -939,6 +1009,16 @@ "output":{"shape":"DescribeFlowLogsResult"}, "documentation":"

Describes one or more flow logs. To view the information in your flow logs (the log streams for the network interfaces), you must use the CloudWatch Logs console or the CloudWatch Logs API.

" }, + "DescribeFpgaImageAttribute":{ + "name":"DescribeFpgaImageAttribute", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"DescribeFpgaImageAttributeRequest"}, + "output":{"shape":"DescribeFpgaImageAttributeResult"}, + "documentation":"

Describes the specified attribute of the specified Amazon FPGA Image (AFI).

" + }, "DescribeFpgaImages":{ "name":"DescribeFpgaImages", "http":{ @@ -1067,7 +1147,7 @@ }, "input":{"shape":"DescribeInstanceStatusRequest"}, "output":{"shape":"DescribeInstanceStatusResult"}, - "documentation":"

Describes the status of one or more instances. By default, only running instances are described, unless specified otherwise.

Instance status includes the following components:

" + "documentation":"

Describes the status of one or more instances. By default, only running instances are described, unless you specifically indicate to return the status of all instances.

Instance status includes the following components:

" }, "DescribeInstances":{ "name":"DescribeInstances", @@ -1139,6 +1219,16 @@ "output":{"shape":"DescribeNetworkInterfaceAttributeResult"}, "documentation":"

Describes a network interface attribute. You can specify only one attribute at a time.

" }, + "DescribeNetworkInterfacePermissions":{ + "name":"DescribeNetworkInterfacePermissions", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"DescribeNetworkInterfacePermissionsRequest"}, + "output":{"shape":"DescribeNetworkInterfacePermissionsResult"}, + "documentation":"

Describes the permissions for your network interfaces.

" + }, "DescribeNetworkInterfaces":{ "name":"DescribeNetworkInterfaces", "http":{ @@ -1497,7 +1587,7 @@ }, "input":{"shape":"DescribeVpnConnectionsRequest"}, "output":{"shape":"DescribeVpnConnectionsResult"}, - "documentation":"

Describes one or more of your VPN connections.

For more information about VPN connections, see Adding a Hardware Virtual Private Gateway to Your VPC in the Amazon Virtual Private Cloud User Guide.

" + "documentation":"

Describes one or more of your VPN connections.

For more information about VPN connections, see AWS Managed VPN Connections in the Amazon Virtual Private Cloud User Guide.

" }, "DescribeVpnGateways":{ "name":"DescribeVpnGateways", @@ -1507,7 +1597,7 @@ }, "input":{"shape":"DescribeVpnGatewaysRequest"}, "output":{"shape":"DescribeVpnGatewaysResult"}, - "documentation":"

Describes one or more of your virtual private gateways.

For more information about virtual private gateways, see Adding an IPsec Hardware VPN to Your VPC in the Amazon Virtual Private Cloud User Guide.

" + "documentation":"

Describes one or more of your virtual private gateways.

For more information about virtual private gateways, see AWS Managed VPN Connections in the Amazon Virtual Private Cloud User Guide.

" }, "DetachClassicLinkVpc":{ "name":"DetachClassicLinkVpc", @@ -1631,7 +1721,7 @@ }, "input":{"shape":"DisassociateVpcCidrBlockRequest"}, "output":{"shape":"DisassociateVpcCidrBlockResult"}, - "documentation":"

Disassociates a CIDR block from a VPC. Currently, you can disassociate an IPv6 CIDR block only. You must detach or delete all gateways and resources that are associated with the CIDR block before you can disassociate it.

" + "documentation":"

Disassociates a CIDR block from a VPC. To disassociate the CIDR block, you must specify its association ID. You can get the association ID by using DescribeVpcs. You must detach or delete all gateways and resources that are associated with the CIDR block before you can disassociate it.

You cannot disassociate the CIDR block with which you originally created the VPC (the primary CIDR block).

" }, "EnableVgwRoutePropagation":{ "name":"EnableVgwRoutePropagation", @@ -1679,7 +1769,7 @@ }, "input":{"shape":"GetConsoleOutputRequest"}, "output":{"shape":"GetConsoleOutputResult"}, - "documentation":"

Gets the console output for the specified instance.

Instances do not have a physical monitor through which you can view their console output. They also lack physical controls that allow you to power up, reboot, or shut them down. To allow these actions, we provide them through the Amazon EC2 API and command line interface.

Instance console output is buffered and posted shortly after instance boot, reboot, and termination. Amazon EC2 preserves the most recent 64 KB output which is available for at least one hour after the most recent post.

For Linux instances, the instance console output displays the exact console output that would normally be displayed on a physical monitor attached to a computer. This output is buffered because the instance produces it and then posts it to a store where the instance's owner can retrieve it.

For Windows instances, the instance console output includes output from the EC2Config service.

" + "documentation":"

Gets the console output for the specified instance.

Instances do not have a physical monitor through which you can view their console output. They also lack physical controls that allow you to power up, reboot, or shut them down. To allow these actions, we provide them through the Amazon EC2 API and command line interface.

Instance console output is buffered and posted shortly after instance boot, reboot, and termination. Amazon EC2 preserves the most recent 64 KB output, which is available for at least one hour after the most recent post.

For Linux instances, the instance console output displays the exact console output that would normally be displayed on a physical monitor attached to a computer. This output is buffered because the instance produces it and then posts it to a store where the instance's owner can retrieve it.

For Windows instances, the instance console output includes output from the EC2Config service.

" }, "GetConsoleScreenshot":{ "name":"GetConsoleScreenshot", @@ -1709,7 +1799,7 @@ }, "input":{"shape":"GetPasswordDataRequest"}, "output":{"shape":"GetPasswordDataResult"}, - "documentation":"

Retrieves the encrypted administrator password for an instance running Windows.

The Windows password is generated at boot if the EC2Config service plugin, Ec2SetPassword, is enabled. This usually only happens the first time an AMI is launched, and then Ec2SetPassword is automatically disabled. The password is not generated for rebundled AMIs unless Ec2SetPassword is enabled before bundling.

The password is encrypted using the key pair that you specified when you launched the instance. You must provide the corresponding key pair file.

Password generation and encryption takes a few moments. We recommend that you wait up to 15 minutes after launching an instance before trying to retrieve the generated password.

" + "documentation":"

Retrieves the encrypted administrator password for a running Windows instance.

The Windows password is generated at boot by the EC2Config service or EC2Launch scripts (Windows Server 2016 and later). This usually only happens the first time an instance is launched. For more information, see EC2Config and EC2Launch in the Amazon Elastic Compute Cloud User Guide.

For the EC2Config service, the password is not generated for rebundled AMIs unless Ec2SetPassword is enabled before bundling.

The password is encrypted using the key pair that you specified when you launched the instance. You must provide the corresponding key pair file.

When you launch an instance, password generation and encryption may take a few minutes. If you try to retrieve the password before it's available, the output returns an empty string. We recommend that you wait up to 15 minutes after launching an instance before trying to retrieve the generated password.
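As an illustrative sketch only (not part of the service model): retrieving the encrypted password data for a Windows instance with the generated AWS SDK for Java v2 client. The instance ID is a placeholder; decrypting the returned ciphertext with the key pair's private key is left out of the sketch.

```java
import software.amazon.awssdk.services.ec2.Ec2Client;
import software.amazon.awssdk.services.ec2.model.GetPasswordDataRequest;
import software.amazon.awssdk.services.ec2.model.GetPasswordDataResponse;

public class PasswordDataSketch {
    public static void main(String[] args) {
        try (Ec2Client ec2 = Ec2Client.create()) {
            GetPasswordDataResponse response = ec2.getPasswordData(GetPasswordDataRequest.builder()
                    .instanceId("i-0abc1234567890def")     // placeholder Windows instance ID
                    .build());
            // passwordData() is an empty string until the password has been generated and posted;
            // when present, it is base64-encoded ciphertext that must be decrypted with the
            // private key of the key pair specified at launch.
            System.out.println(response.passwordData());
        }
    }
}
```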

" }, "GetReservedInstancesExchangeQuote":{ "name":"GetReservedInstancesExchangeQuote", @@ -1719,7 +1809,7 @@ }, "input":{"shape":"GetReservedInstancesExchangeQuoteRequest"}, "output":{"shape":"GetReservedInstancesExchangeQuoteResult"}, - "documentation":"

Returns details about the values and term of your specified Convertible Reserved Instances. When a target configuration is specified, it returns information about whether the exchange is valid and can be performed.

" + "documentation":"

Returns a quote and exchange information for exchanging one or more specified Convertible Reserved Instances for a new Convertible Reserved Instance. If the exchange cannot be performed, the reason is returned in the response. Use AcceptReservedInstancesExchangeQuote to perform the exchange.

" }, "ImportImage":{ "name":"ImportImage", @@ -1771,6 +1861,16 @@ "output":{"shape":"ImportVolumeResult"}, "documentation":"

Creates an import volume task using metadata from the specified disk image. For more information, see Importing Disks to Amazon EBS.


For information about the import manifest referenced by this API action, see VM Import Manifest.

" }, + "ModifyFpgaImageAttribute":{ + "name":"ModifyFpgaImageAttribute", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"ModifyFpgaImageAttributeRequest"}, + "output":{"shape":"ModifyFpgaImageAttributeResult"}, + "documentation":"

Modifies the specified attribute of the specified Amazon FPGA Image (AFI).

" + }, "ModifyHosts":{ "name":"ModifyHosts", "http":{ @@ -1806,7 +1906,7 @@ "requestUri":"/" }, "input":{"shape":"ModifyImageAttributeRequest"}, - "documentation":"

Modifies the specified attribute of the specified AMI. You can specify only one attribute at a time.

AWS Marketplace product codes cannot be modified. Images with an AWS Marketplace product code cannot be made public.

The SriovNetSupport enhanced networking attribute cannot be changed using this command. Instead, enable SriovNetSupport on an instance and create an AMI from the instance. This will result in an image with SriovNetSupport enabled.

" + "documentation":"

Modifies the specified attribute of the specified AMI. You can specify only one attribute at a time. You can use the Attribute parameter to specify the attribute or one of the following parameters: Description, LaunchPermission, or ProductCode.

AWS Marketplace product codes cannot be modified. Images with an AWS Marketplace product code cannot be made public.

To enable the SriovNetSupport enhanced networking attribute of an image, enable SriovNetSupport on an instance and create an AMI from the instance.

" }, "ModifyInstanceAttribute":{ "name":"ModifyInstanceAttribute", @@ -1844,7 +1944,7 @@ }, "input":{"shape":"ModifyReservedInstancesRequest"}, "output":{"shape":"ModifyReservedInstancesResult"}, - "documentation":"

Modifies the Availability Zone, instance count, instance type, or network platform (EC2-Classic or EC2-VPC) of your Standard Reserved Instances. The Reserved Instances to be modified must be identical, except for Availability Zone, network platform, and instance type.

For more information, see Modifying Reserved Instances in the Amazon Elastic Compute Cloud User Guide.

" + "documentation":"

Modifies the Availability Zone, instance count, instance type, or network platform (EC2-Classic or EC2-VPC) of your Reserved Instances. The Reserved Instances to be modified must be identical, except for Availability Zone, network platform, and instance type.

For more information, see Modifying Reserved Instances in the Amazon Elastic Compute Cloud User Guide.

" }, "ModifySnapshotAttribute":{ "name":"ModifySnapshotAttribute", @@ -1910,7 +2010,7 @@ }, "input":{"shape":"ModifyVpcEndpointRequest"}, "output":{"shape":"ModifyVpcEndpointResult"}, - "documentation":"

Modifies attributes of a specified VPC endpoint. You can modify the policy associated with the endpoint, and you can add and remove route tables associated with the endpoint.

" + "documentation":"

Modifies attributes of a specified VPC endpoint. The attributes that you can modify depend on the type of VPC endpoint (interface or gateway). For more information, see VPC Endpoints in the Amazon Virtual Private Cloud User Guide.

" }, "ModifyVpcPeeringConnectionOptions":{ "name":"ModifyVpcPeeringConnectionOptions", @@ -1922,6 +2022,16 @@ "output":{"shape":"ModifyVpcPeeringConnectionOptionsResult"}, "documentation":"

Modifies the VPC peering connection options on one side of a VPC peering connection. You can do the following:

If the peered VPCs are in different accounts, each owner must initiate a separate request to modify the peering connection options, depending on whether their VPC was the requester or accepter for the VPC peering connection. If the peered VPCs are in the same account, you can modify the requester and accepter options in the same request. To confirm which VPC is the accepter and requester for a VPC peering connection, use the DescribeVpcPeeringConnections command.

" }, + "ModifyVpcTenancy":{ + "name":"ModifyVpcTenancy", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"ModifyVpcTenancyRequest"}, + "output":{"shape":"ModifyVpcTenancyResult"}, + "documentation":"

Modifies the instance tenancy attribute of the specified VPC. You can change the instance tenancy attribute of a VPC to default only. You cannot change the instance tenancy attribute to dedicated.

After you modify the tenancy of the VPC, any new instances that you launch into the VPC have a tenancy of default, unless you specify otherwise during launch. The tenancy of any existing instances in the VPC is not affected.

For more information about Dedicated Instances, see Dedicated Instances in the Amazon Elastic Compute Cloud User Guide.
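
A hedged sketch of the only supported change (dedicated to default), assuming the AWS SDK for Java 2.x client and its generated VpcTenancy enum; the VPC ID is a placeholder.

```java
import software.amazon.awssdk.services.ec2.Ec2Client;
import software.amazon.awssdk.services.ec2.model.ModifyVpcTenancyRequest;
import software.amazon.awssdk.services.ec2.model.VpcTenancy;

public class ModifyTenancyExample {
    public static void main(String[] args) {
        try (Ec2Client ec2 = Ec2Client.create()) {
            // Change the VPC's instance tenancy back to default. Existing instances keep
            // their tenancy; only instances launched after the change are affected.
            ec2.modifyVpcTenancy(ModifyVpcTenancyRequest.builder()
                    .vpcId("vpc-0abc1234")               // placeholder VPC ID
                    .instanceTenancy(VpcTenancy.DEFAULT) // "default" is the only allowed target value
                    .build());
        }
    }
}
```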

" + }, "MonitorInstances":{ "name":"MonitorInstances", "http":{ @@ -2008,7 +2118,7 @@ "requestUri":"/" }, "input":{"shape":"ReleaseAddressRequest"}, - "documentation":"

Releases the specified Elastic IP address.

After releasing an Elastic IP address, it is released to the IP address pool and might be unavailable to you. Be sure to update your DNS records and any servers or devices that communicate with the address. If you attempt to release an Elastic IP address that you already released, you'll get an AuthFailure error if the address is already allocated to another AWS account.

[EC2-Classic, default VPC] Releasing an Elastic IP address automatically disassociates it from any instance that it's associated with. To disassociate an Elastic IP address without releasing it, use DisassociateAddress.

[Nondefault VPC] You must use DisassociateAddress to disassociate the Elastic IP address before you try to release it. Otherwise, Amazon EC2 returns an error (InvalidIPAddress.InUse).

" + "documentation":"

Releases the specified Elastic IP address.

[EC2-Classic, default VPC] Releasing an Elastic IP address automatically disassociates it from any instance that it's associated with. To disassociate an Elastic IP address without releasing it, use DisassociateAddress.

[Nondefault VPC] You must use DisassociateAddress to disassociate the Elastic IP address before you can release it. Otherwise, Amazon EC2 returns an error (InvalidIPAddress.InUse).

After releasing an Elastic IP address, it is released to the IP address pool. Be sure to update your DNS records and any servers or devices that communicate with the address. If you attempt to release an Elastic IP address that you already released, you'll get an AuthFailure error if the address is already allocated to another AWS account.

[EC2-VPC] After you release an Elastic IP address for use in a VPC, you might be able to recover it. For more information, see AllocateAddress.
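
For a nondefault VPC, the disassociate-then-release order described above might look like this sketch with the AWS SDK for Java 2.x; the association and allocation IDs are placeholders.

```java
import software.amazon.awssdk.services.ec2.Ec2Client;
import software.amazon.awssdk.services.ec2.model.DisassociateAddressRequest;
import software.amazon.awssdk.services.ec2.model.ReleaseAddressRequest;

public class ReleaseEipExample {
    public static void main(String[] args) {
        try (Ec2Client ec2 = Ec2Client.create()) {
            // Disassociate first to avoid the InvalidIPAddress.InUse error, then release.
            ec2.disassociateAddress(DisassociateAddressRequest.builder()
                    .associationId("eipassoc-0123456789abcdef0") // placeholder association ID
                    .build());
            ec2.releaseAddress(ReleaseAddressRequest.builder()
                    .allocationId("eipalloc-0123456789abcdef0")  // placeholder allocation ID
                    .build());
        }
    }
}
```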

" }, "ReleaseHosts":{ "name":"ReleaseHosts", @@ -2097,6 +2207,16 @@ "output":{"shape":"RequestSpotInstancesResult"}, "documentation":"

Creates a Spot instance request. Spot instances are instances that Amazon EC2 launches when the bid price that you specify exceeds the current Spot price. Amazon EC2 periodically sets the Spot price based on available Spot Instance capacity and current Spot instance requests. For more information, see Spot Instance Requests in the Amazon Elastic Compute Cloud User Guide.

" }, + "ResetFpgaImageAttribute":{ + "name":"ResetFpgaImageAttribute", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"ResetFpgaImageAttributeRequest"}, + "output":{"shape":"ResetFpgaImageAttributeResult"}, + "documentation":"

Resets the specified attribute of the specified Amazon FPGA Image (AFI) to its default value. You can only reset the load permission attribute.

" + }, "ResetImageAttribute":{ "name":"ResetImageAttribute", "http":{ @@ -2150,7 +2270,7 @@ "requestUri":"/" }, "input":{"shape":"RevokeSecurityGroupEgressRequest"}, - "documentation":"

[EC2-VPC only] Removes one or more egress rules from a security group for EC2-VPC. This action doesn't apply to security groups for use in EC2-Classic. The values that you specify in the revoke request (for example, ports) must match the existing rule's values for the rule to be revoked.

Each rule consists of the protocol and the IPv4 or IPv6 CIDR range or source security group. For the TCP and UDP protocols, you must also specify the destination port or range of ports. For the ICMP protocol, you must also specify the ICMP type and code.

Rule changes are propagated to instances within the security group as quickly as possible. However, a small delay might occur.

" + "documentation":"

[EC2-VPC only] Removes one or more egress rules from a security group for EC2-VPC. This action doesn't apply to security groups for use in EC2-Classic. To remove a rule, the values that you specify (for example, ports) must match the existing rule's values exactly.

Each rule consists of the protocol and the IPv4 or IPv6 CIDR range or source security group. For the TCP and UDP protocols, you must also specify the destination port or range of ports. For the ICMP protocol, you must also specify the ICMP type and code. If the security group rule has a description, you do not have to specify the description to revoke the rule.

Rule changes are propagated to instances within the security group as quickly as possible. However, a small delay might occur.

" }, "RevokeSecurityGroupIngress":{ "name":"RevokeSecurityGroupIngress", @@ -2159,7 +2279,7 @@ "requestUri":"/" }, "input":{"shape":"RevokeSecurityGroupIngressRequest"}, - "documentation":"

Removes one or more ingress rules from a security group. The values that you specify in the revoke request (for example, ports) must match the existing rule's values for the rule to be removed.

Each rule consists of the protocol and the CIDR range or source security group. For the TCP and UDP protocols, you must also specify the destination port or range of ports. For the ICMP protocol, you must also specify the ICMP type and code.

Rule changes are propagated to instances within the security group as quickly as possible. However, a small delay might occur.

" + "documentation":"

Removes one or more ingress rules from a security group. To remove a rule, the values that you specify (for example, ports) must match the existing rule's values exactly.

[EC2-Classic security groups only] If the values you specify do not match the existing rule's values, no error is returned. Use DescribeSecurityGroups to verify that the rule has been removed.

Each rule consists of the protocol and the CIDR range or source security group. For the TCP and UDP protocols, you must also specify the destination port or range of ports. For the ICMP protocol, you must also specify the ICMP type and code. If the security group rule has a description, you do not have to specify the description to revoke the rule.

Rule changes are propagated to instances within the security group as quickly as possible. However, a small delay might occur.
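
A minimal sketch of revoking a single ingress rule by matching its protocol, port range, and CIDR exactly (the rule's description can be omitted), assuming the AWS SDK for Java 2.x; the group ID and CIDR are placeholders.

```java
import software.amazon.awssdk.services.ec2.Ec2Client;
import software.amazon.awssdk.services.ec2.model.IpPermission;
import software.amazon.awssdk.services.ec2.model.IpRange;
import software.amazon.awssdk.services.ec2.model.RevokeSecurityGroupIngressRequest;

public class RevokeIngressExample {
    public static void main(String[] args) {
        try (Ec2Client ec2 = Ec2Client.create()) {
            // The protocol, ports, and CIDR must match the existing rule exactly.
            ec2.revokeSecurityGroupIngress(RevokeSecurityGroupIngressRequest.builder()
                    .groupId("sg-0abc1234")                        // placeholder security group ID
                    .ipPermissions(IpPermission.builder()
                            .ipProtocol("tcp")
                            .fromPort(22)
                            .toPort(22)
                            .ipRanges(IpRange.builder().cidrIp("203.0.113.0/24").build())
                            .build())
                    .build());
        }
    }
}
```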

" }, "RunInstances":{ "name":"RunInstances", @@ -2169,7 +2289,7 @@ }, "input":{"shape":"RunInstancesRequest"}, "output":{"shape":"Reservation"}, - "documentation":"

Launches the specified number of instances using an AMI for which you have permissions.

You can specify a number of options, or leave the default options. The following rules apply:

To ensure faster instance launches, break up large requests into smaller batches. For example, create 5 separate launch requests for 100 instances each instead of 1 launch request for 500 instances.

An instance is ready for you to use when it's in the running state. You can check the state of your instance using DescribeInstances. You can tag instances and EBS volumes during launch, after launch, or both. For more information, see CreateTags and Tagging Your Amazon EC2 Resources.

Linux instances have access to the public key of the key pair at boot. You can use this key to provide secure access to the instance. Amazon EC2 public images use this feature to provide secure access without passwords. For more information, see Key Pairs in the Amazon Elastic Compute Cloud User Guide.

For troubleshooting, see What To Do If An Instance Immediately Terminates, and Troubleshooting Connecting to Your Instance in the Amazon Elastic Compute Cloud User Guide.

" + "documentation":"

Launches the specified number of instances using an AMI for which you have permissions.

You can specify a number of options, or leave the default options. The following rules apply:

To ensure faster instance launches, break up large requests into smaller batches. For example, create five separate launch requests for 100 instances each instead of one launch request for 500 instances.

An instance is ready for you to use when it's in the running state. You can check the state of your instance using DescribeInstances. You can tag instances and EBS volumes during launch, after launch, or both. For more information, see CreateTags and Tagging Your Amazon EC2 Resources.

Linux instances have access to the public key of the key pair at boot. You can use this key to provide secure access to the instance. Amazon EC2 public images use this feature to provide secure access without passwords. For more information, see Key Pairs in the Amazon Elastic Compute Cloud User Guide.

For troubleshooting, see What To Do If An Instance Immediately Terminates, and Troubleshooting Connecting to Your Instance in the Amazon Elastic Compute Cloud User Guide.
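
A hedged sketch of a small launch request that also tags the instance at launch, assuming the AWS SDK for Java 2.x; the AMI ID and key pair name are placeholders.

```java
import software.amazon.awssdk.services.ec2.Ec2Client;
import software.amazon.awssdk.services.ec2.model.InstanceType;
import software.amazon.awssdk.services.ec2.model.ResourceType;
import software.amazon.awssdk.services.ec2.model.RunInstancesRequest;
import software.amazon.awssdk.services.ec2.model.RunInstancesResponse;
import software.amazon.awssdk.services.ec2.model.Tag;
import software.amazon.awssdk.services.ec2.model.TagSpecification;

public class LaunchInstanceExample {
    public static void main(String[] args) {
        try (Ec2Client ec2 = Ec2Client.create()) {
            RunInstancesResponse response = ec2.runInstances(RunInstancesRequest.builder()
                    .imageId("ami-0abcdef1234567890")    // placeholder AMI ID
                    .instanceType(InstanceType.T2_MICRO)
                    .minCount(1)
                    .maxCount(1)
                    .keyName("my-key-pair")              // placeholder key pair name
                    // Tag the instance at launch; tags can also be added later with CreateTags.
                    .tagSpecifications(TagSpecification.builder()
                            .resourceType(ResourceType.INSTANCE)
                            .tags(Tag.builder().key("Name").value("example").build())
                            .build())
                    .build());
            // The instance is ready to use once DescribeInstances reports the running state.
            System.out.println("Launched: " + response.instances().get(0).instanceId());
        }
    }
}
```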

" }, "RunScheduledInstances":{ "name":"RunScheduledInstances", @@ -2189,7 +2309,7 @@ }, "input":{"shape":"StartInstancesRequest"}, "output":{"shape":"StartInstancesResult"}, - "documentation":"

Starts an Amazon EBS-backed AMI that you've previously stopped.

Instances that use Amazon EBS volumes as their root devices can be quickly stopped and started. When an instance is stopped, the compute resources are released and you are not billed for hourly instance usage. However, your root partition Amazon EBS volume remains, continues to persist your data, and you are charged for Amazon EBS volume usage. You can restart your instance at any time. Each time you transition an instance from stopped to started, Amazon EC2 charges a full instance hour, even if transitions happen multiple times within a single hour.

Before stopping an instance, make sure it is in a state from which it can be restarted. Stopping an instance does not preserve data stored in RAM.

Performing this operation on an instance that uses an instance store as its root device returns an error.

For more information, see Stopping Instances in the Amazon Elastic Compute Cloud User Guide.

" + "documentation":"

Starts an Amazon EBS-backed instance that you've previously stopped.

Instances that use Amazon EBS volumes as their root devices can be quickly stopped and started. When an instance is stopped, the compute resources are released and you are not billed for instance usage. However, your root partition Amazon EBS volume remains and continues to persist your data, and you are charged for Amazon EBS volume usage. You can restart your instance at any time. Every time you start your Windows instance, Amazon EC2 charges you for a full instance hour. If you stop and restart your Windows instance, a new instance hour begins and Amazon EC2 charges you for another full instance hour even if you are still within the same 60-minute period when it was stopped. Every time you start your Linux instance, Amazon EC2 charges a one-minute minimum for instance usage, and thereafter charges per second for instance usage.

Before stopping an instance, make sure it is in a state from which it can be restarted. Stopping an instance does not preserve data stored in RAM.

Performing this operation on an instance that uses an instance store as its root device returns an error.

For more information, see Stopping Instances in the Amazon Elastic Compute Cloud User Guide.

" }, "StopInstances":{ "name":"StopInstances", @@ -2199,7 +2319,7 @@ }, "input":{"shape":"StopInstancesRequest"}, "output":{"shape":"StopInstancesResult"}, - "documentation":"

Stops an Amazon EBS-backed instance.

We don't charge hourly usage for a stopped instance, or data transfer fees; however, your root partition Amazon EBS volume remains, continues to persist your data, and you are charged for Amazon EBS volume usage. Each time you transition an instance from stopped to started, Amazon EC2 charges a full instance hour, even if transitions happen multiple times within a single hour.

You can't start or stop Spot instances, and you can't stop instance store-backed instances.

When you stop an instance, we shut it down. You can restart your instance at any time. Before stopping an instance, make sure it is in a state from which it can be restarted. Stopping an instance does not preserve data stored in RAM.

Stopping an instance is different to rebooting or terminating it. For example, when you stop an instance, the root device and any other devices attached to the instance persist. When you terminate an instance, the root device and any other devices attached during the instance launch are automatically deleted. For more information about the differences between rebooting, stopping, and terminating instances, see Instance Lifecycle in the Amazon Elastic Compute Cloud User Guide.

When you stop an instance, we attempt to shut it down forcibly after a short while. If your instance appears stuck in the stopping state after a period of time, there may be an issue with the underlying host computer. For more information, see Troubleshooting Stopping Your Instance in the Amazon Elastic Compute Cloud User Guide.

" + "documentation":"

Stops an Amazon EBS-backed instance.

We don't charge usage for a stopped instance, or data transfer fees; however, your root partition Amazon EBS volume remains and continues to persist your data, and you are charged for Amazon EBS volume usage. Every time you start your Windows instance, Amazon EC2 charges you for a full instance hour. If you stop and restart your Windows instance, a new instance hour begins and Amazon EC2 charges you for another full instance hour even if you are still within the same 60-minute period when it was stopped. Every time you start your Linux instance, Amazon EC2 charges a one-minute minimum for instance usage, and thereafter charges per second for instance usage.

You can't start or stop Spot Instances, and you can't stop instance store-backed instances.

When you stop an instance, we shut it down. You can restart your instance at any time. Before stopping an instance, make sure it is in a state from which it can be restarted. Stopping an instance does not preserve data stored in RAM.

Stopping an instance is different from rebooting or terminating it. For example, when you stop an instance, the root device and any other devices attached to the instance persist. When you terminate an instance, the root device and any other devices attached during the instance launch are automatically deleted. For more information about the differences between rebooting, stopping, and terminating instances, see Instance Lifecycle in the Amazon Elastic Compute Cloud User Guide.

When you stop an instance, we attempt to shut it down forcibly after a short while. If your instance appears stuck in the stopping state after a period of time, there may be an issue with the underlying host computer. For more information, see Troubleshooting Stopping Your Instance in the Amazon Elastic Compute Cloud User Guide.
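
A minimal sketch of the stop/start cycle described above for an EBS-backed instance, assuming the AWS SDK for Java 2.x; the instance ID is a placeholder.

```java
import software.amazon.awssdk.services.ec2.Ec2Client;
import software.amazon.awssdk.services.ec2.model.StartInstancesRequest;
import software.amazon.awssdk.services.ec2.model.StopInstancesRequest;

public class StopStartExample {
    public static void main(String[] args) {
        String instanceId = "i-1234567890abcdef0"; // placeholder EBS-backed instance ID
        try (Ec2Client ec2 = Ec2Client.create()) {
            // Stop the instance: the root EBS volume persists, but data in RAM does not.
            ec2.stopInstances(StopInstancesRequest.builder().instanceIds(instanceId).build());
            // ...later, start it again; billing resumes as described above.
            ec2.startInstances(StartInstancesRequest.builder().instanceIds(instanceId).build());
        }
    }
}
```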

" }, "TerminateInstances":{ "name":"TerminateInstances", @@ -2239,6 +2359,26 @@ "input":{"shape":"UnmonitorInstancesRequest"}, "output":{"shape":"UnmonitorInstancesResult"}, "documentation":"

Disables detailed monitoring for a running instance. For more information, see Monitoring Your Instances and Volumes in the Amazon Elastic Compute Cloud User Guide.

" + }, + "UpdateSecurityGroupRuleDescriptionsEgress":{ + "name":"UpdateSecurityGroupRuleDescriptionsEgress", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"UpdateSecurityGroupRuleDescriptionsEgressRequest"}, + "output":{"shape":"UpdateSecurityGroupRuleDescriptionsEgressResult"}, + "documentation":"

[EC2-VPC only] Updates the description of an egress (outbound) security group rule. You can replace an existing description, or add a description to a rule that did not have one previously.

You specify the description as part of the IP permissions structure. You can remove a description for a security group rule by omitting the description parameter in the request.

" + }, + "UpdateSecurityGroupRuleDescriptionsIngress":{ + "name":"UpdateSecurityGroupRuleDescriptionsIngress", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"UpdateSecurityGroupRuleDescriptionsIngressRequest"}, + "output":{"shape":"UpdateSecurityGroupRuleDescriptionsIngressResult"}, + "documentation":"

Updates the description of an ingress (inbound) security group rule. You can replace an existing description, or add a description to a rule that did not have one previously.

You specify the description as part of the IP permissions structure. You can remove a description for a security group rule by omitting the description parameter in the request.
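
For example, setting (or replacing) the description of an existing SSH rule might look like this sketch with the AWS SDK for Java 2.x; the group ID and CIDR are placeholders and must match the existing rule.

```java
import software.amazon.awssdk.services.ec2.Ec2Client;
import software.amazon.awssdk.services.ec2.model.IpPermission;
import software.amazon.awssdk.services.ec2.model.IpRange;
import software.amazon.awssdk.services.ec2.model.UpdateSecurityGroupRuleDescriptionsIngressRequest;

public class UpdateRuleDescriptionExample {
    public static void main(String[] args) {
        try (Ec2Client ec2 = Ec2Client.create()) {
            // The description travels inside the IP permissions structure that identifies the rule.
            ec2.updateSecurityGroupRuleDescriptionsIngress(
                    UpdateSecurityGroupRuleDescriptionsIngressRequest.builder()
                            .groupId("sg-0abc1234") // placeholder security group ID
                            .ipPermissions(IpPermission.builder()
                                    .ipProtocol("tcp")
                                    .fromPort(22)
                                    .toPort(22)
                                    .ipRanges(IpRange.builder()
                                            .cidrIp("203.0.113.0/24")
                                            .description("SSH from the office network")
                                            .build())
                                    .build())
                            .build());
        }
    }
}
```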

" } }, "shapes":{ @@ -2252,12 +2392,12 @@ }, "ReservedInstanceIds":{ "shape":"ReservedInstanceIdSet", - "documentation":"

The IDs of the Convertible Reserved Instances to exchange for other Convertible Reserved Instances of the same or higher value.

", + "documentation":"

The IDs of the Convertible Reserved Instances to exchange for another Convertible Reserved Instance of the same or higher value.

", "locationName":"ReservedInstanceId" }, "TargetConfigurations":{ "shape":"TargetConfigurationRequestSet", - "documentation":"

The configurations of the Convertible Reserved Instance offerings that you are purchasing in this exchange.

", + "documentation":"

The configuration of the target Convertible Reserved Instance to exchange for your current Convertible Reserved Instances.

", "locationName":"TargetConfiguration" } }, @@ -2465,6 +2605,10 @@ "shape":"DomainType", "documentation":"

Set to vpc to allocate the address for use with instances in a VPC.

Default: The address is for use with instances in EC2-Classic.

" }, + "Address":{ + "shape":"String", + "documentation":"

[EC2-VPC] The Elastic IP address to recover.

" + }, "DryRun":{ "shape":"Boolean", "documentation":"

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", @@ -2813,6 +2957,10 @@ "documentation":"

Requests an Amazon-provided IPv6 CIDR block with a /56 prefix length for the VPC. You cannot specify the range of IPv6 addresses, or the size of the CIDR block.

", "locationName":"amazonProvidedIpv6CidrBlock" }, + "CidrBlock":{ + "shape":"String", + "documentation":"

An IPv4 CIDR block to associate with the VPC.

" + }, "VpcId":{ "shape":"String", "documentation":"

The ID of the VPC.

", @@ -2828,6 +2976,11 @@ "documentation":"

Information about the IPv6 CIDR block association.

", "locationName":"ipv6CidrBlockAssociation" }, + "CidrBlockAssociation":{ + "shape":"VpcCidrBlockAssociation", + "documentation":"

Information about the IPv4 CIDR block association.

", + "locationName":"cidrBlockAssociation" + }, "VpcId":{ "shape":"String", "documentation":"

The ID of the VPC.

", @@ -2961,7 +3114,7 @@ "members":{ "Device":{ "shape":"String", - "documentation":"

The device name to expose to the instance (for example, /dev/sdh or xvdh).

" + "documentation":"

The device name (for example, /dev/sdh or xvdh).

" }, "InstanceId":{ "shape":"String", @@ -3060,37 +3213,37 @@ }, "IpPermissions":{ "shape":"IpPermissionList", - "documentation":"

A set of IP permissions. You can't specify a destination security group and a CIDR IP address range.

", + "documentation":"

One or more sets of IP permissions. You can't specify a destination security group and a CIDR IP address range in the same set of permissions.

", "locationName":"ipPermissions" }, "CidrIp":{ "shape":"String", - "documentation":"

The CIDR IPv4 address range. We recommend that you specify the CIDR range in a set of IP permissions instead.

", + "documentation":"

Not supported. Use a set of IP permissions to specify the CIDR.

", "locationName":"cidrIp" }, "FromPort":{ "shape":"Integer", - "documentation":"

The start of port range for the TCP and UDP protocols, or an ICMP type number. We recommend that you specify the port range in a set of IP permissions instead.

", + "documentation":"

Not supported. Use a set of IP permissions to specify the port.

", "locationName":"fromPort" }, "IpProtocol":{ "shape":"String", - "documentation":"

The IP protocol name or number. We recommend that you specify the protocol in a set of IP permissions instead.

", + "documentation":"

Not supported. Use a set of IP permissions to specify the protocol name or number.

", "locationName":"ipProtocol" }, "ToPort":{ "shape":"Integer", - "documentation":"

The end of port range for the TCP and UDP protocols, or an ICMP type number. We recommend that you specify the port range in a set of IP permissions instead.

", + "documentation":"

Not supported. Use a set of IP permissions to specify the port.

", "locationName":"toPort" }, "SourceSecurityGroupName":{ "shape":"String", - "documentation":"

The name of a destination security group. To authorize outbound access to a destination security group, we recommend that you use a set of IP permissions instead.

", + "documentation":"

Not supported. Use a set of IP permissions to specify a destination security group.

", "locationName":"sourceSecurityGroupName" }, "SourceSecurityGroupOwnerId":{ "shape":"String", - "documentation":"

The AWS account number for a destination security group. To authorize outbound access to a destination security group, we recommend that you use a set of IP permissions instead.

", + "documentation":"

Not supported. Use a set of IP permissions to specify a destination security group.

", "locationName":"sourceSecurityGroupOwnerId" } }, @@ -3105,19 +3258,19 @@ }, "FromPort":{ "shape":"Integer", - "documentation":"

The start of port range for the TCP and UDP protocols, or an ICMP/ICMPv6 type number. For the ICMP/ICMPv6 type number, use -1 to specify all types.

" + "documentation":"

The start of port range for the TCP and UDP protocols, or an ICMP/ICMPv6 type number. For the ICMP/ICMPv6 type number, use -1 to specify all types. If you specify all ICMP/ICMPv6 types, you must specify all codes.

" }, "GroupId":{ "shape":"String", - "documentation":"

The ID of the security group. Required for a nondefault VPC.

" + "documentation":"

The ID of the security group. You must specify either the security group ID or the security group name in the request. For security groups in a nondefault VPC, you must specify the security group ID.

" }, "GroupName":{ "shape":"String", - "documentation":"

[EC2-Classic, default VPC] The name of the security group.

" + "documentation":"

[EC2-Classic, default VPC] The name of the security group. You must specify either the security group ID or the security group name in the request.

" }, "IpPermissions":{ "shape":"IpPermissionList", - "documentation":"

A set of IP permissions. Can be used to specify multiple rules in a single command.

" + "documentation":"

One or more sets of IP permissions. Can be used to specify multiple rules in a single command.

" }, "IpProtocol":{ "shape":"String", @@ -3129,11 +3282,11 @@ }, "SourceSecurityGroupOwnerId":{ "shape":"String", - "documentation":"

[EC2-Classic] The AWS account number for the source security group, if the source security group is in a different account. You can't specify this parameter in combination with the following parameters: the CIDR IP address range, the IP protocol, the start of the port range, and the end of the port range. Creates rules that grant full ICMP, UDP, and TCP access. To create a rule with a specific IP protocol and port range, use a set of IP permissions instead.

" + "documentation":"

[EC2-Classic] The AWS account ID for the source security group, if the source security group is in a different account. You can't specify this parameter in combination with the following parameters: the CIDR IP address range, the IP protocol, the start of the port range, and the end of the port range. Creates rules that grant full ICMP, UDP, and TCP access. To create a rule with a specific IP protocol and port range, use a set of IP permissions instead.

" }, "ToPort":{ "shape":"Integer", - "documentation":"

The end of port range for the TCP and UDP protocols, or an ICMP/ICMPv6 code number. For the ICMP/ICMPv6 code number, use -1 to specify all codes.

" + "documentation":"

The end of port range for the TCP and UDP protocols, or an ICMP/ICMPv6 code number. For the ICMP/ICMPv6 code number, use -1 to specify all codes. If you specify all ICMP/ICMPv6 types, you must specify all codes.

" }, "DryRun":{ "shape":"Boolean", @@ -3267,7 +3420,7 @@ "members":{ "DeviceName":{ "shape":"String", - "documentation":"

The device name exposed to the instance (for example, /dev/sdh or xvdh).

", + "documentation":"

The device name (for example, /dev/sdh or xvdh).

", "locationName":"deviceName" }, "VirtualName":{ @@ -3739,6 +3892,24 @@ "locationName":"item" } }, + "CidrBlock":{ + "type":"structure", + "members":{ + "CidrBlock":{ + "shape":"String", + "documentation":"

The IPv4 CIDR block.

", + "locationName":"cidrBlock" + } + }, + "documentation":"

Describes an IPv4 CIDR block.

" + }, + "CidrBlockSet":{ + "type":"list", + "member":{ + "shape":"CidrBlock", + "locationName":"item" + } + }, "ClassicLinkDnsSupport":{ "type":"structure", "members":{ @@ -3795,6 +3966,39 @@ "locationName":"item" } }, + "ClassicLoadBalancer":{ + "type":"structure", + "required":["Name"], + "members":{ + "Name":{ + "shape":"String", + "documentation":"

The name of the load balancer.

", + "locationName":"name" + } + }, + "documentation":"

Describes a Classic Load Balancer.

" + }, + "ClassicLoadBalancers":{ + "type":"list", + "member":{ + "shape":"ClassicLoadBalancer", + "locationName":"item" + }, + "max":5, + "min":1 + }, + "ClassicLoadBalancersConfig":{ + "type":"structure", + "required":["ClassicLoadBalancers"], + "members":{ + "ClassicLoadBalancers":{ + "shape":"ClassicLoadBalancers", + "documentation":"

One or more Classic Load Balancers.

", + "locationName":"classicLoadBalancers" + } + }, + "documentation":"

Describes the Classic Load Balancers to attach to a Spot fleet. Spot fleet registers the running Spot instances with these Classic Load Balancers.

" + }, "ClientData":{ "type":"structure", "members":{ @@ -3921,6 +4125,49 @@ "completed" ] }, + "CopyFpgaImageRequest":{ + "type":"structure", + "required":[ + "SourceFpgaImageId", + "SourceRegion" + ], + "members":{ + "DryRun":{ + "shape":"Boolean", + "documentation":"

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

" + }, + "SourceFpgaImageId":{ + "shape":"String", + "documentation":"

The ID of the source AFI.

" + }, + "Description":{ + "shape":"String", + "documentation":"

The description for the new AFI.

" + }, + "Name":{ + "shape":"String", + "documentation":"

The name for the new AFI. The default is the name of the source AFI.

" + }, + "SourceRegion":{ + "shape":"String", + "documentation":"

The region that contains the source AFI.

" + }, + "ClientToken":{ + "shape":"String", + "documentation":"

Unique, case-sensitive identifier that you provide to ensure the idempotency of the request. For more information, see Ensuring Idempotency.

" + } + } + }, + "CopyFpgaImageResult":{ + "type":"structure", + "members":{ + "FpgaImageId":{ + "shape":"String", + "documentation":"

The ID of the new AFI.
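
A hedged sketch of a cross-region AFI copy using these parameters, assuming the AWS SDK for Java 2.x; the source AFI ID and region are placeholders.

```java
import software.amazon.awssdk.services.ec2.Ec2Client;
import software.amazon.awssdk.services.ec2.model.CopyFpgaImageRequest;
import software.amazon.awssdk.services.ec2.model.CopyFpgaImageResponse;

public class CopyAfiExample {
    public static void main(String[] args) {
        try (Ec2Client ec2 = Ec2Client.create()) {
            CopyFpgaImageResponse response = ec2.copyFpgaImage(CopyFpgaImageRequest.builder()
                    .sourceFpgaImageId("afi-0123456789abcdef0") // placeholder source AFI ID
                    .sourceRegion("us-east-1")                  // region that contains the source AFI
                    .name("my-afi-copy")                        // optional; defaults to the source AFI name
                    .description("Copy of my AFI")
                    .build());
            System.out.println("New AFI: " + response.fpgaImageId());
        }
    }
}
```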

", + "locationName":"fpgaImageId" + } + } + }, "CopyImageRequest":{ "type":"structure", "required":[ @@ -4076,6 +4323,51 @@ }, "documentation":"

Contains the output of CreateCustomerGateway.

" }, + "CreateDefaultSubnetRequest":{ + "type":"structure", + "required":["AvailabilityZone"], + "members":{ + "AvailabilityZone":{ + "shape":"String", + "documentation":"

The Availability Zone in which to create the default subnet.

" + }, + "DryRun":{ + "shape":"Boolean", + "documentation":"

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

" + } + } + }, + "CreateDefaultSubnetResult":{ + "type":"structure", + "members":{ + "Subnet":{ + "shape":"Subnet", + "documentation":"

Information about the subnet.

", + "locationName":"subnet" + } + } + }, + "CreateDefaultVpcRequest":{ + "type":"structure", + "members":{ + "DryRun":{ + "shape":"Boolean", + "documentation":"

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

" + } + }, + "documentation":"

Contains the parameters for CreateDefaultVpc.

" + }, + "CreateDefaultVpcResult":{ + "type":"structure", + "members":{ + "Vpc":{ + "shape":"Vpc", + "documentation":"

Information about the VPC.

", + "locationName":"vpc" + } + }, + "documentation":"

Contains the output of CreateDefaultVpc.
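
A minimal sketch that recreates a default VPC and then a default subnet in one Availability Zone, assuming the AWS SDK for Java 2.x; the Availability Zone name is a placeholder.

```java
import software.amazon.awssdk.services.ec2.Ec2Client;
import software.amazon.awssdk.services.ec2.model.CreateDefaultSubnetRequest;
import software.amazon.awssdk.services.ec2.model.CreateDefaultVpcRequest;

public class CreateDefaultVpcExample {
    public static void main(String[] args) {
        try (Ec2Client ec2 = Ec2Client.create()) {
            // Recreate a default VPC, then add a default subnet in the chosen Availability Zone.
            String vpcId = ec2.createDefaultVpc(CreateDefaultVpcRequest.builder().build())
                    .vpc().vpcId();
            String subnetId = ec2.createDefaultSubnet(CreateDefaultSubnetRequest.builder()
                            .availabilityZone("us-east-1a") // placeholder Availability Zone
                            .build())
                    .subnet().subnetId();
            System.out.println("Default VPC " + vpcId + " with subnet " + subnetId);
        }
    }
}
```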

" + }, "CreateDhcpOptionsRequest":{ "type":"structure", "required":["DhcpConfigurations"], @@ -4497,6 +4789,47 @@ }, "documentation":"

Contains the output of CreateNetworkAcl.

" }, + "CreateNetworkInterfacePermissionRequest":{ + "type":"structure", + "required":[ + "NetworkInterfaceId", + "Permission" + ], + "members":{ + "NetworkInterfaceId":{ + "shape":"String", + "documentation":"

The ID of the network interface.

" + }, + "AwsAccountId":{ + "shape":"String", + "documentation":"

The AWS account ID.

" + }, + "AwsService":{ + "shape":"String", + "documentation":"

The AWS service. Currently not supported.

" + }, + "Permission":{ + "shape":"InterfacePermissionType", + "documentation":"

The type of permission to grant.

" + }, + "DryRun":{ + "shape":"Boolean", + "documentation":"

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

" + } + }, + "documentation":"

Contains the parameters for CreateNetworkInterfacePermission.

" + }, + "CreateNetworkInterfacePermissionResult":{ + "type":"structure", + "members":{ + "InterfacePermission":{ + "shape":"NetworkInterfacePermission", + "documentation":"

Information about the permission for the network interface.

", + "locationName":"interfacePermission" + } + }, + "documentation":"

Contains the output of CreateNetworkInterfacePermission.
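
A hedged sketch of granting another account permission to attach the network interface, assuming the AWS SDK for Java 2.x and its generated InterfacePermissionType enum; the ENI and account IDs are placeholders.

```java
import software.amazon.awssdk.services.ec2.Ec2Client;
import software.amazon.awssdk.services.ec2.model.CreateNetworkInterfacePermissionRequest;
import software.amazon.awssdk.services.ec2.model.CreateNetworkInterfacePermissionResponse;
import software.amazon.awssdk.services.ec2.model.InterfacePermissionType;

public class GrantEniPermissionExample {
    public static void main(String[] args) {
        try (Ec2Client ec2 = Ec2Client.create()) {
            // Allow another account to attach this network interface to one of its instances.
            CreateNetworkInterfacePermissionResponse response = ec2.createNetworkInterfacePermission(
                    CreateNetworkInterfacePermissionRequest.builder()
                            .networkInterfaceId("eni-0123456789abcdef0") // placeholder ENI ID
                            .awsAccountId("123456789012")                // placeholder account ID
                            .permission(InterfacePermissionType.INSTANCE_ATTACH)
                            .build());
            System.out.println("Permission ID: "
                    + response.interfacePermission().networkInterfacePermissionId());
        }
    }
}
```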

" + }, "CreateNetworkInterfaceRequest":{ "type":"structure", "required":["SubnetId"], @@ -4969,34 +5302,52 @@ "CreateVpcEndpointRequest":{ "type":"structure", "required":[ - "ServiceName", - "VpcId" + "VpcId", + "ServiceName" ], "members":{ - "ClientToken":{ - "shape":"String", - "documentation":"

Unique, case-sensitive identifier you provide to ensure the idempotency of the request. For more information, see How to Ensure Idempotency.

" - }, "DryRun":{ "shape":"Boolean", "documentation":"

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

" }, + "VpcEndpointType":{ + "shape":"VpcEndpointType", + "documentation":"

The type of endpoint. If not specified, the default is a gateway endpoint.

" + }, + "VpcId":{ + "shape":"String", + "documentation":"

The ID of the VPC in which the endpoint will be used.

" + }, + "ServiceName":{ + "shape":"String", + "documentation":"

The AWS service name, in the form com.amazonaws.region.service. To get a list of available services, use the DescribeVpcEndpointServices request.

" + }, "PolicyDocument":{ "shape":"String", - "documentation":"

A policy to attach to the endpoint that controls access to the service. The policy must be in valid JSON format. If this parameter is not specified, we attach a default policy that allows full access to the service.

" + "documentation":"

(Gateway endpoint) A policy to attach to the endpoint that controls access to the service. The policy must be in valid JSON format. If this parameter is not specified, we attach a default policy that allows full access to the service.

" }, "RouteTableIds":{ "shape":"ValueStringList", - "documentation":"

One or more route table IDs.

", + "documentation":"

(Gateway endpoint) One or more route table IDs.

", "locationName":"RouteTableId" }, - "ServiceName":{ - "shape":"String", - "documentation":"

The AWS service name, in the form com.amazonaws.region.service . To get a list of available services, use the DescribeVpcEndpointServices request.

" + "SubnetIds":{ + "shape":"ValueStringList", + "documentation":"

(Interface endpoint) The ID of one or more subnets in which to create a network interface for the endpoint.

", + "locationName":"SubnetId" }, - "VpcId":{ + "SecurityGroupIds":{ + "shape":"ValueStringList", + "documentation":"

(Interface endpoint) The ID of one or more security groups to associate with the network interface.

", + "locationName":"SecurityGroupId" + }, + "ClientToken":{ "shape":"String", - "documentation":"

The ID of the VPC in which the endpoint will be used.

" + "documentation":"

Unique, case-sensitive identifier you provide to ensure the idempotency of the request. For more information, see How to Ensure Idempotency.

" + }, + "PrivateDnsEnabled":{ + "shape":"Boolean", + "documentation":"

(Interface endpoint) Indicate whether to associate a private hosted zone with the specified VPC. The private hosted zone contains a record set for the default public DNS name for the service for the region (for example, kinesis.us-east-1.amazonaws.com) which resolves to the private IP addresses of the endpoint network interfaces in the VPC. This enables you to make requests to the default public DNS name for the service instead of the public DNS names that are automatically generated by the VPC endpoint service.

To use a private hosted zone, you must set the following VPC attributes to true: enableDnsHostnames and enableDnsSupport. Use ModifyVpcAttribute to set the VPC attributes.

Default: true

" } }, "documentation":"

Contains the parameters for CreateVpcEndpoint.
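
A hedged sketch of creating an interface endpoint with private DNS enabled, assuming the AWS SDK for Java 2.x; the VPC, subnet, and security group IDs and the service name are placeholders, and the VPC attributes noted above must already be set to true.

```java
import software.amazon.awssdk.services.ec2.Ec2Client;
import software.amazon.awssdk.services.ec2.model.CreateVpcEndpointRequest;
import software.amazon.awssdk.services.ec2.model.CreateVpcEndpointResponse;
import software.amazon.awssdk.services.ec2.model.VpcEndpointType;

public class CreateInterfaceEndpointExample {
    public static void main(String[] args) {
        try (Ec2Client ec2 = Ec2Client.create()) {
            // Interface endpoint with a private hosted zone for the service's default DNS name.
            CreateVpcEndpointResponse response = ec2.createVpcEndpoint(CreateVpcEndpointRequest.builder()
                    .vpcId("vpc-0abc1234")                         // placeholder VPC ID
                    .serviceName("com.amazonaws.us-east-1.kinesis") // placeholder service name
                    .vpcEndpointType(VpcEndpointType.INTERFACE)
                    .subnetIds("subnet-0abc1234")                  // placeholder subnet ID
                    .securityGroupIds("sg-0abc1234")               // placeholder security group ID
                    .privateDnsEnabled(true)
                    .build());
            System.out.println("Endpoint: " + response.vpcEndpoint().vpcEndpointId());
        }
    }
}
```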

" @@ -5004,15 +5355,15 @@ "CreateVpcEndpointResult":{ "type":"structure", "members":{ - "ClientToken":{ - "shape":"String", - "documentation":"

Unique, case-sensitive identifier you provide to ensure the idempotency of the request.

", - "locationName":"clientToken" - }, "VpcEndpoint":{ "shape":"VpcEndpoint", "documentation":"

Information about the endpoint.

", "locationName":"vpcEndpoint" + }, + "ClientToken":{ + "shape":"String", + "documentation":"

Unique, case-sensitive identifier you provide to ensure the idempotency of the request.

", + "locationName":"clientToken" } }, "documentation":"

Contains the output of CreateVpcEndpoint.

" @@ -5118,7 +5469,7 @@ }, "Options":{ "shape":"VpnConnectionOptionsSpecification", - "documentation":"

Indicates whether the VPN connection requires static routes. If you are creating a VPN connection for a device that does not support BGP, you must specify true.

Default: false

", + "documentation":"

The options for the VPN connection.

", "locationName":"options" } }, @@ -5165,6 +5516,10 @@ "shape":"GatewayType", "documentation":"

The type of VPN connection this virtual private gateway supports.

" }, + "AmazonSideAsn":{ + "shape":"Long", + "documentation":"

A private Autonomous System Number (ASN) for the Amazon side of a BGP session. If you're using a 16-bit ASN, it must be in the 64512 to 65534 range. If you're using a 32-bit ASN, it must be in the 4200000000 to 4294967294 range.

Default: 64512

" + }, "DryRun":{ "shape":"Boolean", "documentation":"

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", @@ -5325,16 +5680,40 @@ }, "documentation":"

Contains the output of DeleteFlowLogs.

" }, - "DeleteInternetGatewayRequest":{ + "DeleteFpgaImageRequest":{ "type":"structure", - "required":["InternetGatewayId"], + "required":["FpgaImageId"], "members":{ "DryRun":{ "shape":"Boolean", - "documentation":"

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", - "locationName":"dryRun" + "documentation":"

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

" }, - "InternetGatewayId":{ + "FpgaImageId":{ + "shape":"String", + "documentation":"

The ID of the AFI.

" + } + } + }, + "DeleteFpgaImageResult":{ + "type":"structure", + "members":{ + "Return":{ + "shape":"Boolean", + "documentation":"

Returns true if the request succeeds; otherwise, it returns an error.

", + "locationName":"return" + } + } + }, + "DeleteInternetGatewayRequest":{ + "type":"structure", + "required":["InternetGatewayId"], + "members":{ + "DryRun":{ + "shape":"Boolean", + "documentation":"

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", + "locationName":"dryRun" + }, + "InternetGatewayId":{ "shape":"String", "documentation":"

The ID of the Internet gateway.

", "locationName":"internetGatewayId" @@ -5428,6 +5807,36 @@ }, "documentation":"

Contains the parameters for DeleteNetworkAcl.

" }, + "DeleteNetworkInterfacePermissionRequest":{ + "type":"structure", + "required":["NetworkInterfacePermissionId"], + "members":{ + "NetworkInterfacePermissionId":{ + "shape":"String", + "documentation":"

The ID of the network interface permission.

" + }, + "Force":{ + "shape":"Boolean", + "documentation":"

Specify true to remove the permission even if the network interface is attached to an instance.

" + }, + "DryRun":{ + "shape":"Boolean", + "documentation":"

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

" + } + }, + "documentation":"

Contains the parameters for DeleteNetworkInterfacePermission.

" + }, + "DeleteNetworkInterfacePermissionResult":{ + "type":"structure", + "members":{ + "Return":{ + "shape":"Boolean", + "documentation":"

Returns true if the request succeeds; otherwise, it returns an error.

", + "locationName":"return" + } + }, + "documentation":"

Contains the output for DeleteNetworkInterfacePermission.

" + }, "DeleteNetworkInterfaceRequest":{ "type":"structure", "required":["NetworkInterfaceId"], @@ -5579,12 +5988,12 @@ }, "Resources":{ "shape":"ResourceIdList", - "documentation":"

The ID of the resource. For example, ami-1a2b3c4d. You can specify more than one resource ID.

", + "documentation":"

The IDs of one or more resources.

", "locationName":"resourceId" }, "Tags":{ "shape":"TagList", - "documentation":"

One or more tags to delete. If you omit the value parameter, we delete the tag regardless of its value. If you specify this parameter with an empty string as the value, we delete the key only if its value is an empty string.

", + "documentation":"

One or more tags to delete. If you omit this parameter, we delete all tags for the specified resources. Specify a tag key and an optional tag value to delete specific tags. If you specify a tag key without a tag value, we delete any tag with this key regardless of its value. If you specify a tag key with an empty string as the tag value, we delete the tag only if its value is an empty string.
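
A minimal sketch of the two deletion forms described above (key only, and key with an empty value), assuming the AWS SDK for Java 2.x; the resource ID and tag keys are placeholders.

```java
import software.amazon.awssdk.services.ec2.Ec2Client;
import software.amazon.awssdk.services.ec2.model.DeleteTagsRequest;
import software.amazon.awssdk.services.ec2.model.Tag;

public class DeleteTagsExample {
    public static void main(String[] args) {
        try (Ec2Client ec2 = Ec2Client.create()) {
            ec2.deleteTags(DeleteTagsRequest.builder()
                    .resources("i-1234567890abcdef0")                 // placeholder resource ID
                    // Key only: delete the "Stack" tag regardless of its value.
                    // Key with empty value: delete "Owner" only if its value is an empty string.
                    .tags(Tag.builder().key("Stack").build(),
                          Tag.builder().key("Owner").value("").build())
                    .build());
        }
    }
}
```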

", "locationName":"tag" } }, @@ -6053,6 +6462,53 @@ } } }, + "DescribeElasticGpusRequest":{ + "type":"structure", + "members":{ + "ElasticGpuIds":{ + "shape":"ElasticGpuIdSet", + "documentation":"

One or more Elastic GPU IDs.

", + "locationName":"ElasticGpuId" + }, + "DryRun":{ + "shape":"Boolean", + "documentation":"

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

" + }, + "Filters":{ + "shape":"FilterList", + "documentation":"

One or more filters.

", + "locationName":"Filter" + }, + "MaxResults":{ + "shape":"Integer", + "documentation":"

The maximum number of results to return in a single call. To retrieve the remaining results, make another call with the returned NextToken value. This value can be between 5 and 1000.

" + }, + "NextToken":{ + "shape":"String", + "documentation":"

The token to request the next page of results.

" + } + } + }, + "DescribeElasticGpusResult":{ + "type":"structure", + "members":{ + "ElasticGpuSet":{ + "shape":"ElasticGpuSet", + "documentation":"

Information about the Elastic GPUs.

", + "locationName":"elasticGpuSet" + }, + "MaxResults":{ + "shape":"Integer", + "documentation":"

The total number of items to return. If the total number of items available is more than the value specified in MaxResults, a NextToken is provided in the output so that you can use it to resume pagination.

", + "locationName":"maxResults" + }, + "NextToken":{ + "shape":"String", + "documentation":"

The token to use to retrieve the next page of results. This value is null when there are no more results to return.

", + "locationName":"nextToken" + } + } + }, "DescribeExportTasksRequest":{ "type":"structure", "members":{ @@ -6114,6 +6570,37 @@ }, "documentation":"

Contains the output of DescribeFlowLogs.

" }, + "DescribeFpgaImageAttributeRequest":{ + "type":"structure", + "required":[ + "FpgaImageId", + "Attribute" + ], + "members":{ + "DryRun":{ + "shape":"Boolean", + "documentation":"

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

" + }, + "FpgaImageId":{ + "shape":"String", + "documentation":"

The ID of the AFI.

" + }, + "Attribute":{ + "shape":"FpgaImageAttributeName", + "documentation":"

The AFI attribute.

" + } + } + }, + "DescribeFpgaImageAttributeResult":{ + "type":"structure", + "members":{ + "FpgaImageAttribute":{ + "shape":"FpgaImageAttribute", + "documentation":"

Information about the attribute.

", + "locationName":"fpgaImageAttribute" + } + } + }, "DescribeFpgaImagesRequest":{ "type":"structure", "members":{ @@ -6403,7 +6890,7 @@ }, "Filters":{ "shape":"FilterList", - "documentation":"

One or more filters.

", + "documentation":"

One or more filters.

", "locationName":"Filter" }, "ImageIds":{ @@ -6601,7 +7088,7 @@ "members":{ "Filters":{ "shape":"FilterList", - "documentation":"

One or more filters.

", + "documentation":"

One or more filters.

", "locationName":"Filter" }, "InstanceIds":{ @@ -6759,7 +7246,7 @@ "members":{ "Filter":{ "shape":"FilterList", - "documentation":"

One or more filters.

" + "documentation":"

One or more filters.

" }, "MaxResults":{ "shape":"Integer", @@ -6878,6 +7365,46 @@ }, "documentation":"

Contains the output of DescribeNetworkInterfaceAttribute.

" }, + "DescribeNetworkInterfacePermissionsRequest":{ + "type":"structure", + "members":{ + "NetworkInterfacePermissionIds":{ + "shape":"NetworkInterfacePermissionIdList", + "documentation":"

One or more network interface permission IDs.

", + "locationName":"NetworkInterfacePermissionId" + }, + "Filters":{ + "shape":"FilterList", + "documentation":"

One or more filters.

", + "locationName":"Filter" + }, + "NextToken":{ + "shape":"String", + "documentation":"

The token to request the next page of results.

" + }, + "MaxResults":{ + "shape":"Integer", + "documentation":"

The maximum number of results to return in a single call. To retrieve the remaining results, make another call with the returned NextToken value. If this parameter is not specified, up to 50 results are returned by default.

" + } + }, + "documentation":"

Contains the parameters for DescribeNetworkInterfacePermissions.

" + }, + "DescribeNetworkInterfacePermissionsResult":{ + "type":"structure", + "members":{ + "NetworkInterfacePermissions":{ + "shape":"NetworkInterfacePermissionList", + "documentation":"

The network interface permissions.

", + "locationName":"networkInterfacePermissions" + }, + "NextToken":{ + "shape":"String", + "documentation":"

The token to use to retrieve the next page of results.

", + "locationName":"nextToken" + } + }, + "documentation":"

Contains the output for DescribeNetworkInterfacePermissions.

" + }, "DescribeNetworkInterfacesRequest":{ "type":"structure", "members":{ @@ -7385,7 +7912,7 @@ "members":{ "Filters":{ "shape":"FilterList", - "documentation":"

One or more filters. If using multiple filters for rules, the results include security groups for which any combination of rules - not necessarily a single rule - match all filters.

", + "documentation":"

One or more filters. If using multiple filters for rules, the results include security groups for which any combination of rules - not necessarily a single rule - match all filters.

", "locationName":"Filter" }, "GroupIds":{ @@ -7402,6 +7929,14 @@ "shape":"Boolean", "documentation":"

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

", "locationName":"dryRun" + }, + "NextToken":{ + "shape":"String", + "documentation":"

The token to request the next page of results.

" + }, + "MaxResults":{ + "shape":"Integer", + "documentation":"

The maximum number of results to return in a single call. To retrieve the remaining results, make another request with the returned NextToken value. This value can be between 5 and 1000.

" } }, "documentation":"

Contains the parameters for DescribeSecurityGroups.

" @@ -7413,6 +7948,11 @@ "shape":"SecurityGroupList", "documentation":"

Information about one or more security groups.

", "locationName":"securityGroupInfo" + }, + "NextToken":{ + "shape":"String", + "documentation":"

The token to use to retrieve the next page of results. This value is null when there are no more results to return.

", + "locationName":"nextToken" } }, "documentation":"

Contains the output of DescribeSecurityGroups.

" @@ -7466,7 +8006,7 @@ "members":{ "Filters":{ "shape":"FilterList", - "documentation":"

One or more filters.

", + "documentation":"

One or more filters.

", "locationName":"Filter" }, "MaxResults":{ @@ -7715,7 +8255,7 @@ "members":{ "Filters":{ "shape":"FilterList", - "documentation":"

One or more filters.

", + "documentation":"

One or more filters.

", "locationName":"Filter" }, "DryRun":{ @@ -8053,7 +8593,7 @@ "members":{ "Filters":{ "shape":"FilterList", - "documentation":"

One or more filters.

", + "documentation":"

One or more filters.

", "locationName":"Filter" }, "VolumeIds":{ @@ -8214,6 +8754,16 @@ "shape":"Boolean", "documentation":"

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

" }, + "ServiceNames":{ + "shape":"ValueStringList", + "documentation":"

One or more service names.

", + "locationName":"ServiceName" + }, + "Filters":{ + "shape":"FilterList", + "documentation":"

One or more filters.

", + "locationName":"Filter" + }, "MaxResults":{ "shape":"Integer", "documentation":"

The maximum number of items to return for this request. The request returns a token that you can specify in a subsequent call to get the next set of results.

Constraint: If the value is greater than 1000, we return only 1000 items.

" @@ -8228,15 +8778,20 @@ "DescribeVpcEndpointServicesResult":{ "type":"structure", "members":{ - "NextToken":{ - "shape":"String", - "documentation":"

The token to use when requesting the next set of items. If there are no additional items to return, the string is empty.

", - "locationName":"nextToken" - }, "ServiceNames":{ "shape":"ValueStringList", "documentation":"

A list of supported AWS services.

", "locationName":"serviceNameSet" + }, + "ServiceDetails":{ + "shape":"ServiceDetailSet", + "documentation":"

Information about the service.

", + "locationName":"serviceDetailSet" + }, + "NextToken":{ + "shape":"String", + "documentation":"

The token to use when requesting the next set of items. If there are no additional items to return, the string is empty.

", + "locationName":"nextToken" } }, "documentation":"

Contains the output of DescribeVpcEndpointServices.

" @@ -8248,6 +8803,11 @@ "shape":"Boolean", "documentation":"

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

" }, + "VpcEndpointIds":{ + "shape":"ValueStringList", + "documentation":"

One or more endpoint IDs.

", + "locationName":"VpcEndpointId" + }, "Filters":{ "shape":"FilterList", "documentation":"

One or more filters.

", @@ -8260,11 +8820,6 @@ "NextToken":{ "shape":"String", "documentation":"

The token for the next set of items to return. (You received this token from a prior call.)

" - }, - "VpcEndpointIds":{ - "shape":"ValueStringList", - "documentation":"

One or more endpoint IDs.

", - "locationName":"VpcEndpointId" } }, "documentation":"

Contains the parameters for DescribeVpcEndpoints.

" @@ -8272,15 +8827,15 @@ "DescribeVpcEndpointsResult":{ "type":"structure", "members":{ - "NextToken":{ - "shape":"String", - "documentation":"

The token to use when requesting the next set of items. If there are no additional items to return, the string is empty.

", - "locationName":"nextToken" - }, "VpcEndpoints":{ "shape":"VpcEndpointSet", "documentation":"

Information about the endpoints.

", "locationName":"vpcEndpointSet" + }, + "NextToken":{ + "shape":"String", + "documentation":"

The token to use when requesting the next set of items. If there are no additional items to return, the string is empty.

", + "locationName":"nextToken" } }, "documentation":"

Contains the output of DescribeVpcEndpoints.

" @@ -8322,7 +8877,7 @@ "members":{ "Filters":{ "shape":"FilterList", - "documentation":"

One or more filters.

", + "documentation":"

One or more filters.

", "locationName":"Filter" }, "VpcIds":{ @@ -8386,7 +8941,7 @@ "members":{ "Filters":{ "shape":"FilterList", - "documentation":"

One or more filters.

", + "documentation":"

One or more filters.

", "locationName":"Filter" }, "VpnGatewayIds":{ @@ -8787,6 +9342,11 @@ "documentation":"

Information about the IPv6 CIDR block association.

", "locationName":"ipv6CidrBlockAssociation" }, + "CidrBlockAssociation":{ + "shape":"VpcCidrBlockAssociation", + "documentation":"

Information about the IPv4 CIDR block association.

", + "locationName":"cidrBlockAssociation" + }, "VpcId":{ "shape":"String", "documentation":"

The ID of the VPC.

", @@ -8898,6 +9458,29 @@ }, "documentation":"

Describes a disk image volume.

" }, + "DnsEntry":{ + "type":"structure", + "members":{ + "DnsName":{ + "shape":"String", + "documentation":"

The DNS name.

", + "locationName":"dnsName" + }, + "HostedZoneId":{ + "shape":"String", + "documentation":"

The ID of the private hosted zone.

", + "locationName":"hostedZoneId" + } + }, + "documentation":"

Describes a DNS entry.

" + }, + "DnsEntrySet":{ + "type":"list", + "member":{ + "shape":"DnsEntry", + "locationName":"item" + } + }, "DomainType":{ "type":"string", "enum":[ @@ -8911,7 +9494,7 @@ "members":{ "Encrypted":{ "shape":"Boolean", - "documentation":"

Indicates whether the EBS volume is encrypted. Encrypted Amazon EBS volumes may only be attached to instances that support Amazon EBS encryption.

", + "documentation":"

Indicates whether the EBS volume is encrypted. Encrypted volumes can only be attached to instances that support Amazon EBS encryption. If you are creating a volume from a snapshot, you can't specify an encryption value. This is because only blank volumes can be encrypted on creation.

", "locationName":"encrypted" }, "DeleteOnTermination":{ @@ -9015,6 +9598,129 @@ "locationName":"item" } }, + "ElasticGpuAssociation":{ + "type":"structure", + "members":{ + "ElasticGpuId":{ + "shape":"String", + "documentation":"

The ID of the Elastic GPU.

", + "locationName":"elasticGpuId" + }, + "ElasticGpuAssociationId":{ + "shape":"String", + "documentation":"

The ID of the association.

", + "locationName":"elasticGpuAssociationId" + }, + "ElasticGpuAssociationState":{ + "shape":"String", + "documentation":"

The state of the association between the instance and the Elastic GPU.

", + "locationName":"elasticGpuAssociationState" + }, + "ElasticGpuAssociationTime":{ + "shape":"String", + "documentation":"

The time the Elastic GPU was associated with the instance.

", + "locationName":"elasticGpuAssociationTime" + } + }, + "documentation":"

Describes the association between an instance and an Elastic GPU.

" + }, + "ElasticGpuAssociationList":{ + "type":"list", + "member":{ + "shape":"ElasticGpuAssociation", + "locationName":"item" + } + }, + "ElasticGpuHealth":{ + "type":"structure", + "members":{ + "Status":{ + "shape":"ElasticGpuStatus", + "documentation":"

The health status.

", + "locationName":"status" + } + }, + "documentation":"

Describes the status of an Elastic GPU.

" + }, + "ElasticGpuIdSet":{ + "type":"list", + "member":{ + "shape":"String", + "locationName":"item" + } + }, + "ElasticGpuSet":{ + "type":"list", + "member":{ + "shape":"ElasticGpus", + "locationName":"item" + } + }, + "ElasticGpuSpecification":{ + "type":"structure", + "required":["Type"], + "members":{ + "Type":{ + "shape":"String", + "documentation":"

The type of Elastic GPU.

" + } + }, + "documentation":"

A specification for an Elastic GPU.

" + }, + "ElasticGpuSpecifications":{ + "type":"list", + "member":{ + "shape":"ElasticGpuSpecification", + "locationName":"item" + } + }, + "ElasticGpuState":{ + "type":"string", + "enum":["ATTACHED"] + }, + "ElasticGpuStatus":{ + "type":"string", + "enum":[ + "OK", + "IMPAIRED" + ] + }, + "ElasticGpus":{ + "type":"structure", + "members":{ + "ElasticGpuId":{ + "shape":"String", + "documentation":"

The ID of the Elastic GPU.

", + "locationName":"elasticGpuId" + }, + "AvailabilityZone":{ + "shape":"String", + "documentation":"

The Availability Zone in which the Elastic GPU resides.

", + "locationName":"availabilityZone" + }, + "ElasticGpuType":{ + "shape":"String", + "documentation":"

The type of Elastic GPU.

", + "locationName":"elasticGpuType" + }, + "ElasticGpuHealth":{ + "shape":"ElasticGpuHealth", + "documentation":"

The status of the Elastic GPU.

", + "locationName":"elasticGpuHealth" + }, + "ElasticGpuState":{ + "shape":"ElasticGpuState", + "documentation":"

The state of the Elastic GPU.

", + "locationName":"elasticGpuState" + }, + "InstanceId":{ + "shape":"String", + "documentation":"

The ID of the instance to which the Elastic GPU is attached.

", + "locationName":"instanceId" + } + }, + "documentation":"

Describes an Elastic GPU.

" + }, "EnableVgwRoutePropagationRequest":{ "type":"structure", "required":[ @@ -9119,7 +9825,7 @@ }, "EventSubType":{ "shape":"String", - "documentation":"

The event.

The following are the error events.

The following are the fleetRequestChange events.

The following are the instanceChange events.

", + "documentation":"

The event.

The following are the error events:

The following are the fleetRequestChange events:

The following are the instanceChange events:

The following are the Information events:

", "locationName":"eventSubType" }, "InstanceId":{ @@ -9434,9 +10140,54 @@ "shape":"TagList", "documentation":"

Any tags assigned to the AFI.

", "locationName":"tags" + }, + "Public":{ + "shape":"Boolean", + "documentation":"

Indicates whether the AFI is public.

", + "locationName":"public" + } + }, + "documentation":"

Describes an Amazon FPGA image (AFI).

" + }, + "FpgaImageAttribute":{ + "type":"structure", + "members":{ + "FpgaImageId":{ + "shape":"String", + "documentation":"

The ID of the AFI.

", + "locationName":"fpgaImageId" + }, + "Name":{ + "shape":"String", + "documentation":"

The name of the AFI.

", + "locationName":"name" + }, + "Description":{ + "shape":"String", + "documentation":"

The description of the AFI.

", + "locationName":"description" + }, + "LoadPermissions":{ + "shape":"LoadPermissionList", + "documentation":"

One or more load permissions.

", + "locationName":"loadPermissions" + }, + "ProductCodes":{ + "shape":"ProductCodeList", + "documentation":"

One or more product codes.

", + "locationName":"productCodes" } }, - "documentation":"

Describes an Amazon FPGA image (AFI).

" + "documentation":"

Describes an Amazon FPGA image (AFI) attribute.

" + }, + "FpgaImageAttributeName":{ + "type":"string", + "enum":[ + "description", + "name", + "loadPermission", + "productCodes" + ] }, "FpgaImageIdList":{ "type":"list", @@ -9621,7 +10372,7 @@ }, "PasswordData":{ "shape":"String", - "documentation":"

The password of the instance.

", + "documentation":"

The password of the instance. Returns an empty string if the password is not available.

", "locationName":"passwordData" }, "Timestamp":{ @@ -9647,7 +10398,7 @@ }, "TargetConfigurations":{ "shape":"TargetConfigurationRequestSet", - "documentation":"

The configuration requirements of the Convertible Reserved Instances to exchange for your current Convertible Reserved Instances.

", + "documentation":"

The configuration of the target Convertible Reserved Instance to exchange for your current Convertible Reserved Instances.

", "locationName":"TargetConfiguration" } }, @@ -9734,6 +10485,13 @@ "locationName":"item" } }, + "GroupIdentifierSet":{ + "type":"list", + "member":{ + "shape":"SecurityGroupIdentifier", + "locationName":"item" + } + }, "GroupIds":{ "type":"list", "member":{ @@ -9763,7 +10521,7 @@ }, "EventType":{ "shape":"EventType", - "documentation":"

The event type.

", + "documentation":"

The event type.

", "locationName":"eventType" }, "Timestamp":{ @@ -9905,7 +10663,10 @@ }, "HostOfferingSet":{ "type":"list", - "member":{"shape":"HostOffering"} + "member":{ + "shape":"HostOffering", + "locationName":"item" + } }, "HostProperties":{ "type":"structure", @@ -10013,7 +10774,10 @@ }, "HostReservationSet":{ "type":"list", - "member":{"shape":"HostReservation"} + "member":{ + "shape":"HostReservation", + "locationName":"item" + } }, "HostTenancy":{ "type":"string", @@ -10247,7 +11011,7 @@ }, "RootDeviceName":{ "shape":"String", - "documentation":"

The device name of the root device (for example, /dev/sda1 or /dev/xvda).

", + "documentation":"

The device name of the root device volume (for example, /dev/sda1).

", "locationName":"rootDeviceName" }, "RootDeviceType":{ @@ -11035,7 +11799,7 @@ }, "PrivateDnsName":{ "shape":"String", - "documentation":"

(IPv4 only) The private DNS hostname name assigned to the instance. This DNS hostname can only be used inside the Amazon EC2 network. This name is not available until the instance enters the running state.

[EC2-VPC] The Amazon-provided DNS server will resolve Amazon-provided private DNS hostnames if you've enabled DNS resolution and DNS hostnames in your VPC. If you are not using the Amazon-provided DNS server in your VPC, your custom domain name servers must resolve the hostname as appropriate.

", + "documentation":"

(IPv4 only) The private DNS hostname assigned to the instance. This DNS hostname can only be used inside the Amazon EC2 network. This name is not available until the instance enters the running state.

[EC2-VPC] The Amazon-provided DNS server resolves Amazon-provided private DNS hostnames if you've enabled DNS resolution and DNS hostnames in your VPC. If you are not using the Amazon-provided DNS server in your VPC, your custom domain name servers must resolve the hostname as appropriate.

", "locationName":"privateDnsName" }, "PrivateIpAddress":{ @@ -11100,7 +11864,7 @@ }, "EbsOptimized":{ "shape":"Boolean", - "documentation":"

Indicates whether the instance is optimized for EBS I/O. This optimization provides dedicated throughput to Amazon EBS and an optimized configuration stack to provide optimal I/O performance. This optimization isn't available with all instance types. Additional usage charges apply when using an EBS Optimized instance.

", + "documentation":"

Indicates whether the instance is optimized for Amazon EBS I/O. This optimization provides dedicated throughput to Amazon EBS and an optimized configuration stack to provide optimal I/O performance. This optimization isn't available with all instance types. Additional usage charges apply when using an EBS Optimized instance.

", "locationName":"ebsOptimized" }, "EnaSupport":{ @@ -11120,9 +11884,14 @@ }, "InstanceLifecycle":{ "shape":"InstanceLifecycleType", - "documentation":"

Indicates whether this is a Spot instance or a Scheduled Instance.

", + "documentation":"

Indicates whether this is a Spot Instance or a Scheduled Instance.

", "locationName":"instanceLifecycle" }, + "ElasticGpuAssociations":{ + "shape":"ElasticGpuAssociationList", + "documentation":"

The Elastic GPU associated with the instance.

", + "locationName":"elasticGpuAssociationSet" + }, "NetworkInterfaces":{ "shape":"InstanceNetworkInterfaceList", "documentation":"

[EC2-VPC] One or more network interfaces for the instance.

", @@ -11130,7 +11899,7 @@ }, "RootDeviceName":{ "shape":"String", - "documentation":"

The root device name (for example, /dev/sda1 or /dev/xvda).

", + "documentation":"

The device name of the root device volume (for example, /dev/sda1).

", "locationName":"rootDeviceName" }, "RootDeviceType":{ @@ -11145,12 +11914,12 @@ }, "SourceDestCheck":{ "shape":"Boolean", - "documentation":"

Specifies whether to enable an instance launched in a VPC to perform NAT. This controls whether source/destination checking is enabled on the instance. A value of true means checking is enabled, and false means checking is disabled. The value must be false for the instance to perform NAT. For more information, see NAT Instances in the Amazon Virtual Private Cloud User Guide.

", + "documentation":"

Specifies whether to enable an instance launched in a VPC to perform NAT. This controls whether source/destination checking is enabled on the instance. A value of true means that checking is enabled, and false means that checking is disabled. The value must be false for the instance to perform NAT. For more information, see NAT Instances in the Amazon Virtual Private Cloud User Guide.

", "locationName":"sourceDestCheck" }, "SpotInstanceRequestId":{ "shape":"String", - "documentation":"

If the request is a Spot instance request, the ID of the request.

", + "documentation":"

If the request is a Spot Instance request, the ID of the request.

", "locationName":"spotInstanceRequestId" }, "SriovNetSupport":{ @@ -11201,7 +11970,7 @@ }, "EbsOptimized":{ "shape":"AttributeBooleanValue", - "documentation":"

Indicates whether the instance is optimized for EBS I/O.

", + "documentation":"

Indicates whether the instance is optimized for Amazon EBS I/O.

", "locationName":"ebsOptimized" }, "InstanceId":{ @@ -11236,12 +12005,12 @@ }, "RootDeviceName":{ "shape":"AttributeValue", - "documentation":"

The name of the root device (for example, /dev/sda1 or /dev/xvda).

", + "documentation":"

The device name of the root device volume (for example, /dev/sda1).

", "locationName":"rootDeviceName" }, "SourceDestCheck":{ "shape":"AttributeBooleanValue", - "documentation":"

Indicates whether source/destination checking is enabled. A value of true means checking is enabled, and false means checking is disabled. This value must be false for a NAT instance to perform NAT.

", + "documentation":"

Indicates whether source/destination checking is enabled. A value of true means that checking is enabled, and false means that checking is disabled. This value must be false for a NAT instance to perform NAT.

", "locationName":"sourceDestCheck" }, "SriovNetSupport":{ @@ -11281,7 +12050,7 @@ "members":{ "DeviceName":{ "shape":"String", - "documentation":"

The device name exposed to the instance (for example, /dev/sdh or xvdh).

", + "documentation":"

The device name (for example, /dev/sdh or xvdh).

", "locationName":"deviceName" }, "Ebs":{ @@ -11304,7 +12073,7 @@ "members":{ "DeviceName":{ "shape":"String", - "documentation":"

The device name exposed to the instance (for example, /dev/sdh or xvdh).

", + "documentation":"

The device name (for example, /dev/sdh or xvdh).

", "locationName":"deviceName" }, "Ebs":{ @@ -11413,6 +12182,13 @@ "locationName":"InstanceId" } }, + "InstanceInterruptionBehavior":{ + "type":"string", + "enum":[ + "stop", + "terminate" + ] + }, "InstanceIpv6Address":{ "type":"structure", "members":{ @@ -11933,6 +12709,12 @@ "r4.16xlarge", "x1.16xlarge", "x1.32xlarge", + "x1e.xlarge", + "x1e.2xlarge", + "x1e.4xlarge", + "x1e.8xlarge", + "x1e.16xlarge", + "x1e.32xlarge", "i2.xlarge", "i2.2xlarge", "i2.4xlarge", @@ -11957,14 +12739,26 @@ "c4.2xlarge", "c4.4xlarge", "c4.8xlarge", + "c5.large", + "c5.xlarge", + "c5.2xlarge", + "c5.4xlarge", + "c5.9xlarge", + "c5.18xlarge", "cc1.4xlarge", "cc2.8xlarge", "g2.2xlarge", "g2.8xlarge", + "g3.4xlarge", + "g3.8xlarge", + "g3.16xlarge", "cg1.4xlarge", "p2.xlarge", "p2.8xlarge", "p2.16xlarge", + "p3.2xlarge", + "p3.8xlarge", + "p3.16xlarge", "d2.xlarge", "d2.2xlarge", "d2.4xlarge", @@ -11978,6 +12772,13 @@ "member":{"shape":"InstanceType"} }, "Integer":{"type":"integer"}, + "InterfacePermissionType":{ + "type":"string", + "enum":[ + "INSTANCE-ATTACH", + "EIP-ASSOCIATE" + ] + }, "InternetGateway":{ "type":"structure", "members":{ @@ -12034,7 +12835,7 @@ "members":{ "FromPort":{ "shape":"Integer", - "documentation":"

The start of port range for the TCP and UDP protocols, or an ICMP/ICMPv6 type number. A value of -1 indicates all ICMP/ICMPv6 types.

", + "documentation":"

The start of the port range for the TCP and UDP protocols, or an ICMP/ICMPv6 type number. A value of -1 indicates all ICMP/ICMPv6 types. If you specify all ICMP/ICMPv6 types, you must specify all codes.

", "locationName":"fromPort" }, "IpProtocol":{ @@ -12059,7 +12860,7 @@ }, "ToPort":{ "shape":"Integer", - "documentation":"

The end of port range for the TCP and UDP protocols, or an ICMP/ICMPv6 code. A value of -1 indicates all ICMP/ICMPv6 codes for the specified ICMP type.

", + "documentation":"

The end of the port range for the TCP and UDP protocols, or an ICMP/ICMPv6 code. A value of -1 indicates all ICMP/ICMPv6 codes for the specified ICMP type. If you specify all ICMP/ICMPv6 types, you must specify all codes.

", "locationName":"toPort" }, "UserIdGroupPairs":{ @@ -12068,7 +12869,7 @@ "locationName":"groups" } }, - "documentation":"

Describes a security group rule.

" + "documentation":"

Describes a set of permissions for a security group rule.

" }, "IpPermissionList":{ "type":"list", @@ -12082,8 +12883,13 @@ "members":{ "CidrIp":{ "shape":"String", - "documentation":"

The IPv4 CIDR range. You can either specify a CIDR range or a source security group, not both. To specify a single IPv4 address, use the /32 prefix.

", + "documentation":"

The IPv4 CIDR range. You can either specify a CIDR range or a source security group, not both. To specify a single IPv4 address, use the /32 prefix length.

", "locationName":"cidrIp" + }, + "Description":{ + "shape":"String", + "documentation":"

A description for the security group rule that references this IPv4 address range.

Constraints: Up to 255 characters in length. Allowed characters are a-z, A-Z, 0-9, spaces, and ._-:/()#,@[]+=;{}!$*

", + "locationName":"description" } }, "documentation":"

Describes an IPv4 range.

" @@ -12133,8 +12939,13 @@ "members":{ "CidrIpv6":{ "shape":"String", - "documentation":"

The IPv6 CIDR range. You can either specify a CIDR range or a source security group, not both. To specify a single IPv6 address, use the /128 prefix.

", + "documentation":"

The IPv6 CIDR range. You can either specify a CIDR range or a source security group, not both. To specify a single IPv6 address, use the /128 prefix length.

", "locationName":"cidrIpv6" + }, + "Description":{ + "shape":"String", + "documentation":"

A description for the security group rule that references this IPv6 address range.

Constraints: Up to 255 characters in length. Allowed characters are a-z, A-Z, 0-9, spaces, and ._-:/()#,@[]+=;{}!$*

", + "locationName":"description" } }, "documentation":"

[EC2-VPC only] Describes an IPv6 range.

" @@ -12254,7 +13065,7 @@ }, "BlockDeviceMappings":{ "shape":"BlockDeviceMappingList", - "documentation":"

One or more block device mapping entries.

Although you can specify encrypted EBS volumes in this block device mapping for your Spot Instances, these volumes are not encrypted.

", + "documentation":"

One or more block device mapping entries.

", "locationName":"blockDeviceMapping" }, "EbsOptimized":{ @@ -12340,12 +13151,145 @@ "closed" ] }, + "LoadBalancersConfig":{ + "type":"structure", + "members":{ + "ClassicLoadBalancersConfig":{ + "shape":"ClassicLoadBalancersConfig", + "documentation":"

The Classic Load Balancers.

", + "locationName":"classicLoadBalancersConfig" + }, + "TargetGroupsConfig":{ + "shape":"TargetGroupsConfig", + "documentation":"

The target groups.

", + "locationName":"targetGroupsConfig" + } + }, + "documentation":"

Describes the Classic Load Balancers and target groups to attach to a Spot fleet request.

" + }, + "LoadPermission":{ + "type":"structure", + "members":{ + "UserId":{ + "shape":"String", + "documentation":"

The AWS account ID.

", + "locationName":"userId" + }, + "Group":{ + "shape":"PermissionGroup", + "documentation":"

The name of the group.

", + "locationName":"group" + } + }, + "documentation":"

Describes a load permission.

" + }, + "LoadPermissionList":{ + "type":"list", + "member":{ + "shape":"LoadPermission", + "locationName":"item" + } + }, + "LoadPermissionListRequest":{ + "type":"list", + "member":{ + "shape":"LoadPermissionRequest", + "locationName":"item" + } + }, + "LoadPermissionModifications":{ + "type":"structure", + "members":{ + "Add":{ + "shape":"LoadPermissionListRequest", + "documentation":"

The load permissions to add.

" + }, + "Remove":{ + "shape":"LoadPermissionListRequest", + "documentation":"

The load permissions to remove.

" + } + }, + "documentation":"

Describes modifications to the load permissions of an Amazon FPGA image (AFI).

" + }, + "LoadPermissionRequest":{ + "type":"structure", + "members":{ + "Group":{ + "shape":"PermissionGroup", + "documentation":"

The name of the group.

" + }, + "UserId":{ + "shape":"String", + "documentation":"

The AWS account ID.

" + } + }, + "documentation":"

Describes a load permission.

" + }, "Long":{"type":"long"}, "MaxResults":{ "type":"integer", "max":255, "min":5 }, + "ModifyFpgaImageAttributeRequest":{ + "type":"structure", + "required":["FpgaImageId"], + "members":{ + "DryRun":{ + "shape":"Boolean", + "documentation":"

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

" + }, + "FpgaImageId":{ + "shape":"String", + "documentation":"

The ID of the AFI.

" + }, + "Attribute":{ + "shape":"FpgaImageAttributeName", + "documentation":"

The name of the attribute.

" + }, + "OperationType":{ + "shape":"OperationType", + "documentation":"

The operation type.

" + }, + "UserIds":{ + "shape":"UserIdStringList", + "documentation":"

One or more AWS account IDs. This parameter is valid only when modifying the loadPermission attribute.

", + "locationName":"UserId" + }, + "UserGroups":{ + "shape":"UserGroupStringList", + "documentation":"

One or more user groups. This parameter is valid only when modifying the loadPermission attribute.

", + "locationName":"UserGroup" + }, + "ProductCodes":{ + "shape":"ProductCodeStringList", + "documentation":"

One or more product codes. After you add a product code to an AFI, it can't be removed. This parameter is valid only when modifying the productCodes attribute.

", + "locationName":"ProductCode" + }, + "LoadPermission":{ + "shape":"LoadPermissionModifications", + "documentation":"

The load permission for the AFI.

" + }, + "Description":{ + "shape":"String", + "documentation":"

A description for the AFI.

" + }, + "Name":{ + "shape":"String", + "documentation":"

A name for the AFI.

" + } + } + }, + "ModifyFpgaImageAttributeResult":{ + "type":"structure", + "members":{ + "FpgaImageAttribute":{ + "shape":"FpgaImageAttribute", + "documentation":"

Information about the attribute.

", + "locationName":"fpgaImageAttribute" + } + } + }, "ModifyHostsRequest":{ "type":"structure", "required":[ @@ -12432,11 +13376,11 @@ "members":{ "Attribute":{ "shape":"String", - "documentation":"

The name of the attribute to modify.

" + "documentation":"

The name of the attribute to modify. The valid values are description, launchPermission, and productCodes.

" }, "Description":{ "shape":"AttributeValue", - "documentation":"

A description for the AMI.

" + "documentation":"

A new description for the AMI.

" }, "ImageId":{ "shape":"String", @@ -12444,30 +13388,30 @@ }, "LaunchPermission":{ "shape":"LaunchPermissionModifications", - "documentation":"

A launch permission modification.

" + "documentation":"

A new launch permission for the AMI.

" }, "OperationType":{ "shape":"OperationType", - "documentation":"

The operation type.

" + "documentation":"

The operation type. This parameter can be used only when the Attribute parameter is launchPermission.

" }, "ProductCodes":{ "shape":"ProductCodeStringList", - "documentation":"

One or more product codes. After you add a product code to an AMI, it can't be removed. This is only valid when modifying the productCodes attribute.

", + "documentation":"

One or more DevPay product codes. After you add a product code to an AMI, it can't be removed.

", "locationName":"ProductCode" }, "UserGroups":{ "shape":"UserGroupStringList", - "documentation":"

One or more user groups. This is only valid when modifying the launchPermission attribute.

", + "documentation":"

One or more user groups. This parameter can be used only when the Attribute parameter is launchPermission.

", "locationName":"UserGroup" }, "UserIds":{ "shape":"UserIdStringList", - "documentation":"

One or more AWS account IDs. This is only valid when modifying the launchPermission attribute.

", + "documentation":"

One or more AWS account IDs. This parameter can be used only when the Attribute parameter is launchPermission.

", "locationName":"UserId" }, "Value":{ "shape":"String", - "documentation":"

The value of the attribute being modified. This is only valid when modifying the description attribute.

" + "documentation":"

The value of the attribute being modified. This parameter can be used only when the Attribute parameter is description or productCodes.

" }, "DryRun":{ "shape":"Boolean", @@ -12483,7 +13427,7 @@ "members":{ "SourceDestCheck":{ "shape":"AttributeBooleanValue", - "documentation":"

Specifies whether source/destination checking is enabled. A value of true means that checking is enabled, and false means checking is disabled. This value must be false for a NAT instance to perform NAT.

" + "documentation":"

Specifies whether source/destination checking is enabled. A value of true means that checking is enabled, and false means that checking is disabled. This value must be false for a NAT instance to perform NAT.

" }, "Attribute":{ "shape":"InstanceAttributeName", @@ -12497,7 +13441,7 @@ }, "DisableApiTermination":{ "shape":"AttributeBooleanValue", - "documentation":"

If the value is true, you can't terminate the instance using the Amazon EC2 console, CLI, or API; otherwise, you can. You cannot use this paramater for Spot Instances.

", + "documentation":"

If the value is true, you can't terminate the instance using the Amazon EC2 console, CLI, or API; otherwise, you can. You cannot use this parameter for Spot Instances.

", "locationName":"disableApiTermination" }, "DryRun":{ @@ -12507,7 +13451,7 @@ }, "EbsOptimized":{ "shape":"AttributeBooleanValue", - "documentation":"

Specifies whether the instance is optimized for EBS I/O. This optimization provides dedicated throughput to Amazon EBS and an optimized configuration stack to provide optimal EBS I/O performance. This optimization isn't available with all instance types. Additional usage charges apply when using an EBS Optimized instance.

", + "documentation":"

Specifies whether the instance is optimized for Amazon EBS I/O. This optimization provides dedicated throughput to Amazon EBS and an optimized configuration stack to provide optimal EBS I/O performance. This optimization isn't available with all instance types. Additional usage charges apply when using an EBS Optimized instance.

", "locationName":"ebsOptimized" }, "EnaSupport":{ @@ -12552,7 +13496,7 @@ }, "UserData":{ "shape":"BlobAttributeValue", - "documentation":"

Changes the instance's user data to the specified value. If you are using an AWS SDK or command line tool, Base64-encoding is performed for you, and you can load the text from a file. Otherwise, you must provide Base64-encoded text.

", + "documentation":"

Changes the instance's user data to the specified value. If you are using an AWS SDK or command line tool, base64-encoding is performed for you, and you can load the text from a file. Otherwise, you must provide base64-encoded text.

", "locationName":"userData" }, "Value":{ @@ -12842,31 +13786,55 @@ "type":"structure", "required":["VpcEndpointId"], "members":{ - "AddRouteTableIds":{ - "shape":"ValueStringList", - "documentation":"

One or more route tables IDs to associate with the endpoint.

", - "locationName":"AddRouteTableId" - }, "DryRun":{ "shape":"Boolean", "documentation":"

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

" }, + "VpcEndpointId":{ + "shape":"String", + "documentation":"

The ID of the endpoint.

" + }, + "ResetPolicy":{ + "shape":"Boolean", + "documentation":"

(Gateway endpoint) Specify true to reset the policy document to the default policy. The default policy allows full access to the service.

" + }, "PolicyDocument":{ "shape":"String", - "documentation":"

A policy document to attach to the endpoint. The policy must be in valid JSON format.

" + "documentation":"

(Gateway endpoint) A policy document to attach to the endpoint. The policy must be in valid JSON format.

" + }, + "AddRouteTableIds":{ + "shape":"ValueStringList", + "documentation":"

(Gateway endpoint) One or more route table IDs to associate with the endpoint.

", + "locationName":"AddRouteTableId" }, "RemoveRouteTableIds":{ "shape":"ValueStringList", - "documentation":"

One or more route table IDs to disassociate from the endpoint.

", + "documentation":"

(Gateway endpoint) One or more route table IDs to disassociate from the endpoint.

", "locationName":"RemoveRouteTableId" }, - "ResetPolicy":{ - "shape":"Boolean", - "documentation":"

Specify true to reset the policy document to the default policy. The default policy allows access to the service.

" + "AddSubnetIds":{ + "shape":"ValueStringList", + "documentation":"

(Interface endpoint) One or more subnet IDs in which to serve the endpoint.

", + "locationName":"AddSubnetId" }, - "VpcEndpointId":{ - "shape":"String", - "documentation":"

The ID of the endpoint.

" + "RemoveSubnetIds":{ + "shape":"ValueStringList", + "documentation":"

(Interface endpoint) One or more subnet IDs in which to remove the endpoint.

", + "locationName":"RemoveSubnetId" + }, + "AddSecurityGroupIds":{ + "shape":"ValueStringList", + "documentation":"

(Interface endpoint) One or more security group IDs to associate with the network interface.

", + "locationName":"AddSecurityGroupId" + }, + "RemoveSecurityGroupIds":{ + "shape":"ValueStringList", + "documentation":"

(Interface endpoint) One or more security group IDs to disassociate from the network interface.

", + "locationName":"RemoveSecurityGroupId" + }, + "PrivateDnsEnabled":{ + "shape":"Boolean", + "documentation":"

(Interface endpoint) Indicate whether a private hosted zone is associated with the VPC.

" } }, "documentation":"

Contains the parameters for ModifyVpcEndpoint.

" @@ -12879,8 +13847,7 @@ "documentation":"

Returns true if the request succeeds; otherwise, it returns an error.

", "locationName":"return" } - }, - "documentation":"

Contains the output of ModifyVpcEndpoint.

" + } }, "ModifyVpcPeeringConnectionOptionsRequest":{ "type":"structure", @@ -12919,6 +13886,39 @@ } } }, + "ModifyVpcTenancyRequest":{ + "type":"structure", + "required":[ + "VpcId", + "InstanceTenancy" + ], + "members":{ + "VpcId":{ + "shape":"String", + "documentation":"

The ID of the VPC.

" + }, + "InstanceTenancy":{ + "shape":"VpcTenancy", + "documentation":"

The instance tenancy attribute for the VPC.

" + }, + "DryRun":{ + "shape":"Boolean", + "documentation":"

Checks whether you have the required permissions for the operation, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

" + } + }, + "documentation":"

Contains the parameters for ModifyVpcTenancy.

" + }, + "ModifyVpcTenancyResult":{ + "type":"structure", + "members":{ + "ReturnValue":{ + "shape":"Boolean", + "documentation":"

Returns true if the request succeeds; otherwise, returns an error.

", + "locationName":"return" + } + }, + "documentation":"

Contains the output of ModifyVpcTenancy.

" + }, "MonitorInstancesRequest":{ "type":"structure", "required":["InstanceIds"], @@ -13082,6 +14082,11 @@ "shape":"String", "documentation":"

The ID of the VPC in which the NAT gateway is located.

", "locationName":"vpcId" + }, + "Tags":{ + "shape":"TagList", + "documentation":"

The tags for the NAT gateway.

", + "locationName":"tagSet" } }, "documentation":"

Describes a NAT gateway.

" @@ -13495,6 +14500,78 @@ "locationName":"item" } }, + "NetworkInterfacePermission":{ + "type":"structure", + "members":{ + "NetworkInterfacePermissionId":{ + "shape":"String", + "documentation":"

The ID of the network interface permission.

", + "locationName":"networkInterfacePermissionId" + }, + "NetworkInterfaceId":{ + "shape":"String", + "documentation":"

The ID of the network interface.

", + "locationName":"networkInterfaceId" + }, + "AwsAccountId":{ + "shape":"String", + "documentation":"

The AWS account ID.

", + "locationName":"awsAccountId" + }, + "AwsService":{ + "shape":"String", + "documentation":"

The AWS service.

", + "locationName":"awsService" + }, + "Permission":{ + "shape":"InterfacePermissionType", + "documentation":"

The type of permission.

", + "locationName":"permission" + }, + "PermissionState":{ + "shape":"NetworkInterfacePermissionState", + "documentation":"

Information about the state of the permission.

", + "locationName":"permissionState" + } + }, + "documentation":"

Describes a permission for a network interface.

" + }, + "NetworkInterfacePermissionIdList":{ + "type":"list", + "member":{"shape":"String"} + }, + "NetworkInterfacePermissionList":{ + "type":"list", + "member":{ + "shape":"NetworkInterfacePermission", + "locationName":"item" + } + }, + "NetworkInterfacePermissionState":{ + "type":"structure", + "members":{ + "State":{ + "shape":"NetworkInterfacePermissionStateCode", + "documentation":"

The state of the permission.

", + "locationName":"state" + }, + "StatusMessage":{ + "shape":"String", + "documentation":"

A status message, if applicable.

", + "locationName":"statusMessage" + } + }, + "documentation":"

Describes the state of a network interface permission.

" + }, + "NetworkInterfacePermissionStateCode":{ + "type":"string", + "enum":[ + "pending", + "granted", + "revoking", + "revoked" + ] + }, "NetworkInterfacePrivateIpAddress":{ "type":"structure", "members":{ @@ -13813,6 +14890,11 @@ "PrefixListId":{ "type":"structure", "members":{ + "Description":{ + "shape":"String", + "documentation":"

A description for the security group rule that references this prefix list ID.

Constraints: Up to 255 characters in length. Allowed characters are a-z, A-Z, 0-9, spaces, and ._-:/()#,@[]+=;{}!$*

", + "locationName":"description" + }, "PrefixListId":{ "shape":"String", "documentation":"

The ID of the prefix.

", @@ -14265,7 +15347,10 @@ }, "PurchaseSet":{ "type":"list", - "member":{"shape":"Purchase"} + "member":{ + "shape":"Purchase", + "locationName":"item" + } }, "PurchasedScheduledInstanceSet":{ "type":"list", @@ -14419,7 +15504,7 @@ }, "RootDeviceName":{ "shape":"String", - "documentation":"

The name of the root device (for example, /dev/sda1, or /dev/xvda).

", + "documentation":"

The device name of the root device volume (for example, /dev/sda1).

", "locationName":"rootDeviceName" }, "SriovNetSupport":{ @@ -14784,7 +15869,7 @@ }, "ReasonCodes":{ "shape":"ReasonCodesList", - "documentation":"

One or more reason codes that describes the health state of your instance.

", + "documentation":"

One or more reason codes that describe the health state of your instance.

", "locationName":"reasonCode" }, "StartTime":{ @@ -14907,6 +15992,10 @@ "shape":"DateTime", "documentation":"

The end date of the request. If this is a one-time request, the request remains active until all instances launch, the request is canceled, or this date is reached. If the request is persistent, it remains active until it is canceled or this date and time is reached.

Default: The request is effective indefinitely.

", "locationName":"validUntil" + }, + "InstanceInterruptionBehavior":{ + "shape":"InstanceInterruptionBehavior", + "documentation":"

Indicates whether a Spot instance stops or terminates when it is interrupted.

" } }, "documentation":"

Contains the parameters for RequestSpotInstances.

" @@ -14942,7 +16031,7 @@ }, "BlockDeviceMappings":{ "shape":"BlockDeviceMappingList", - "documentation":"

One or more block device mapping entries.

Although you can specify encrypted EBS volumes in this block device mapping for your Spot Instances, these volumes are not encrypted.

", + "documentation":"

One or more block device mapping entries. You can't specify both a snapshot ID and an encryption value. This is because only blank volumes can be encrypted on creation. If a snapshot is the basis for a volume, it is not blank and its encryption status is used for the volume encryption status.

", "locationName":"blockDeviceMapping" }, "EbsOptimized":{ @@ -15540,6 +16629,38 @@ "locationName":"item" } }, + "ResetFpgaImageAttributeName":{ + "type":"string", + "enum":["loadPermission"] + }, + "ResetFpgaImageAttributeRequest":{ + "type":"structure", + "required":["FpgaImageId"], + "members":{ + "DryRun":{ + "shape":"Boolean", + "documentation":"

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

" + }, + "FpgaImageId":{ + "shape":"String", + "documentation":"

The ID of the AFI.

" + }, + "Attribute":{ + "shape":"ResetFpgaImageAttributeName", + "documentation":"

The attribute.

" + } + } + }, + "ResetFpgaImageAttributeResult":{ + "type":"structure", + "members":{ + "Return":{ + "shape":"Boolean", + "documentation":"

Returns true if the request succeeds; otherwise, returns an error.

", + "locationName":"return" + } + } + }, "ResetImageAttributeName":{ "type":"string", "enum":["launchPermission"] @@ -15730,37 +16851,37 @@ }, "IpPermissions":{ "shape":"IpPermissionList", - "documentation":"

A set of IP permissions. You can't specify a destination security group and a CIDR IP address range.

", + "documentation":"

One or more sets of IP permissions. You can't specify a destination security group and a CIDR IP address range in the same set of permissions.

", "locationName":"ipPermissions" }, "CidrIp":{ "shape":"String", - "documentation":"

The CIDR IP address range. We recommend that you specify the CIDR range in a set of IP permissions instead.

", + "documentation":"

Not supported. Use a set of IP permissions to specify the CIDR.

", "locationName":"cidrIp" }, "FromPort":{ "shape":"Integer", - "documentation":"

The start of port range for the TCP and UDP protocols, or an ICMP type number. We recommend that you specify the port range in a set of IP permissions instead.

", + "documentation":"

Not supported. Use a set of IP permissions to specify the port.

", "locationName":"fromPort" }, "IpProtocol":{ "shape":"String", - "documentation":"

The IP protocol name or number. We recommend that you specify the protocol in a set of IP permissions instead.

", + "documentation":"

Not supported. Use a set of IP permissions to specify the protocol name or number.

", "locationName":"ipProtocol" }, "ToPort":{ "shape":"Integer", - "documentation":"

The end of port range for the TCP and UDP protocols, or an ICMP type number. We recommend that you specify the port range in a set of IP permissions instead.

", + "documentation":"

Not supported. Use a set of IP permissions to specify the port.

", "locationName":"toPort" }, "SourceSecurityGroupName":{ "shape":"String", - "documentation":"

The name of a destination security group. To revoke outbound access to a destination security group, we recommend that you use a set of IP permissions instead.

", + "documentation":"

Not supported. Use a set of IP permissions to specify a destination security group.

", "locationName":"sourceSecurityGroupName" }, "SourceSecurityGroupOwnerId":{ "shape":"String", - "documentation":"

The AWS account number for a destination security group. To revoke outbound access to a destination security group, we recommend that you use a set of IP permissions instead.

", + "documentation":"

Not supported. Use a set of IP permissions to specify a destination security group.

", "locationName":"sourceSecurityGroupOwnerId" } }, @@ -15779,15 +16900,15 @@ }, "GroupId":{ "shape":"String", - "documentation":"

The ID of the security group. Required for a security group in a nondefault VPC.

" + "documentation":"

The ID of the security group. You must specify either the security group ID or the security group name in the request. For security groups in a nondefault VPC, you must specify the security group ID.

" }, "GroupName":{ "shape":"String", - "documentation":"

[EC2-Classic, default VPC] The name of the security group.

" + "documentation":"

[EC2-Classic, default VPC] The name of the security group. You must specify either the security group ID or the security group name in the request.

" }, "IpPermissions":{ "shape":"IpPermissionList", - "documentation":"

A set of IP permissions. You can't specify a source security group and a CIDR IP address range.

" + "documentation":"

One or more sets of IP permissions. You can't specify a source security group and a CIDR IP address range in the same set of permissions.

" }, "IpProtocol":{ "shape":"String", @@ -16006,7 +17127,7 @@ "members":{ "BlockDeviceMappings":{ "shape":"BlockDeviceMappingRequestList", - "documentation":"

The block device mapping.

Supplying both a snapshot ID and an encryption value as arguments for block-device mapping results in an error. This is because only blank volumes can be encrypted on start, and these are not created from a snapshot. If a snapshot is the basis for the volume, it contains data by definition and its encryption status cannot be changed using this action.

", + "documentation":"

One or more block device mapping entries. You can't specify both a snapshot ID and an encryption value. This is because only blank volumes can be encrypted on creation. If a snapshot is the basis for a volume, it is not blank and its encryption status is used for the volume encryption status.

", "locationName":"BlockDeviceMapping" }, "ImageId":{ @@ -16070,7 +17191,7 @@ }, "UserData":{ "shape":"String", - "documentation":"

The user data to make available to the instance. For more information, see Running Commands on Your Linux Instance at Launch (Linux) and Adding User Data (Windows). If you are using an AWS SDK or command line tool, Base64-encoding is performed for you, and you can load the text from a file. Otherwise, you must provide Base64-encoded text.

" + "documentation":"

The user data to make available to the instance. For more information, see Running Commands on Your Linux Instance at Launch (Linux) and Adding User Data (Windows). If you are using a command line tool, base64-encoding is performed for you, and you can load the text from a file. Otherwise, you must provide base64-encoded text.

" }, "AdditionalInfo":{ "shape":"String", @@ -16094,7 +17215,7 @@ }, "EbsOptimized":{ "shape":"Boolean", - "documentation":"

Indicates whether the instance is optimized for EBS I/O. This optimization provides dedicated throughput to Amazon EBS and an optimized configuration stack to provide optimal EBS I/O performance. This optimization isn't available with all instance types. Additional usage charges apply when using an EBS-optimized instance.

Default: false

", + "documentation":"

Indicates whether the instance is optimized for Amazon EBS I/O. This optimization provides dedicated throughput to Amazon EBS and an optimized configuration stack to provide optimal Amazon EBS I/O performance. This optimization isn't available with all instance types. Additional usage charges apply when using an EBS-optimized instance.

Default: false

", "locationName":"ebsOptimized" }, "IamInstanceProfile":{ @@ -16117,6 +17238,10 @@ "documentation":"

[EC2-VPC] The primary IPv4 address. You must specify a value from the IPv4 address range of the subnet.

Only one private IP address can be designated as primary. You can't specify this option if you've specified the option to designate a private IP address as the primary IP address in a network interface specification. You cannot specify this option if you're launching more than one instance in the request.

", "locationName":"privateIpAddress" }, + "ElasticGpuSpecification":{ + "shape":"ElasticGpuSpecifications", + "documentation":"

An Elastic GPU to associate with the instance.

" + }, "TagSpecifications":{ "shape":"TagSpecificationList", "documentation":"

The tags to apply to the resources during launch. You can tag instances and volumes. The specified tags are applied to all instances or volumes that are created during launch.

", @@ -16433,7 +17558,7 @@ "members":{ "DeviceName":{ "shape":"String", - "documentation":"

The device name exposed to the instance (for example, /dev/sdh or xvdh).

" + "documentation":"

The device name (for example, /dev/sdh or xvdh).

" }, "Ebs":{ "shape":"ScheduledInstancesEbs", @@ -16445,7 +17570,7 @@ }, "VirtualName":{ "shape":"String", - "documentation":"

The virtual device name (ephemeralN). Instance store volumes are numbered starting from 0. An instance type with two available instance store volumes can specify mappings for ephemeral0 and ephemeral1.The number of available instance store volumes depends on the instance type. After you connect to the instance, you must mount the volume.

Constraints: For M3 instances, you must specify instance store volumes in the block device mapping for the instance. When you launch an M3 instance, we ignore any instance store volumes specified in the block device mapping for the AMI.

" + "documentation":"

The virtual device name (ephemeralN). Instance store volumes are numbered starting from 0. An instance type with two available instance store volumes can specify mappings for ephemeral0 and ephemeral1. The number of available instance store volumes depends on the instance type. After you connect to the instance, you must mount the volume.

Constraints: For M3 instances, you must specify instance store volumes in the block device mapping for the instance. When you launch an M3 instance, we ignore any instance store volumes specified in the block device mapping for the AMI.

" } }, "documentation":"

Describes a block device mapping for a Scheduled Instance.

" @@ -16746,6 +17871,22 @@ "locationName":"SecurityGroupId" } }, + "SecurityGroupIdentifier":{ + "type":"structure", + "members":{ + "GroupId":{ + "shape":"String", + "documentation":"

The ID of the security group.

", + "locationName":"groupId" + }, + "GroupName":{ + "shape":"String", + "documentation":"

The name of the security group.

", + "locationName":"groupName" + } + }, + "documentation":"

Describes a security group.

" + }, "SecurityGroupList":{ "type":"list", "member":{ @@ -16792,6 +17933,84 @@ "locationName":"SecurityGroup" } }, + "ServiceDetail":{ + "type":"structure", + "members":{ + "ServiceName":{ + "shape":"String", + "documentation":"

The Amazon Resource Name (ARN) of the service.

", + "locationName":"serviceName" + }, + "ServiceType":{ + "shape":"ServiceTypeDetailSet", + "documentation":"

The type of service.

", + "locationName":"serviceType" + }, + "AvailabilityZones":{ + "shape":"ValueStringList", + "documentation":"

The Availability Zones in which the service is available.

", + "locationName":"availabilityZoneSet" + }, + "Owner":{ + "shape":"String", + "documentation":"

The AWS account ID of the service owner.

", + "locationName":"owner" + }, + "BaseEndpointDnsNames":{ + "shape":"ValueStringList", + "documentation":"

The DNS names for the service.

", + "locationName":"baseEndpointDnsNameSet" + }, + "PrivateDnsName":{ + "shape":"String", + "documentation":"

The private DNS name for the service.

", + "locationName":"privateDnsName" + }, + "VpcEndpointPolicySupported":{ + "shape":"Boolean", + "documentation":"

Indicates whether the service supports endpoint policies.

", + "locationName":"vpcEndpointPolicySupported" + }, + "AcceptanceRequired":{ + "shape":"Boolean", + "documentation":"

Indicates whether VPC endpoint connection requests to the service must be accepted by the service owner.

", + "locationName":"acceptanceRequired" + } + }, + "documentation":"

Describes a service.

" + }, + "ServiceDetailSet":{ + "type":"list", + "member":{ + "shape":"ServiceDetail", + "locationName":"item" + } + }, + "ServiceType":{ + "type":"string", + "enum":[ + "Interface", + "Gateway" + ] + }, + "ServiceTypeDetail":{ + "type":"structure", + "members":{ + "ServiceType":{ + "shape":"ServiceType", + "documentation":"

The type of service.

", + "locationName":"serviceType" + } + }, + "documentation":"

Describes the type of service for a VPC endpoint.

" + }, + "ServiceTypeDetailSet":{ + "type":"list", + "member":{ + "shape":"ServiceTypeDetail", + "locationName":"item" + } + }, "ShutdownBehavior":{ "type":"string", "enum":[ @@ -17118,7 +18337,7 @@ }, "BlockDeviceMappings":{ "shape":"BlockDeviceMappingList", - "documentation":"

One or more block device mapping entries.

", + "documentation":"

One or more block device mapping entries. You can't specify both a snapshot ID and an encryption value. This is because only blank volumes can be encrypted on creation. If a snapshot is the basis for a volume, it is not blank and its encryption status is used for the volume encryption status.

", "locationName":"blockDeviceMapping" }, "EbsOptimized":{ @@ -17190,6 +18409,11 @@ "shape":"Double", "documentation":"

The number of units provided by the specified instance type. These are the same units that you chose to set the target capacity in terms of (instances or a performance characteristic such as vCPUs, memory, or I/O).

If the target capacity divided by this value is not a whole number, we round the number of instances to the next whole number. If this value is not specified, the default is 1.

", "locationName":"weightedCapacity" + }, + "TagSpecifications":{ + "shape":"SpotFleetTagSpecificationList", + "documentation":"

The tags to apply during creation.

", + "locationName":"tagSpecificationSet" } }, "documentation":"

Describes the launch specification for one or more Spot instances.

" @@ -17315,6 +18539,16 @@ "shape":"Boolean", "documentation":"

Indicates whether Spot fleet should replace unhealthy instances.

", "locationName":"replaceUnhealthyInstances" + }, + "InstanceInterruptionBehavior":{ + "shape":"InstanceInterruptionBehavior", + "documentation":"

Indicates whether a Spot instance stops or terminates when it is interrupted.

", + "locationName":"instanceInterruptionBehavior" + }, + "LoadBalancersConfig":{ + "shape":"LoadBalancersConfig", + "documentation":"

One or more Classic Load Balancers and target groups to attach to the Spot fleet request. Spot fleet registers the running Spot instances with the specified Classic Load Balancers and target groups.

With Network Load Balancers, Spot fleet cannot register instances that have the following instance types: C1, CC1, CC2, CG1, CG2, CR1, CS1, G1, G2, HI1, HS1, M1, M2, M3, and T1.

", + "locationName":"loadBalancersConfig" } }, "documentation":"

Describes the configuration of a Spot fleet request.

" @@ -17326,6 +18560,29 @@ "locationName":"item" } }, + "SpotFleetTagSpecification":{ + "type":"structure", + "members":{ + "ResourceType":{ + "shape":"ResourceType", + "documentation":"

The type of resource. Currently, the only resource type that is supported is instance.

", + "locationName":"resourceType" + }, + "Tags":{ + "shape":"TagList", + "documentation":"

The tags.

", + "locationName":"tag" + } + }, + "documentation":"

The tags for a Spot fleet resource.

" + }, + "SpotFleetTagSpecificationList":{ + "type":"list", + "member":{ + "shape":"SpotFleetTagSpecification", + "locationName":"item" + } + }, "SpotInstanceRequest":{ "type":"structure", "members":{ @@ -17418,6 +18675,11 @@ "shape":"DateTime", "documentation":"

The end date of the request, in UTC format (for example, YYYY-MM-DDTHH:MM:SSZ). If this is a one-time request, it remains active until all instances launch, the request is canceled, or this date is reached. If the request is persistent, it remains active until it is canceled or this date is reached.

", "locationName":"validUntil" + }, + "InstanceInterruptionBehavior":{ + "shape":"InstanceInterruptionBehavior", + "documentation":"

Indicates whether a Spot instance stops or terminates when it is interrupted.

", + "locationName":"instanceInterruptionBehavior" } }, "documentation":"

Describes a Spot instance request.

" @@ -17672,10 +18934,14 @@ "State":{ "type":"string", "enum":[ + "PendingAcceptance", "Pending", "Available", "Deleting", - "Deleted" + "Deleted", + "Rejected", + "Failed", + "Expired" ] }, "StateReason":{ @@ -17688,7 +18954,7 @@ }, "Message":{ "shape":"String", - "documentation":"

The message for the state change.

", + "documentation":"

The message for the state change.

", "locationName":"message" } }, @@ -18037,6 +19303,39 @@ "locationName":"TargetConfigurationRequest" } }, + "TargetGroup":{ + "type":"structure", + "required":["Arn"], + "members":{ + "Arn":{ + "shape":"String", + "documentation":"

The Amazon Resource Name (ARN) of the target group.

", + "locationName":"arn" + } + }, + "documentation":"

Describes a load balancer target group.

" + }, + "TargetGroups":{ + "type":"list", + "member":{ + "shape":"TargetGroup", + "locationName":"item" + }, + "max":5, + "min":1 + }, + "TargetGroupsConfig":{ + "type":"structure", + "required":["TargetGroups"], + "members":{ + "TargetGroups":{ + "shape":"TargetGroups", + "documentation":"

One or more target groups.

", + "locationName":"targetGroups" + } + }, + "documentation":"

Describes the target groups to attach to a Spot fleet. Spot fleet registers the running Spot instances with these target groups.

" + }, "TargetReservationValue":{ "type":"structure", "members":{ @@ -18111,6 +19410,13 @@ "ALL" ] }, + "TunnelOptionsList":{ + "type":"list", + "member":{ + "shape":"VpnTunnelOptionsSpecification", + "locationName":"item" + } + }, "UnassignIpv6AddressesRequest":{ "type":"structure", "required":[ @@ -18244,6 +19550,74 @@ "locationName":"item" } }, + "UpdateSecurityGroupRuleDescriptionsEgressRequest":{ + "type":"structure", + "required":["IpPermissions"], + "members":{ + "DryRun":{ + "shape":"Boolean", + "documentation":"

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

" + }, + "GroupId":{ + "shape":"String", + "documentation":"

The ID of the security group. You must specify either the security group ID or the security group name in the request. For security groups in a nondefault VPC, you must specify the security group ID.

" + }, + "GroupName":{ + "shape":"String", + "documentation":"

[Default VPC] The name of the security group. You must specify either the security group ID or the security group name in the request.

" + }, + "IpPermissions":{ + "shape":"IpPermissionList", + "documentation":"

The IP permissions for the security group rule.

" + } + }, + "documentation":"

Contains the parameters for UpdateSecurityGroupRuleDescriptionsEgress.

" + }, + "UpdateSecurityGroupRuleDescriptionsEgressResult":{ + "type":"structure", + "members":{ + "Return":{ + "shape":"Boolean", + "documentation":"

Returns true if the request succeeds; otherwise, returns an error.

", + "locationName":"return" + } + }, + "documentation":"

Contains the output of UpdateSecurityGroupRuleDescriptionsEgress.

" + }, + "UpdateSecurityGroupRuleDescriptionsIngressRequest":{ + "type":"structure", + "required":["IpPermissions"], + "members":{ + "DryRun":{ + "shape":"Boolean", + "documentation":"

Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation. Otherwise, it is UnauthorizedOperation.

" + }, + "GroupId":{ + "shape":"String", + "documentation":"

The ID of the security group. You must specify either the security group ID or the security group name in the request. For security groups in a nondefault VPC, you must specify the security group ID.

" + }, + "GroupName":{ + "shape":"String", + "documentation":"

[EC2-Classic, default VPC] The name of the security group. You must specify either the security group ID or the security group name in the request.

" + }, + "IpPermissions":{ + "shape":"IpPermissionList", + "documentation":"

The IP permissions for the security group rule.

" + } + }, + "documentation":"

Contains the parameters for UpdateSecurityGroupRuleDescriptionsIngress.

" + }, + "UpdateSecurityGroupRuleDescriptionsIngressResult":{ + "type":"structure", + "members":{ + "Return":{ + "shape":"Boolean", + "documentation":"

Returns true if the request succeeds; otherwise, returns an error.

", + "locationName":"return" + } + }, + "documentation":"

Contains the output of UpdateSecurityGroupRuleDescriptionsIngress.

" + }, "UserBucket":{ "type":"structure", "members":{ @@ -18295,6 +19669,11 @@ "UserIdGroupPair":{ "type":"structure", "members":{ + "Description":{ + "shape":"String", + "documentation":"

A description for the security group rule that references this user ID group pair.

Constraints: Up to 255 characters in length. Allowed characters are a-z, A-Z, 0-9, spaces, and ._-:/()#,@[]+=;{}!$*

", + "locationName":"description" + }, "GroupId":{ "shape":"String", "documentation":"

The ID of the security group.

", @@ -18823,7 +20202,7 @@ "members":{ "CidrBlock":{ "shape":"String", - "documentation":"

The IPv4 CIDR block for the VPC.

", + "documentation":"

The primary IPv4 CIDR block for the VPC.

", "locationName":"cidrBlock" }, "DhcpOptionsId":{ @@ -18851,6 +20230,11 @@ "documentation":"

Information about the IPv6 CIDR blocks associated with the VPC.

", "locationName":"ipv6CidrBlockAssociationSet" }, + "CidrBlockAssociationSet":{ + "shape":"VpcCidrBlockAssociationSet", + "documentation":"

Information about the IPv4 CIDR blocks associated with the VPC.

", + "locationName":"cidrBlockAssociationSet" + }, "IsDefault":{ "shape":"Boolean", "documentation":"

Indicates whether the VPC is the default VPC.

", @@ -18894,6 +20278,34 @@ "enableDnsHostnames" ] }, + "VpcCidrBlockAssociation":{ + "type":"structure", + "members":{ + "AssociationId":{ + "shape":"String", + "documentation":"

The association ID for the IPv4 CIDR block.

", + "locationName":"associationId" + }, + "CidrBlock":{ + "shape":"String", + "documentation":"

The IPv4 CIDR block.

", + "locationName":"cidrBlock" + }, + "CidrBlockState":{ + "shape":"VpcCidrBlockState", + "documentation":"

Information about the state of the CIDR block.

", + "locationName":"cidrBlockState" + } + }, + "documentation":"

Describes an IPv4 CIDR block associated with a VPC.

" + }, + "VpcCidrBlockAssociationSet":{ + "type":"list", + "member":{ + "shape":"VpcCidrBlockAssociation", + "locationName":"item" + } + }, "VpcCidrBlockState":{ "type":"structure", "members":{ @@ -18959,20 +20371,20 @@ "VpcEndpoint":{ "type":"structure", "members":{ - "CreationTimestamp":{ - "shape":"DateTime", - "documentation":"

The date and time the VPC endpoint was created.

", - "locationName":"creationTimestamp" - }, - "PolicyDocument":{ + "VpcEndpointId":{ "shape":"String", - "documentation":"

The policy document associated with the endpoint.

", - "locationName":"policyDocument" + "documentation":"

The ID of the VPC endpoint.

", + "locationName":"vpcEndpointId" }, - "RouteTableIds":{ - "shape":"ValueStringList", - "documentation":"

One or more route tables associated with the endpoint.

", - "locationName":"routeTableIdSet" + "VpcEndpointType":{ + "shape":"VpcEndpointType", + "documentation":"

The type of endpoint.

", + "locationName":"vpcEndpointType" + }, + "VpcId":{ + "shape":"String", + "documentation":"

The ID of the VPC to which the endpoint is associated.

", + "locationName":"vpcId" }, "ServiceName":{ "shape":"String", @@ -18984,15 +20396,45 @@ "documentation":"

The state of the VPC endpoint.

", "locationName":"state" }, - "VpcEndpointId":{ + "PolicyDocument":{ "shape":"String", - "documentation":"

The ID of the VPC endpoint.

", - "locationName":"vpcEndpointId" + "documentation":"

The policy document associated with the endpoint, if applicable.

", + "locationName":"policyDocument" }, - "VpcId":{ - "shape":"String", - "documentation":"

The ID of the VPC to which the endpoint is associated.

", - "locationName":"vpcId" + "RouteTableIds":{ + "shape":"ValueStringList", + "documentation":"

(Gateway endpoint) One or more route tables associated with the endpoint.

", + "locationName":"routeTableIdSet" + }, + "SubnetIds":{ + "shape":"ValueStringList", + "documentation":"

(Interface endpoint) One or more subnets in which the endpoint is located.

", + "locationName":"subnetIdSet" + }, + "Groups":{ + "shape":"GroupIdentifierSet", + "documentation":"

(Interface endpoint) Information about the security groups associated with the network interface.

", + "locationName":"groupSet" + }, + "PrivateDnsEnabled":{ + "shape":"Boolean", + "documentation":"

(Interface endpoint) Indicates whether the VPC is associated with a private hosted zone.

", + "locationName":"privateDnsEnabled" + }, + "NetworkInterfaceIds":{ + "shape":"ValueStringList", + "documentation":"

(Interface endpoint) One or more network interfaces for the endpoint.

", + "locationName":"networkInterfaceIdSet" + }, + "DnsEntries":{ + "shape":"DnsEntrySet", + "documentation":"

(Interface endpoint) The DNS entries for the endpoint.

", + "locationName":"dnsEntrySet" + }, + "CreationTimestamp":{ + "shape":"DateTime", + "documentation":"

The date and time the VPC endpoint was created.

", + "locationName":"creationTimestamp" } }, "documentation":"

Describes a VPC endpoint.
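The reworked endpoint description now distinguishes interface and gateway endpoints. The sketch below, again assuming SDK for Java 2.x getters named after the members above (vpcEndpointType, networkInterfaceIds, routeTableIds), prints the type-specific attachments for each endpoint.

```java
import software.amazon.awssdk.services.ec2.Ec2Client;
import software.amazon.awssdk.services.ec2.model.VpcEndpoint;
import software.amazon.awssdk.services.ec2.model.VpcEndpointType;

public class DescribeEndpointsByType {
    public static void main(String[] args) {
        try (Ec2Client ec2 = Ec2Client.create()) {
            for (VpcEndpoint endpoint : ec2.describeVpcEndpoints().vpcEndpoints()) {
                if (endpoint.vpcEndpointType() == VpcEndpointType.INTERFACE) {
                    // Interface endpoints are backed by network interfaces in your subnets.
                    System.out.println(endpoint.vpcEndpointId() + " interface ENIs: "
                            + endpoint.networkInterfaceIds());
                } else {
                    // Gateway endpoints are attached to route tables instead.
                    System.out.println(endpoint.vpcEndpointId() + " route tables: "
                            + endpoint.routeTableIds());
                }
            }
        }
    }
}
```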

" @@ -19004,6 +20446,13 @@ "locationName":"item" } }, + "VpcEndpointType":{ + "type":"string", + "enum":[ + "Interface", + "Gateway" + ] + }, "VpcIdStringList":{ "type":"list", "member":{ @@ -19153,6 +20602,11 @@ "documentation":"

The IPv6 CIDR block for the VPC.

", "locationName":"ipv6CidrBlockSet" }, + "CidrBlockSet":{ + "shape":"CidrBlockSet", + "documentation":"

Information about the IPv4 CIDR blocks for the VPC.

", + "locationName":"cidrBlockSet" + }, "OwnerId":{ "shape":"String", "documentation":"

The AWS account ID of the VPC owner.

", @@ -19178,6 +20632,10 @@ "available" ] }, + "VpcTenancy":{ + "type":"string", + "enum":["default"] + }, "VpnConnection":{ "type":"structure", "members":{ @@ -19191,6 +20649,11 @@ "documentation":"

The ID of the customer gateway at your end of the VPN connection.

", "locationName":"customerGatewayId" }, + "Category":{ + "shape":"String", + "documentation":"

The category of the VPN connection. A value of VPN indicates an AWS VPN connection. A value of VPN-Classic indicates an AWS Classic VPN connection. For more information, see AWS Managed VPN Categories in the Amazon Virtual Private Cloud User Guide.

", + "locationName":"category" + }, "State":{ "shape":"VpnState", "documentation":"

The current state of the VPN connection.

", @@ -19264,8 +20727,12 @@ "members":{ "StaticRoutesOnly":{ "shape":"Boolean", - "documentation":"

Indicates whether the VPN connection uses static routes only. Static routes must be used for devices that don't support BGP.

", + "documentation":"

Indicate whether the VPN connection uses static routes only. If you are creating a VPN connection for a device that does not support BGP, you must specify true.

Default: false

", "locationName":"staticRoutesOnly" + }, + "TunnelOptions":{ + "shape":"TunnelOptionsList", + "documentation":"

The tunnel options for the VPN connection.

" } }, "documentation":"

Describes VPN connection options.

" @@ -19298,6 +20765,11 @@ "documentation":"

The ID of the virtual private gateway.

", "locationName":"vpnGatewayId" }, + "AmazonSideAsn":{ + "shape":"Long", + "documentation":"

The private Autonomous System Number (ASN) for the Amazon side of a BGP session.

", + "locationName":"amazonSideAsn" + }, "Tags":{ "shape":"TagList", "documentation":"

Any tags assigned to the virtual private gateway.

", @@ -19361,6 +20833,20 @@ "type":"string", "enum":["Static"] }, + "VpnTunnelOptionsSpecification":{ + "type":"structure", + "members":{ + "TunnelInsideCidr":{ + "shape":"String", + "documentation":"

The range of inside IP addresses for the tunnel. Any specified CIDR blocks must be unique across all VPN connections that use the same virtual private gateway.

Constraints: A size /30 CIDR block from the 169.254.0.0/16 range. The following CIDR blocks are reserved and cannot be used:

" + }, + "PreSharedKey":{ + "shape":"String", + "documentation":"

The pre-shared key (PSK) to establish initial authentication between the virtual private gateway and customer gateway.

Constraints: Allowed characters are alphanumeric characters and ._. Must be between 8 and 64 characters in length and cannot start with zero (0).

" + } + }, + "documentation":"

The tunnel options for a VPN connection.
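As a hedged sketch of how the new tunnel options might be supplied when creating a VPN connection: the example assumes the generated VpnConnectionOptionsSpecification exposes a tunnelOptions setter for the TunnelOptionsList member added above, and all gateway IDs and the pre-shared key are placeholders chosen to satisfy the documented constraints.

```java
import software.amazon.awssdk.services.ec2.Ec2Client;
import software.amazon.awssdk.services.ec2.model.CreateVpnConnectionRequest;
import software.amazon.awssdk.services.ec2.model.VpnConnectionOptionsSpecification;
import software.amazon.awssdk.services.ec2.model.VpnTunnelOptionsSpecification;

public class CreateVpnWithTunnelOptions {
    public static void main(String[] args) {
        try (Ec2Client ec2 = Ec2Client.create()) {
            VpnTunnelOptionsSpecification tunnel = VpnTunnelOptionsSpecification.builder()
                    .tunnelInsideCidr("169.254.10.0/30")  // a /30 from 169.254.0.0/16, per the constraints above
                    .preSharedKey("example.psk.value1")   // placeholder; 8-64 chars, must not start with 0
                    .build();

            ec2.createVpnConnection(CreateVpnConnectionRequest.builder()
                    .type("ipsec.1")
                    .customerGatewayId("cgw-0123456789abcdef0") // placeholder
                    .vpnGatewayId("vgw-0123456789abcdef0")      // placeholder
                    .options(VpnConnectionOptionsSpecification.builder()
                            .staticRoutesOnly(true)
                            .tunnelOptions(tunnel)              // assumed setter name for TunnelOptions
                            .build())
                    .build());
        }
    }
}
```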

" + }, "ZoneNameStringList":{ "type":"list", "member":{ @@ -19377,4 +20863,4 @@ } }, "documentation":"Amazon Elastic Compute Cloud

Amazon Elastic Compute Cloud (Amazon EC2) provides resizable computing capacity in the Amazon Web Services (AWS) cloud. Using Amazon EC2 eliminates your need to invest in hardware up front, so you can develop and deploy applications faster.

" -} +} \ No newline at end of file diff --git a/services/ec2/src/main/resources/codegen-resources/waiters-2.json b/services/ec2/src/main/resources/codegen-resources/waiters-2.json index 7d8b7cdcd36f..33ea7b047164 100644 --- a/services/ec2/src/main/resources/codegen-resources/waiters-2.json +++ b/services/ec2/src/main/resources/codegen-resources/waiters-2.json @@ -390,6 +390,12 @@ "argument": "SpotInstanceRequests[].Status.Code", "expected": "fulfilled" }, + { + "state": "success", + "matcher": "pathAll", + "argument": "SpotInstanceRequests[].Status.Code", + "expected": "request-canceled-and-instance-running" + }, { "state": "failure", "matcher": "pathAny", @@ -413,6 +419,11 @@ "matcher": "pathAny", "argument": "SpotInstanceRequests[].Status.Code", "expected": "system-error" + }, + { + "state": "retry", + "matcher": "error", + "expected": "InvalidSpotInstanceRequestID.NotFound" } ] }, diff --git a/services/ecr/src/main/resources/codegen-resources/service-2.json b/services/ecr/src/main/resources/codegen-resources/service-2.json index 7684cc0085f0..7e2905ff15c6 100644 --- a/services/ecr/src/main/resources/codegen-resources/service-2.json +++ b/services/ecr/src/main/resources/codegen-resources/service-2.json @@ -1,7 +1,6 @@ { "version":"2.0", "metadata":{ - "uid":"ecr-2015-09-21", "apiVersion":"2015-09-21", "endpointPrefix":"ecr", "jsonVersion":"1.1", @@ -9,7 +8,8 @@ "serviceAbbreviation":"Amazon ECR", "serviceFullName":"Amazon EC2 Container Registry", "signatureVersion":"v4", - "targetPrefix":"AmazonEC2ContainerRegistry_V20150921" + "targetPrefix":"AmazonEC2ContainerRegistry_V20150921", + "uid":"ecr-2015-09-21" }, "operations":{ "BatchCheckLayerAvailability":{ @@ -75,7 +75,7 @@ {"shape":"LayerAlreadyExistsException"}, {"shape":"EmptyUploadException"} ], - "documentation":"

Inform Amazon ECR that the image layer upload for a specified registry, repository name, and upload ID, has completed. You can optionally provide a sha256 digest of the image layer for data validation purposes.

This operation is used by the Amazon ECR proxy, and it is not intended for general use by customers for pulling and pushing images. In most cases, you should use the docker CLI to pull, tag, and push images.

" + "documentation":"

Informs Amazon ECR that the image layer upload has completed for a specified registry, repository name, and upload ID. You can optionally provide a sha256 digest of the image layer for data validation purposes.

This operation is used by the Amazon ECR proxy, and it is not intended for general use by customers for pulling and pushing images. In most cases, you should use the docker CLI to pull, tag, and push images.

" }, "CreateRepository":{ "name":"CreateRepository", @@ -93,6 +93,22 @@ ], "documentation":"

Creates an image repository.

" }, + "DeleteLifecyclePolicy":{ + "name":"DeleteLifecyclePolicy", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"DeleteLifecyclePolicyRequest"}, + "output":{"shape":"DeleteLifecyclePolicyResponse"}, + "errors":[ + {"shape":"ServerException"}, + {"shape":"InvalidParameterException"}, + {"shape":"RepositoryNotFoundException"}, + {"shape":"LifecyclePolicyNotFoundException"} + ], + "documentation":"

Deletes the specified lifecycle policy.

" + }, "DeleteRepository":{ "name":"DeleteRepository", "http":{ @@ -187,6 +203,38 @@ ], "documentation":"

Retrieves the pre-signed Amazon S3 download URL corresponding to an image layer. You can only get URLs for image layers that are referenced in an image.

This operation is used by the Amazon ECR proxy, and it is not intended for general use by customers for pulling and pushing images. In most cases, you should use the docker CLI to pull, tag, and push images.

" }, + "GetLifecyclePolicy":{ + "name":"GetLifecyclePolicy", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"GetLifecyclePolicyRequest"}, + "output":{"shape":"GetLifecyclePolicyResponse"}, + "errors":[ + {"shape":"ServerException"}, + {"shape":"InvalidParameterException"}, + {"shape":"RepositoryNotFoundException"}, + {"shape":"LifecyclePolicyNotFoundException"} + ], + "documentation":"

Retrieves the specified lifecycle policy.

" + }, + "GetLifecyclePolicyPreview":{ + "name":"GetLifecyclePolicyPreview", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"GetLifecyclePolicyPreviewRequest"}, + "output":{"shape":"GetLifecyclePolicyPreviewResponse"}, + "errors":[ + {"shape":"ServerException"}, + {"shape":"InvalidParameterException"}, + {"shape":"RepositoryNotFoundException"}, + {"shape":"LifecyclePolicyPreviewNotFoundException"} + ], + "documentation":"

Retrieves the results of the specified lifecycle policy preview request.

" + }, "GetRepositoryPolicy":{ "name":"GetRepositoryPolicy", "http":{ @@ -251,6 +299,21 @@ ], "documentation":"

Creates or updates the image manifest and tags associated with an image.

This operation is used by the Amazon ECR proxy, and it is not intended for general use by customers for pulling and pushing images. In most cases, you should use the docker CLI to pull, tag, and push images.

" }, + "PutLifecyclePolicy":{ + "name":"PutLifecyclePolicy", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"PutLifecyclePolicyRequest"}, + "output":{"shape":"PutLifecyclePolicyResponse"}, + "errors":[ + {"shape":"ServerException"}, + {"shape":"InvalidParameterException"}, + {"shape":"RepositoryNotFoundException"} + ], + "documentation":"

Creates or updates a lifecycle policy.
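A minimal sketch of putting a lifecycle policy with the generated client, assuming SDK for Java 2.x conventions (EcrClient, PutLifecyclePolicyRequest). The repository name and the rule below (expire untagged images after 14 days) are illustrative only, not part of this change.

```java
import software.amazon.awssdk.services.ecr.EcrClient;
import software.amazon.awssdk.services.ecr.model.PutLifecyclePolicyRequest;

public class PutLifecyclePolicyExample {
    // An illustrative policy: expire untagged images once they are older than 14 days.
    private static final String POLICY_TEXT =
            "{\n"
            + "  \"rules\": [{\n"
            + "    \"rulePriority\": 1,\n"
            + "    \"description\": \"Expire untagged images after 14 days\",\n"
            + "    \"selection\": {\n"
            + "      \"tagStatus\": \"untagged\",\n"
            + "      \"countType\": \"sinceImagePushed\",\n"
            + "      \"countUnit\": \"days\",\n"
            + "      \"countNumber\": 14\n"
            + "    },\n"
            + "    \"action\": { \"type\": \"expire\" }\n"
            + "  }]\n"
            + "}";

    public static void main(String[] args) {
        try (EcrClient ecr = EcrClient.create()) {
            ecr.putLifecyclePolicy(PutLifecyclePolicyRequest.builder()
                    .repositoryName("my-repository")   // placeholder repository
                    .lifecyclePolicyText(POLICY_TEXT)  // must be 100-10240 characters, per this model
                    .build());
        }
    }
}
```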

" + }, "SetRepositoryPolicy":{ "name":"SetRepositoryPolicy", "http":{ @@ -266,6 +329,23 @@ ], "documentation":"

Applies a repository policy on a specified repository to control access permissions.

" }, + "StartLifecyclePolicyPreview":{ + "name":"StartLifecyclePolicyPreview", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"StartLifecyclePolicyPreviewRequest"}, + "output":{"shape":"StartLifecyclePolicyPreviewResponse"}, + "errors":[ + {"shape":"ServerException"}, + {"shape":"InvalidParameterException"}, + {"shape":"RepositoryNotFoundException"}, + {"shape":"LifecyclePolicyNotFoundException"}, + {"shape":"LifecyclePolicyPreviewInProgressException"} + ], + "documentation":"

Starts a preview of the specified lifecycle policy. This allows you to see the results before creating the lifecycle policy.
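To preview the effect of a policy before relying on it, a caller could start a preview and then poll for the results, roughly as sketched below. SDK for Java 2.x names are assumed, and the IN_PROGRESS/COMPLETE handling follows the LifecyclePolicyPreviewStatus enum defined further down in this model; the repository name is a placeholder.

```java
import software.amazon.awssdk.services.ecr.EcrClient;
import software.amazon.awssdk.services.ecr.model.GetLifecyclePolicyPreviewRequest;
import software.amazon.awssdk.services.ecr.model.GetLifecyclePolicyPreviewResponse;
import software.amazon.awssdk.services.ecr.model.LifecyclePolicyPreviewStatus;
import software.amazon.awssdk.services.ecr.model.StartLifecyclePolicyPreviewRequest;

public class PreviewLifecyclePolicy {
    public static void main(String[] args) throws InterruptedException {
        try (EcrClient ecr = EcrClient.create()) {
            // Start a dry run; with no lifecyclePolicyText, the repository's current policy is used.
            ecr.startLifecyclePolicyPreview(StartLifecyclePolicyPreviewRequest.builder()
                    .repositoryName("my-repository") // placeholder
                    .build());

            GetLifecyclePolicyPreviewResponse preview;
            do {
                Thread.sleep(5_000); // previews run asynchronously; poll until they finish
                preview = ecr.getLifecyclePolicyPreview(GetLifecyclePolicyPreviewRequest.builder()
                        .repositoryName("my-repository")
                        .build());
            } while (preview.status() == LifecyclePolicyPreviewStatus.IN_PROGRESS);

            if (preview.status() == LifecyclePolicyPreviewStatus.COMPLETE) {
                System.out.println("Images that would expire: "
                        + preview.summary().expiringImageTotalCount());
            }
        }
    }
}
```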

" + }, "UploadLayerPart":{ "name":"UploadLayerPart", "http":{ @@ -498,6 +578,41 @@ } }, "CreationTimestamp":{"type":"timestamp"}, + "DeleteLifecyclePolicyRequest":{ + "type":"structure", + "required":["repositoryName"], + "members":{ + "registryId":{ + "shape":"RegistryId", + "documentation":"

The AWS account ID associated with the registry that contains the repository. If you do not specify a registry, the default registry is assumed.

" + }, + "repositoryName":{ + "shape":"RepositoryName", + "documentation":"

The name of the repository that is associated with the lifecycle policy to delete.

" + } + } + }, + "DeleteLifecyclePolicyResponse":{ + "type":"structure", + "members":{ + "registryId":{ + "shape":"RegistryId", + "documentation":"

The registry ID associated with the request.

" + }, + "repositoryName":{ + "shape":"RepositoryName", + "documentation":"

The repository name associated with the request.

" + }, + "lifecyclePolicyText":{ + "shape":"LifecyclePolicyText", + "documentation":"

The JSON lifecycle policy text.

" + }, + "lastEvaluatedAt":{ + "shape":"EvaluationTimestamp", + "documentation":"

The time stamp of the last time that the lifecycle policy was run.

" + } + } + }, "DeleteRepositoryPolicyRequest":{ "type":"structure", "required":["repositoryName"], @@ -543,7 +658,7 @@ }, "force":{ "shape":"ForceFlag", - "documentation":"

Force the deletion of the repository if it contains images.

" + "documentation":"

If a repository contains images, forces the deletion.

" } } }, @@ -654,6 +769,7 @@ "documentation":"

The specified layer upload does not contain any layer parts.

", "exception":true }, + "EvaluationTimestamp":{"type":"timestamp"}, "ExceptionMessage":{"type":"string"}, "ExpirationTimestamp":{"type":"timestamp"}, "ForceFlag":{"type":"boolean"}, @@ -715,6 +831,104 @@ } } }, + "GetLifecyclePolicyPreviewRequest":{ + "type":"structure", + "required":["repositoryName"], + "members":{ + "registryId":{ + "shape":"RegistryId", + "documentation":"

The AWS account ID associated with the registry that contains the repository. If you do not specify a registry, the default registry is assumed.

" + }, + "repositoryName":{ + "shape":"RepositoryName", + "documentation":"

The name of the repository with the policy to retrieve.

" + }, + "imageIds":{ + "shape":"ImageIdentifierList", + "documentation":"

The list of image IDs to be included.

" + }, + "nextToken":{ + "shape":"NextToken", + "documentation":"

The nextToken value returned from a previous paginated GetLifecyclePolicyPreview request where maxResults was used and the results exceeded the value of that parameter. Pagination continues from the end of the previous results that returned the nextToken value. This value is null when there are no more results to return.

" + }, + "maxResults":{ + "shape":"MaxResults", + "documentation":"

The maximum number of repository results returned by GetLifecyclePolicyPreview in paginated output. When this parameter is used, GetLifecyclePolicyPreview only returns maxResults results in a single page along with a nextToken response element. The remaining results of the initial request can be seen by sending another GetLifecyclePolicyPreview request with the returned nextToken value. This value can be between 1 and 100. If this parameter is not used, then GetLifecyclePolicyPreview returns up to 100 results and a nextToken value, if applicable.
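The nextToken/maxResults contract described above can be consumed with a simple loop. A hedged sketch, assuming SDK for Java 2.x naming and a placeholder repository name:

```java
import software.amazon.awssdk.services.ecr.EcrClient;
import software.amazon.awssdk.services.ecr.model.GetLifecyclePolicyPreviewRequest;
import software.amazon.awssdk.services.ecr.model.GetLifecyclePolicyPreviewResponse;

public class PaginatePreviewResults {
    public static void main(String[] args) {
        try (EcrClient ecr = EcrClient.create()) {
            String nextToken = null;
            do {
                GetLifecyclePolicyPreviewResponse page = ecr.getLifecyclePolicyPreview(
                        GetLifecyclePolicyPreviewRequest.builder()
                                .repositoryName("my-repository") // placeholder
                                .maxResults(100)                 // page size, 1-100
                                .nextToken(nextToken)            // null on the first request
                                .build());
                page.previewResults().forEach(result ->
                        System.out.println(result.imageDigest() + " -> " + result.action().type()));
                nextToken = page.nextToken();                    // null when there are no more pages
            } while (nextToken != null);
        }
    }
}
```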

" + }, + "filter":{ + "shape":"LifecyclePolicyPreviewFilter", + "documentation":"

An optional parameter that filters results based on image tag status and all tags, if tagged.

" + } + } + }, + "GetLifecyclePolicyPreviewResponse":{ + "type":"structure", + "members":{ + "registryId":{ + "shape":"RegistryId", + "documentation":"

The registry ID associated with the request.

" + }, + "repositoryName":{ + "shape":"RepositoryName", + "documentation":"

The repository name associated with the request.

" + }, + "lifecyclePolicyText":{ + "shape":"LifecyclePolicyText", + "documentation":"

The JSON lifecycle policy text.

" + }, + "status":{ + "shape":"LifecyclePolicyPreviewStatus", + "documentation":"

The status of the lifecycle policy preview request.

" + }, + "nextToken":{ + "shape":"NextToken", + "documentation":"

The nextToken value to include in a future GetLifecyclePolicyPreview request. When the results of a GetLifecyclePolicyPreview request exceed maxResults, this value can be used to retrieve the next page of results. This value is null when there are no more results to return.

" + }, + "previewResults":{ + "shape":"LifecyclePolicyPreviewResultList", + "documentation":"

The results of the lifecycle policy preview request.

" + }, + "summary":{ + "shape":"LifecyclePolicyPreviewSummary", + "documentation":"

The summary of the lifecycle policy preview results, including the number of expiring images.

" + } + } + }, + "GetLifecyclePolicyRequest":{ + "type":"structure", + "required":["repositoryName"], + "members":{ + "registryId":{ + "shape":"RegistryId", + "documentation":"

The AWS account ID associated with the registry that contains the repository. If you do not specify a registry, the default registry is assumed.

" + }, + "repositoryName":{ + "shape":"RepositoryName", + "documentation":"

The name of the repository with the policy to retrieve.

" + } + } + }, + "GetLifecyclePolicyResponse":{ + "type":"structure", + "members":{ + "registryId":{ + "shape":"RegistryId", + "documentation":"

The registry ID associated with the request.

" + }, + "repositoryName":{ + "shape":"RepositoryName", + "documentation":"

The repository name associated with the request.

" + }, + "lifecyclePolicyText":{ + "shape":"LifecyclePolicyText", + "documentation":"

The JSON lifecycle policy text.

" + }, + "lastEvaluatedAt":{ + "shape":"EvaluationTimestamp", + "documentation":"

The time stamp of the last time that the lifecycle policy was run.

" + } + } + }, "GetRepositoryPolicyRequest":{ "type":"structure", "required":["repositoryName"], @@ -725,7 +939,7 @@ }, "repositoryName":{ "shape":"RepositoryName", - "documentation":"

The name of the repository whose policy you want to retrieve.

" + "documentation":"

The name of the repository with the policy to retrieve.

" } } }, @@ -768,6 +982,10 @@ }, "documentation":"

An object representing an Amazon ECR image.

" }, + "ImageActionType":{ + "type":"string", + "enum":["EXPIRE"] + }, "ImageAlreadyExistsException":{ "type":"structure", "members":{ @@ -776,9 +994,13 @@ "documentation":"

The error message associated with the exception.

" } }, - "documentation":"

The specified image has already been pushed, and there are no changes to the manifest or image tag since the last push.

", + "documentation":"

The specified image has already been pushed, and there were no changes to the manifest or image tag after the last push.

", "exception":true }, + "ImageCount":{ + "type":"integer", + "min":0 + }, "ImageDetail":{ "type":"structure", "members":{ @@ -892,11 +1114,11 @@ "members":{ "registryId":{ "shape":"RegistryId", - "documentation":"

The AWS account ID associated with the registry that you intend to upload layers to. If you do not specify a registry, the default registry is assumed.

" + "documentation":"

The AWS account ID associated with the registry to which you intend to upload layers. If you do not specify a registry, the default registry is assumed.

" }, "repositoryName":{ "shape":"RepositoryName", - "documentation":"

The name of the repository that you intend to upload layers to.

" + "documentation":"

The name of the repository to which you intend to upload layers.

" } } }, @@ -1081,6 +1303,108 @@ "documentation":"

The specified layers could not be found, or the specified layer is not valid for this repository.

", "exception":true }, + "LifecyclePolicyNotFoundException":{ + "type":"structure", + "members":{ + "message":{"shape":"ExceptionMessage"} + }, + "documentation":"

The lifecycle policy could not be found, and no policy is set for the repository.

", + "exception":true + }, + "LifecyclePolicyPreviewFilter":{ + "type":"structure", + "members":{ + "tagStatus":{ + "shape":"TagStatus", + "documentation":"

The tag status of the image.

" + } + }, + "documentation":"

The filter for the lifecycle policy preview.

" + }, + "LifecyclePolicyPreviewInProgressException":{ + "type":"structure", + "members":{ + "message":{"shape":"ExceptionMessage"} + }, + "documentation":"

The previous lifecycle policy preview request has not completed. Please try again later.

", + "exception":true + }, + "LifecyclePolicyPreviewNotFoundException":{ + "type":"structure", + "members":{ + "message":{"shape":"ExceptionMessage"} + }, + "documentation":"

There is no dry run for this repository.

", + "exception":true + }, + "LifecyclePolicyPreviewResult":{ + "type":"structure", + "members":{ + "imageTags":{ + "shape":"ImageTagList", + "documentation":"

The list of tags associated with this image.

" + }, + "imageDigest":{ + "shape":"ImageDigest", + "documentation":"

The sha256 digest of the image manifest.

" + }, + "imagePushedAt":{ + "shape":"PushTimestamp", + "documentation":"

The date and time, expressed in standard JavaScript date format, at which the current image was pushed to the repository.

" + }, + "action":{ + "shape":"LifecyclePolicyRuleAction", + "documentation":"

The type of action to be taken.

" + }, + "appliedRulePriority":{ + "shape":"LifecyclePolicyRulePriority", + "documentation":"

The priority of the applied rule.

" + } + }, + "documentation":"

The result of the lifecycle policy preview.

" + }, + "LifecyclePolicyPreviewResultList":{ + "type":"list", + "member":{"shape":"LifecyclePolicyPreviewResult"} + }, + "LifecyclePolicyPreviewStatus":{ + "type":"string", + "enum":[ + "IN_PROGRESS", + "COMPLETE", + "EXPIRED", + "FAILED" + ] + }, + "LifecyclePolicyPreviewSummary":{ + "type":"structure", + "members":{ + "expiringImageTotalCount":{ + "shape":"ImageCount", + "documentation":"

The number of expiring images.

" + } + }, + "documentation":"

The summary of the lifecycle policy preview request.

" + }, + "LifecyclePolicyRuleAction":{ + "type":"structure", + "members":{ + "type":{ + "shape":"ImageActionType", + "documentation":"

The type of action to be taken.

" + } + }, + "documentation":"

The type of action to be taken.

" + }, + "LifecyclePolicyRulePriority":{ + "type":"integer", + "min":1 + }, + "LifecyclePolicyText":{ + "type":"string", + "max":10240, + "min":100 + }, "LimitExceededException":{ "type":"structure", "members":{ @@ -1108,11 +1432,11 @@ "members":{ "registryId":{ "shape":"RegistryId", - "documentation":"

The AWS account ID associated with the registry that contains the repository to list images in. If you do not specify a registry, the default registry is assumed.

" + "documentation":"

The AWS account ID associated with the registry that contains the repository in which to list images. If you do not specify a registry, the default registry is assumed.

" }, "repositoryName":{ "shape":"RepositoryName", - "documentation":"

The repository whose image IDs are to be listed.

" + "documentation":"

The repository with image IDs to be listed.

" }, "nextToken":{ "shape":"NextToken", @@ -1146,10 +1470,7 @@ "max":100, "min":1 }, - "MediaType":{ - "type":"string", - "pattern":"\\w{1,127}\\/[-+.\\w]{1,127}" - }, + "MediaType":{"type":"string"}, "MediaTypeList":{ "type":"list", "member":{"shape":"MediaType"}, @@ -1197,6 +1518,44 @@ } } }, + "PutLifecyclePolicyRequest":{ + "type":"structure", + "required":[ + "repositoryName", + "lifecyclePolicyText" + ], + "members":{ + "registryId":{ + "shape":"RegistryId", + "documentation":"

The AWS account ID associated with the registry that contains the repository. If you do not specify a registry, the default registry is assumed.

" + }, + "repositoryName":{ + "shape":"RepositoryName", + "documentation":"

The name of the repository to receive the policy.

" + }, + "lifecyclePolicyText":{ + "shape":"LifecyclePolicyText", + "documentation":"

The JSON lifecycle policy text to apply to the repository.

" + } + } + }, + "PutLifecyclePolicyResponse":{ + "type":"structure", + "members":{ + "registryId":{ + "shape":"RegistryId", + "documentation":"

The registry ID associated with the request.

" + }, + "repositoryName":{ + "shape":"RepositoryName", + "documentation":"

The repository name associated with the request.

" + }, + "lifecyclePolicyText":{ + "shape":"LifecyclePolicyText", + "documentation":"

The JSON lifecycle policy text.

" + } + } + }, "RegistryId":{ "type":"string", "pattern":"[0-9]{12}" @@ -1206,7 +1565,7 @@ "members":{ "repositoryArn":{ "shape":"Arn", - "documentation":"

The Amazon Resource Name (ARN) that identifies the repository. The ARN contains the arn:aws:ecr namespace, followed by the region of the repository, the AWS account ID of the repository owner, the repository namespace, and then the repository name. For example, arn:aws:ecr:region:012345678910:repository/test.

" + "documentation":"

The Amazon Resource Name (ARN) that identifies the repository. The ARN contains the arn:aws:ecr namespace, followed by the region of the repository, AWS account ID of the repository owner, repository namespace, and repository name. For example, arn:aws:ecr:region:012345678910:repository/test.

" }, "registryId":{ "shape":"RegistryId", @@ -1218,11 +1577,11 @@ }, "repositoryUri":{ "shape":"Url", - "documentation":"

The URI for the repository. You can use this URI for Docker push and pull operations.

" + "documentation":"

The URI for the repository. You can use this URI for Docker push or pull operations.

" }, "createdAt":{ "shape":"CreationTimestamp", - "documentation":"

The date and time, in JavaScript date/time format, when the repository was created.

" + "documentation":"

The date and time, in JavaScript date format, when the repository was created.

" } }, "documentation":"

An object representing a repository.

" @@ -1346,6 +1705,45 @@ } } }, + "StartLifecyclePolicyPreviewRequest":{ + "type":"structure", + "required":["repositoryName"], + "members":{ + "registryId":{ + "shape":"RegistryId", + "documentation":"

The AWS account ID associated with the registry that contains the repository. If you do not specify a registry, the default registry is assumed.

" + }, + "repositoryName":{ + "shape":"RepositoryName", + "documentation":"

The name of the repository to be evaluated.

" + }, + "lifecyclePolicyText":{ + "shape":"LifecyclePolicyText", + "documentation":"

The policy to be evaluated against. If you do not specify a policy, the current policy for the repository is used.

" + } + } + }, + "StartLifecyclePolicyPreviewResponse":{ + "type":"structure", + "members":{ + "registryId":{ + "shape":"RegistryId", + "documentation":"

The registry ID associated with the request.

" + }, + "repositoryName":{ + "shape":"RepositoryName", + "documentation":"

The repository name associated with the request.

" + }, + "lifecyclePolicyText":{ + "shape":"LifecyclePolicyText", + "documentation":"

The JSON lifecycle policy text.

" + }, + "status":{ + "shape":"LifecyclePolicyPreviewStatus", + "documentation":"

The status of the lifecycle policy preview request.

" + } + } + }, "TagStatus":{ "type":"string", "enum":[ @@ -1369,11 +1767,11 @@ "members":{ "registryId":{ "shape":"RegistryId", - "documentation":"

The AWS account ID associated with the registry that you are uploading layer parts to. If you do not specify a registry, the default registry is assumed.

" + "documentation":"

The AWS account ID associated with the registry to which you are uploading layer parts. If you do not specify a registry, the default registry is assumed.

" }, "repositoryName":{ "shape":"RepositoryName", - "documentation":"

The name of the repository that you are uploading layer parts to.

" + "documentation":"

The name of the repository to which you are uploading layer parts.

" }, "uploadId":{ "shape":"UploadId", @@ -1427,5 +1825,5 @@ }, "Url":{"type":"string"} }, - "documentation":"

Amazon EC2 Container Registry (Amazon ECR) is a managed AWS Docker registry service. Customers can use the familiar Docker CLI to push, pull, and manage images. Amazon ECR provides a secure, scalable, and reliable registry. Amazon ECR supports private Docker repositories with resource-based permissions using AWS IAM so that specific users or Amazon EC2 instances can access repositories and images. Developers can use the Docker CLI to author and manage images.

" + "documentation":"

Amazon EC2 Container Registry (Amazon ECR) is a managed Docker registry service. Customers can use the familiar Docker CLI to push, pull, and manage images. Amazon ECR provides a secure, scalable, and reliable registry. Amazon ECR supports private Docker repositories with resource-based permissions using IAM so that specific users or Amazon EC2 instances can access repositories and images. Developers can use the Docker CLI to author and manage images.

" } diff --git a/services/ecs/src/main/resources/codegen-resources/service-2.json b/services/ecs/src/main/resources/codegen-resources/service-2.json index 62ff7ed05ead..51178b89fe19 100644 --- a/services/ecs/src/main/resources/codegen-resources/service-2.json +++ b/services/ecs/src/main/resources/codegen-resources/service-2.json @@ -7,6 +7,7 @@ "protocol":"json", "serviceAbbreviation":"Amazon ECS", "serviceFullName":"Amazon EC2 Container Service", + "serviceId":"ECS", "signatureVersion":"v4", "targetPrefix":"AmazonEC2ContainerServiceV20141113", "uid":"ecs-2014-11-13" @@ -25,7 +26,7 @@ {"shape":"ClientException"}, {"shape":"InvalidParameterException"} ], - "documentation":"

Creates a new Amazon ECS cluster. By default, your account receives a default cluster when you launch your first container instance. However, you can create your own cluster with a unique name with the CreateCluster action.

" + "documentation":"

Creates a new Amazon ECS cluster. By default, your account receives a default cluster when you launch your first container instance. However, you can create your own cluster with a unique name with the CreateCluster action.

When you call the CreateCluster API operation, Amazon ECS attempts to create the service-linked role for your account so that required resources in other AWS services can be managed on your behalf. However, if the IAM user that makes the call does not have permissions to create the service-linked role, it is not created. For more information, see Using Service-Linked Roles for Amazon ECS in the Amazon EC2 Container Service Developer Guide.
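As a small sketch (SDK for Java 2.x naming assumed; the cluster name is a placeholder), creating a cluster is a single call, and the service-linked role creation described above happens implicitly on the service side.

```java
import software.amazon.awssdk.services.ecs.EcsClient;
import software.amazon.awssdk.services.ecs.model.CreateClusterRequest;

public class CreateClusterExample {
    public static void main(String[] args) {
        try (EcsClient ecs = EcsClient.create()) {
            // Amazon ECS attempts to create the service-linked role for the
            // account during this call, as noted above.
            String clusterArn = ecs.createCluster(CreateClusterRequest.builder()
                            .clusterName("my-cluster") // placeholder
                            .build())
                    .cluster()
                    .clusterArn();
            System.out.println("Created cluster: " + clusterArn);
        }
    }
}
```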

" }, "CreateService":{ "name":"CreateService", @@ -496,7 +497,7 @@ {"shape":"ServiceNotFoundException"}, {"shape":"ServiceNotActiveException"} ], - "documentation":"

Modifies the desired count, deployment configuration, or task definition used in a service.

You can add to or subtract from the number of instantiations of a task definition in a service by specifying the cluster that the service is running in and a new desiredCount parameter.

You can use UpdateService to modify your task definition and deploy a new version of your service.

You can also update the deployment configuration of a service. When a deployment is triggered by updating the task definition of a service, the service scheduler uses the deployment configuration parameters, minimumHealthyPercent and maximumPercent, to determine the deployment strategy.

When UpdateService stops a task during a deployment, the equivalent of docker stop is issued to the containers running in the task. This results in a SIGTERM and a 30-second timeout, after which SIGKILL is sent and the containers are forcibly stopped. If the container handles the SIGTERM gracefully and exits within 30 seconds from receiving it, no SIGKILL is sent.

When the service scheduler launches new tasks, it determines task placement in your cluster with the following logic:

When the service scheduler stops running tasks, it attempts to maintain balance across the Availability Zones in your cluster using the following logic:

" + "documentation":"

Modifies the desired count, deployment configuration, network configuration, or task definition used in a service.

You can add to or subtract from the number of instantiations of a task definition in a service by specifying the cluster that the service is running in and a new desiredCount parameter.

You can use UpdateService to modify your task definition and deploy a new version of your service.

You can also update the deployment configuration of a service. When a deployment is triggered by updating the task definition of a service, the service scheduler uses the deployment configuration parameters, minimumHealthyPercent and maximumPercent, to determine the deployment strategy.
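For instance, a deployment configuration change might look like the following sketch (SDK for Java 2.x names assumed; the cluster, service, desired count, and percentages are placeholders).

```java
import software.amazon.awssdk.services.ecs.EcsClient;
import software.amazon.awssdk.services.ecs.model.DeploymentConfiguration;
import software.amazon.awssdk.services.ecs.model.UpdateServiceRequest;

public class UpdateServiceDeploymentConfig {
    public static void main(String[] args) {
        try (EcsClient ecs = EcsClient.create()) {
            ecs.updateService(UpdateServiceRequest.builder()
                    .cluster("my-cluster")   // placeholder
                    .service("my-service")   // placeholder
                    .desiredCount(4)
                    // Keep at least half of the desired tasks healthy, and allow the
                    // scheduler to run up to twice the desired count while deploying.
                    .deploymentConfiguration(DeploymentConfiguration.builder()
                            .minimumHealthyPercent(50)
                            .maximumPercent(200)
                            .build())
                    .build());
        }
    }
}
```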

When UpdateService stops a task during a deployment, the equivalent of docker stop is issued to the containers running in the task. This results in a SIGTERM and a 30-second timeout, after which SIGKILL is sent and the containers are forcibly stopped. If the container handles the SIGTERM gracefully and exits within 30 seconds from receiving it, no SIGKILL is sent.

When the service scheduler launches new tasks, it determines task placement in your cluster with the following logic:

When the service scheduler stops running tasks, it attempts to maintain balance across the Availability Zones in your cluster using the following logic:

" } }, "shapes":{ @@ -511,6 +512,58 @@ "FAILED" ] }, + "Attachment":{ + "type":"structure", + "members":{ + "id":{ + "shape":"String", + "documentation":"

The unique identifier for the attachment.

" + }, + "type":{ + "shape":"String", + "documentation":"

The type of the attachment, such as an ElasticNetworkInterface.

" + }, + "status":{ + "shape":"String", + "documentation":"

The status of the attachment. Valid values are PRECREATED, CREATED, ATTACHING, ATTACHED, DETACHING, DETACHED, and DELETED.

" + }, + "details":{ + "shape":"AttachmentDetails", + "documentation":"

Details of the attachment. For Elastic Network Interfaces, this includes the network interface ID, the MAC address, the subnet ID, and the private IPv4 address.

" + } + }, + "documentation":"

An object representing a container instance or task attachment.

" + }, + "AttachmentDetails":{ + "type":"list", + "member":{"shape":"KeyValuePair"} + }, + "AttachmentStateChange":{ + "type":"structure", + "required":[ + "attachmentArn", + "status" + ], + "members":{ + "attachmentArn":{ + "shape":"String", + "documentation":"

The Amazon Resource Name (ARN) of the attachment.

" + }, + "status":{ + "shape":"String", + "documentation":"

The status of the attachment.

" + } + }, + "documentation":"

An object representing a change in state for a task attachment.

" + }, + "AttachmentStateChanges":{ + "type":"list", + "member":{"shape":"AttachmentStateChange"} + }, + "Attachments":{ + "type":"list", + "member":{"shape":"Attachment"} + }, "Attribute":{ "type":"structure", "required":["name"], @@ -545,6 +598,21 @@ "type":"list", "member":{"shape":"Attribute"} }, + "AwsVpcConfiguration":{ + "type":"structure", + "required":["subnets"], + "members":{ + "subnets":{ + "shape":"StringList", + "documentation":"

The subnets associated with the task or service.

" + }, + "securityGroups":{ + "shape":"StringList", + "documentation":"

The security groups associated with the task or service. If you do not specify a security group, the default security group for the VPC is used.

" + } + }, + "documentation":"

An object representing the subnets and security groups for a task or service.
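A minimal sketch of building this structure with the generated model classes (SDK for Java 2.x naming assumed; the subnet and security group IDs are placeholders). How the object is attached to a task or service is defined elsewhere in this model, so only the structure itself is shown.

```java
import software.amazon.awssdk.services.ecs.model.AwsVpcConfiguration;

public class AwsVpcConfigurationExample {
    public static void main(String[] args) {
        AwsVpcConfiguration vpcConfig = AwsVpcConfiguration.builder()
                .subnets("subnet-0123456789abcdef0", "subnet-0fedcba9876543210") // placeholders
                // If omitted, the VPC's default security group is used, per the documentation above.
                .securityGroups("sg-0123456789abcdef0")                          // placeholder
                .build();

        System.out.println(vpcConfig);
    }
}
```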

" + }, "Boolean":{"type":"boolean"}, "BoxedBoolean":{ "type":"boolean", @@ -651,6 +719,10 @@ "networkBindings":{ "shape":"NetworkBindings", "documentation":"

The network bindings associated with the container.

" + }, + "networkInterfaces":{ + "shape":"NetworkInterfaces", + "documentation":"

The network interfaces associated with the container.

" } }, "documentation":"

A Docker container that is part of a task.

" @@ -660,31 +732,31 @@ "members":{ "name":{ "shape":"String", - "documentation":"

The name of a container. If you are linking multiple containers together in a task definition, the name of one container can be entered in the links of another container to connect the containers. Up to 255 letters (uppercase and lowercase), numbers, hyphens, and underscores are allowed. This parameter maps to name in the Create a container section of the Docker Remote API and the --name option to docker run.

" + "documentation":"

The name of a container. If you are linking multiple containers together in a task definition, the name of one container can be entered in the links of another container to connect the containers. Up to 255 letters (uppercase and lowercase), numbers, hyphens, and underscores are allowed. This parameter maps to name in the Create a container section of the Docker Remote API and the --name option to docker run.

" }, "image":{ "shape":"String", - "documentation":"

The image used to start a container. This string is passed directly to the Docker daemon. Images in the Docker Hub registry are available by default. Other repositories are specified with repository-url/image:tag . Up to 255 letters (uppercase and lowercase), numbers, hyphens, underscores, colons, periods, forward slashes, and number signs are allowed. This parameter maps to Image in the Create a container section of the Docker Remote API and the IMAGE parameter of docker run.

" + "documentation":"

The image used to start a container. This string is passed directly to the Docker daemon. Images in the Docker Hub registry are available by default. Other repositories are specified with either repository-url/image:tag or repository-url/image@digest . Up to 255 letters (uppercase and lowercase), numbers, hyphens, underscores, colons, periods, forward slashes, and number signs are allowed. This parameter maps to Image in the Create a container section of the Docker Remote API and the IMAGE parameter of docker run.

" }, "cpu":{ "shape":"Integer", - "documentation":"

The number of cpu units reserved for the container. A container instance has 1,024 cpu units for every CPU core. This parameter specifies the minimum amount of CPU to reserve for a container, and containers share unallocated CPU units with other containers on the instance with the same ratio as their allocated amount. This parameter maps to CpuShares in the Create a container section of the Docker Remote API and the --cpu-shares option to docker run.

You can determine the number of CPU units that are available per EC2 instance type by multiplying the vCPUs listed for that instance type on the Amazon EC2 Instances detail page by 1,024.

For example, if you run a single-container task on a single-core instance type with 512 CPU units specified for that container, and that is the only task running on the container instance, that container could use the full 1,024 CPU unit share at any given time. However, if you launched another copy of the same task on that container instance, each task would be guaranteed a minimum of 512 CPU units when needed, and each container could float to higher CPU usage if the other container was not using it, but if both tasks were 100% active all of the time, they would be limited to 512 CPU units.

The Docker daemon on the container instance uses the CPU value to calculate the relative CPU share ratios for running containers. For more information, see CPU share constraint in the Docker documentation. The minimum valid CPU share value that the Linux kernel allows is 2; however, the CPU parameter is not required, and you can use CPU values below 2 in your container definitions. For CPU values below 2 (including null), the behavior varies based on your Amazon ECS container agent version:

" + "documentation":"

The number of cpu units reserved for the container. A container instance has 1,024 cpu units for every CPU core. This parameter specifies the minimum amount of CPU to reserve for a container, and containers share unallocated CPU units with other containers on the instance with the same ratio as their allocated amount. This parameter maps to CpuShares in the Create a container section of the Docker Remote API and the --cpu-shares option to docker run.

You can determine the number of CPU units that are available per EC2 instance type by multiplying the vCPUs listed for that instance type on the Amazon EC2 Instances detail page by 1,024.

For example, if you run a single-container task on a single-core instance type with 512 CPU units specified for that container, and that is the only task running on the container instance, that container could use the full 1,024 CPU unit share at any given time. However, if you launched another copy of the same task on that container instance, each task would be guaranteed a minimum of 512 CPU units when needed, and each container could float to higher CPU usage if the other container was not using it, but if both tasks were 100% active all of the time, they would be limited to 512 CPU units.

The Docker daemon on the container instance uses the CPU value to calculate the relative CPU share ratios for running containers. For more information, see CPU share constraint in the Docker documentation. The minimum valid CPU share value that the Linux kernel allows is 2; however, the CPU parameter is not required, and you can use CPU values below 2 in your container definitions. For CPU values below 2 (including null), the behavior varies based on your Amazon ECS container agent version:

" }, "memory":{ "shape":"BoxedInteger", - "documentation":"

The hard limit (in MiB) of memory to present to the container. If your container attempts to exceed the memory specified here, the container is killed. This parameter maps to Memory in the Create a container section of the Docker Remote API and the --memory option to docker run.

You must specify a non-zero integer for one or both of memory or memoryReservation in container definitions. If you specify both, memory must be greater than memoryReservation. If you specify memoryReservation, then that value is subtracted from the available memory resources for the container instance on which the container is placed; otherwise, the value of memory is used.

The Docker daemon reserves a minimum of 4 MiB of memory for a container, so you should not specify fewer than 4 MiB of memory for your containers.

" + "documentation":"

The hard limit (in MiB) of memory to present to the container. If your container attempts to exceed the memory specified here, the container is killed. This parameter maps to Memory in the Create a container section of the Docker Remote API and the --memory option to docker run.

You must specify a non-zero integer for one or both of memory or memoryReservation in container definitions. If you specify both, memory must be greater than memoryReservation. If you specify memoryReservation, then that value is subtracted from the available memory resources for the container instance on which the container is placed; otherwise, the value of memory is used.

The Docker daemon reserves a minimum of 4 MiB of memory for a container, so you should not specify fewer than 4 MiB of memory for your containers.

" }, "memoryReservation":{ "shape":"BoxedInteger", - "documentation":"

The soft limit (in MiB) of memory to reserve for the container. When system memory is under heavy contention, Docker attempts to keep the container memory to this soft limit; however, your container can consume more memory when it needs to, up to either the hard limit specified with the memory parameter (if applicable), or all of the available memory on the container instance, whichever comes first. This parameter maps to MemoryReservation in the Create a container section of the Docker Remote API and the --memory-reservation option to docker run.

You must specify a non-zero integer for one or both of memory or memoryReservation in container definitions. If you specify both, memory must be greater than memoryReservation. If you specify memoryReservation, then that value is subtracted from the available memory resources for the container instance on which the container is placed; otherwise, the value of memory is used.

For example, if your container normally uses 128 MiB of memory, but occasionally bursts to 256 MiB of memory for short periods of time, you can set a memoryReservation of 128 MiB, and a memory hard limit of 300 MiB. This configuration would allow the container to only reserve 128 MiB of memory from the remaining resources on the container instance, but also allow the container to consume more memory resources when needed.

" + "documentation":"

The soft limit (in MiB) of memory to reserve for the container. When system memory is under heavy contention, Docker attempts to keep the container memory to this soft limit; however, your container can consume more memory when it needs to, up to either the hard limit specified with the memory parameter (if applicable), or all of the available memory on the container instance, whichever comes first. This parameter maps to MemoryReservation in the Create a container section of the Docker Remote API and the --memory-reservation option to docker run.

You must specify a non-zero integer for one or both of memory or memoryReservation in container definitions. If you specify both, memory must be greater than memoryReservation. If you specify memoryReservation, then that value is subtracted from the available memory resources for the container instance on which the container is placed; otherwise, the value of memory is used.

For example, if your container normally uses 128 MiB of memory, but occasionally bursts to 256 MiB of memory for short periods of time, you can set a memoryReservation of 128 MiB, and a memory hard limit of 300 MiB. This configuration would allow the container to only reserve 128 MiB of memory from the remaining resources on the container instance, but also allow the container to consume more memory resources when needed.

" }, "links":{ "shape":"StringList", - "documentation":"

The link parameter allows containers to communicate with each other without the need for port mappings, using the name parameter and optionally, an alias for the link. This construct is analogous to name:alias in Docker links. Up to 255 letters (uppercase and lowercase), numbers, hyphens, and underscores are allowed for each name and alias. For more information on linking Docker containers, see https://docs.docker.com/engine/userguide/networking/default_network/dockerlinks/. This parameter maps to Links in the Create a container section of the Docker Remote API and the --link option to docker run.

Containers that are collocated on a single container instance may be able to communicate with each other without requiring links or host port mappings. Network isolation is achieved on the container instance using security groups and VPC settings.

" + "documentation":"

The link parameter allows containers to communicate with each other without the need for port mappings, using the name parameter and optionally, an alias for the link. This construct is analogous to name:alias in Docker links. Up to 255 letters (uppercase and lowercase), numbers, hyphens, and underscores are allowed for each name and alias. For more information on linking Docker containers, see https://docs.docker.com/engine/userguide/networking/default_network/dockerlinks/. This parameter maps to Links in the Create a container section of the Docker Remote API and the --link option to docker run.

Containers that are collocated on a single container instance may be able to communicate with each other without requiring links or host port mappings. Network isolation is achieved on the container instance using security groups and VPC settings.

" }, "portMappings":{ "shape":"PortMappingList", - "documentation":"

The list of port mappings for the container. Port mappings allow containers to access ports on the host container instance to send or receive traffic. This parameter maps to PortBindings in the Create a container section of the Docker Remote API and the --publish option to docker run. If the network mode of a task definition is set to none, then you cannot specify port mappings. If the network mode of a task definition is set to host, then host ports must either be undefined or they must match the container port in the port mapping.

After a task reaches the RUNNING status, manual and automatic host and container port assignments are visible in the Network Bindings section of a container description of a selected task in the Amazon ECS console, or the networkBindings section DescribeTasks responses.

" + "documentation":"

The list of port mappings for the container. Port mappings allow containers to access ports on the host container instance to send or receive traffic. This parameter maps to PortBindings in the Create a container section of the Docker Remote API and the --publish option to docker run. If the network mode of a task definition is set to none, then you cannot specify port mappings. If the network mode of a task definition is set to host, then host ports must either be undefined or they must match the container port in the port mapping.

After a task reaches the RUNNING status, manual and automatic host and container port assignments are visible in the Network Bindings section of a container description of a selected task in the Amazon ECS console, or the networkBindings section DescribeTasks responses.

" }, "essential":{ "shape":"BoxedBoolean", @@ -692,75 +764,79 @@ }, "entryPoint":{ "shape":"StringList", - "documentation":"

Early versions of the Amazon ECS container agent do not properly handle entryPoint parameters. If you have problems using entryPoint, update your container agent or enter your commands and arguments as command array items instead.

The entry point that is passed to the container. This parameter maps to Entrypoint in the Create a container section of the Docker Remote API and the --entrypoint option to docker run. For more information, see https://docs.docker.com/engine/reference/builder/#entrypoint.

" + "documentation":"

Early versions of the Amazon ECS container agent do not properly handle entryPoint parameters. If you have problems using entryPoint, update your container agent or enter your commands and arguments as command array items instead.

The entry point that is passed to the container. This parameter maps to Entrypoint in the Create a container section of the Docker Remote API and the --entrypoint option to docker run. For more information, see https://docs.docker.com/engine/reference/builder/#entrypoint.

" }, "command":{ "shape":"StringList", - "documentation":"

The command that is passed to the container. This parameter maps to Cmd in the Create a container section of the Docker Remote API and the COMMAND parameter to docker run. For more information, see https://docs.docker.com/engine/reference/builder/#cmd.

" + "documentation":"

The command that is passed to the container. This parameter maps to Cmd in the Create a container section of the Docker Remote API and the COMMAND parameter to docker run. For more information, see https://docs.docker.com/engine/reference/builder/#cmd.

" }, "environment":{ "shape":"EnvironmentVariables", - "documentation":"

The environment variables to pass to a container. This parameter maps to Env in the Create a container section of the Docker Remote API and the --env option to docker run.

We do not recommend using plain text environment variables for sensitive information, such as credential data.

" + "documentation":"

The environment variables to pass to a container. This parameter maps to Env in the Create a container section of the Docker Remote API and the --env option to docker run.

We do not recommend using plain text environment variables for sensitive information, such as credential data.

" }, "mountPoints":{ "shape":"MountPointList", - "documentation":"

The mount points for data volumes in your container. This parameter maps to Volumes in the Create a container section of the Docker Remote API and the --volume option to docker run.

" + "documentation":"

The mount points for data volumes in your container. This parameter maps to Volumes in the Create a container section of the Docker Remote API and the --volume option to docker run.

" }, "volumesFrom":{ "shape":"VolumeFromList", - "documentation":"

Data volumes to mount from another container. This parameter maps to VolumesFrom in the Create a container section of the Docker Remote API and the --volumes-from option to docker run.

" + "documentation":"

Data volumes to mount from another container. This parameter maps to VolumesFrom in the Create a container section of the Docker Remote API and the --volumes-from option to docker run.

" + }, + "linuxParameters":{ + "shape":"LinuxParameters", + "documentation":"

Linux-specific modifications that are applied to the container, such as Linux KernelCapabilities.

" }, "hostname":{ "shape":"String", - "documentation":"

The hostname to use for your container. This parameter maps to Hostname in the Create a container section of the Docker Remote API and the --hostname option to docker run.

" + "documentation":"

The hostname to use for your container. This parameter maps to Hostname in the Create a container section of the Docker Remote API and the --hostname option to docker run.

" }, "user":{ "shape":"String", - "documentation":"

The user name to use inside the container. This parameter maps to User in the Create a container section of the Docker Remote API and the --user option to docker run.

" + "documentation":"

The user name to use inside the container. This parameter maps to User in the Create a container section of the Docker Remote API and the --user option to docker run.

" }, "workingDirectory":{ "shape":"String", - "documentation":"

The working directory in which to run commands inside the container. This parameter maps to WorkingDir in the Create a container section of the Docker Remote API and the --workdir option to docker run.

" + "documentation":"

The working directory in which to run commands inside the container. This parameter maps to WorkingDir in the Create a container section of the Docker Remote API and the --workdir option to docker run.

" }, "disableNetworking":{ "shape":"BoxedBoolean", - "documentation":"

When this parameter is true, networking is disabled within the container. This parameter maps to NetworkDisabled in the Create a container section of the Docker Remote API.

" + "documentation":"

When this parameter is true, networking is disabled within the container. This parameter maps to NetworkDisabled in the Create a container section of the Docker Remote API.

" }, "privileged":{ "shape":"BoxedBoolean", - "documentation":"

When this parameter is true, the container is given elevated privileges on the host container instance (similar to the root user). This parameter maps to Privileged in the Create a container section of the Docker Remote API and the --privileged option to docker run.

" + "documentation":"

When this parameter is true, the container is given elevated privileges on the host container instance (similar to the root user). This parameter maps to Privileged in the Create a container section of the Docker Remote API and the --privileged option to docker run.

" }, "readonlyRootFilesystem":{ "shape":"BoxedBoolean", - "documentation":"

When this parameter is true, the container is given read-only access to its root file system. This parameter maps to ReadonlyRootfs in the Create a container section of the Docker Remote API and the --read-only option to docker run.

" + "documentation":"

When this parameter is true, the container is given read-only access to its root file system. This parameter maps to ReadonlyRootfs in the Create a container section of the Docker Remote API and the --read-only option to docker run.

" }, "dnsServers":{ "shape":"StringList", - "documentation":"

A list of DNS servers that are presented to the container. This parameter maps to Dns in the Create a container section of the Docker Remote API and the --dns option to docker run.

" + "documentation":"

A list of DNS servers that are presented to the container. This parameter maps to Dns in the Create a container section of the Docker Remote API and the --dns option to docker run.

" }, "dnsSearchDomains":{ "shape":"StringList", - "documentation":"

A list of DNS search domains that are presented to the container. This parameter maps to DnsSearch in the Create a container section of the Docker Remote API and the --dns-search option to docker run.

" + "documentation":"

A list of DNS search domains that are presented to the container. This parameter maps to DnsSearch in the Create a container section of the Docker Remote API and the --dns-search option to docker run.

" }, "extraHosts":{ "shape":"HostEntryList", - "documentation":"

A list of hostnames and IP address mappings to append to the /etc/hosts file on the container. This parameter maps to ExtraHosts in the Create a container section of the Docker Remote API and the --add-host option to docker run.

" + "documentation":"

A list of hostnames and IP address mappings to append to the /etc/hosts file on the container. This parameter maps to ExtraHosts in the Create a container section of the Docker Remote API and the --add-host option to docker run.

" }, "dockerSecurityOptions":{ "shape":"StringList", - "documentation":"

A list of strings to provide custom labels for SELinux and AppArmor multi-level security systems. This parameter maps to SecurityOpt in the Create a container section of the Docker Remote API and the --security-opt option to docker run.

The Amazon ECS container agent running on a container instance must register with the ECS_SELINUX_CAPABLE=true or ECS_APPARMOR_CAPABLE=true environment variables before containers placed on that instance can use these security options. For more information, see Amazon ECS Container Agent Configuration in the Amazon EC2 Container Service Developer Guide.

" + "documentation":"

A list of strings to provide custom labels for SELinux and AppArmor multi-level security systems. This parameter maps to SecurityOpt in the Create a container section of the Docker Remote API and the --security-opt option to docker run.

The Amazon ECS container agent running on a container instance must register with the ECS_SELINUX_CAPABLE=true or ECS_APPARMOR_CAPABLE=true environment variables before containers placed on that instance can use these security options. For more information, see Amazon ECS Container Agent Configuration in the Amazon EC2 Container Service Developer Guide.

" }, "dockerLabels":{ "shape":"DockerLabelsMap", - "documentation":"

A key/value map of labels to add to the container. This parameter maps to Labels in the Create a container section of the Docker Remote API and the --label option to docker run. This parameter requires version 1.18 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log into your container instance and run the following command: sudo docker version | grep \"Server API version\"

" + "documentation":"

A key/value map of labels to add to the container. This parameter maps to Labels in the Create a container section of the Docker Remote API and the --label option to docker run. This parameter requires version 1.18 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log into your container instance and run the following command: sudo docker version | grep \"Server API version\"

" }, "ulimits":{ "shape":"UlimitList", - "documentation":"

A list of ulimits to set in the container. This parameter maps to Ulimits in the Create a container section of the Docker Remote API and the --ulimit option to docker run. Valid naming values are displayed in the Ulimit data type. This parameter requires version 1.18 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log into your container instance and run the following command: sudo docker version | grep \"Server API version\"

" + "documentation":"

A list of ulimits to set in the container. This parameter maps to Ulimits in the Create a container section of the Docker Remote API and the --ulimit option to docker run. Valid naming values are displayed in the Ulimit data type. This parameter requires version 1.18 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log into your container instance and run the following command: sudo docker version | grep \"Server API version\"

" }, "logConfiguration":{ "shape":"LogConfiguration", - "documentation":"

The log configuration specification for the container. This parameter maps to LogConfig in the Create a container section of the Docker Remote API and the --log-driver option to docker run. By default, containers use the same logging driver that the Docker daemon uses; however the container may use a different logging driver than the Docker daemon by specifying a log driver with this parameter in the container definition. To use a different logging driver for a container, the log system must be configured properly on the container instance (or on a different log server for remote logging options). For more information on the options for different supported log drivers, see Configure logging drivers in the Docker documentation.

Amazon ECS currently supports a subset of the logging drivers available to the Docker daemon (shown in the LogConfiguration data type). Additional log drivers may be available in future releases of the Amazon ECS container agent.

This parameter requires version 1.18 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log into your container instance and run the following command: sudo docker version | grep \"Server API version\"

The Amazon ECS container agent running on a container instance must register the logging drivers available on that instance with the ECS_AVAILABLE_LOGGING_DRIVERS environment variable before containers placed on that instance can use these log configuration options. For more information, see Amazon ECS Container Agent Configuration in the Amazon EC2 Container Service Developer Guide.

" + "documentation":"

The log configuration specification for the container. This parameter maps to LogConfig in the Create a container section of the Docker Remote API and the --log-driver option to docker run. By default, containers use the same logging driver that the Docker daemon uses; however, the container may use a different logging driver than the Docker daemon by specifying a log driver with this parameter in the container definition. To use a different logging driver for a container, the log system must be configured properly on the container instance (or on a different log server for remote logging options). For more information on the options for different supported log drivers, see Configure logging drivers in the Docker documentation.

Amazon ECS currently supports a subset of the logging drivers available to the Docker daemon (shown in the LogConfiguration data type). Additional log drivers may be available in future releases of the Amazon ECS container agent.

This parameter requires version 1.18 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log into your container instance and run the following command: sudo docker version | grep \"Server API version\"

The Amazon ECS container agent running on a container instance must register the logging drivers available on that instance with the ECS_AVAILABLE_LOGGING_DRIVERS environment variable before containers placed on that instance can use these log configuration options. For more information, see Amazon ECS Container Agent Configuration in the Amazon EC2 Container Service Developer Guide.
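For illustration only, a minimal sketch of a container definition that sets logConfiguration, written against boto3 (not part of this SDK). The family name, log group, and region are placeholders, and the awslogs driver is assumed to be among the drivers the agent registers via ECS_AVAILABLE_LOGGING_DRIVERS.

import boto3

ecs = boto3.client("ecs", region_name="us-west-2")

# Register a task definition whose single container ships its logs with the
# awslogs driver; the options map is passed straight through to Docker.
ecs.register_task_definition(
    family="log-demo",                                 # hypothetical family name
    containerDefinitions=[{
        "name": "web",
        "image": "nginx",
        "memory": 128,
        "essential": True,
        "logConfiguration": {
            "logDriver": "awslogs",
            "options": {
                "awslogs-group": "/ecs/log-demo",      # placeholder log group
                "awslogs-region": "us-west-2",
                "awslogs-stream-prefix": "web",
            },
        },
    }],
)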

" } }, "documentation":"

Container definitions are used in task definitions to describe the different containers that are launched as part of a task.

" @@ -823,6 +899,10 @@ "registeredAt":{ "shape":"Timestamp", "documentation":"

The Unix timestamp for when the container instance was registered.

" + }, + "attachments":{ + "shape":"Attachments", + "documentation":"

The Elastic Network Interfaces associated with the container instance.

" } }, "documentation":"

An EC2 instance that is running the Amazon ECS agent and has been registered with a cluster.

" @@ -872,6 +952,36 @@ "type":"list", "member":{"shape":"ContainerOverride"} }, + "ContainerStateChange":{ + "type":"structure", + "members":{ + "containerName":{ + "shape":"String", + "documentation":"

The name of the container.

" + }, + "exitCode":{ + "shape":"BoxedInteger", + "documentation":"

The exit code for the container, if the state change is a result of the container exiting.

" + }, + "networkBindings":{ + "shape":"NetworkBindings", + "documentation":"

Any network bindings associated with the container.

" + }, + "reason":{ + "shape":"String", + "documentation":"

The reason for the state change.

" + }, + "status":{ + "shape":"String", + "documentation":"

The status of the container.

" + } + }, + "documentation":"

An object representing a change in state for a container.

" + }, + "ContainerStateChanges":{ + "type":"list", + "member":{"shape":"ContainerStateChange"} + }, "Containers":{ "type":"list", "member":{"shape":"Container"} @@ -916,7 +1026,7 @@ }, "loadBalancers":{ "shape":"LoadBalancers", - "documentation":"

A load balancer object representing the load balancer to use with your service. Currently, you are limited to one load balancer or target group per service. After you create a service, the load balancer name or target group ARN, container name, and container port specified in the service definition are immutable.

For Elastic Load Balancing Classic load balancers, this object must contain the load balancer name, the container name (as it appears in a container definition), and the container port to access from the load balancer. When a task from this service is placed on a container instance, the container instance is registered with the load balancer specified here.

For Elastic Load Balancing Application load balancers, this object must contain the load balancer target group ARN, the container name (as it appears in a container definition), and the container port to access from the load balancer. When a task from this service is placed on a container instance, the container instance and port combination is registered as a target in the target group specified here.

" + "documentation":"

A load balancer object representing the load balancer to use with your service. Currently, you are limited to one load balancer or target group per service. After you create a service, the load balancer name or target group ARN, container name, and container port specified in the service definition are immutable.

For Classic Load Balancers, this object must contain the load balancer name, the container name (as it appears in a container definition), and the container port to access from the load balancer. When a task from this service is placed on a container instance, the container instance is registered with the load balancer specified here.

For Application Load Balancers and Network Load Balancers, this object must contain the load balancer target group ARN, the container name (as it appears in a container definition), and the container port to access from the load balancer. When a task from this service is placed on a container instance, the container instance and port combination is registered as a target in the target group specified here.

" }, "desiredCount":{ "shape":"BoxedInteger", @@ -928,7 +1038,7 @@ }, "role":{ "shape":"String", - "documentation":"

The name or full Amazon Resource Name (ARN) of the IAM role that allows Amazon ECS to make calls to your load balancer on your behalf. This parameter is required if you are using a load balancer with your service. If you specify the role parameter, you must also specify a load balancer object with the loadBalancers parameter.

If your specified role has a path other than /, then you must either specify the full role ARN (this is recommended) or prefix the role name with the path. For example, if a role with the name bar has a path of /foo/ then you would specify /foo/bar as the role name. For more information, see Friendly Names and Paths in the IAM User Guide.

" + "documentation":"

The name or full Amazon Resource Name (ARN) of the IAM role that allows Amazon ECS to make calls to your load balancer on your behalf. This parameter is only permitted if you are using a load balancer with your service and your task definition does not use the awsvpc network mode. If you specify the role parameter, you must also specify a load balancer object with the loadBalancers parameter.

If your account has already created the Amazon ECS service-linked role, that role is used by default for your service unless you specify a role here. The service-linked role is required if your task definition uses the awsvpc network mode, in which case you should not specify a role here. For more information, see Using Service-Linked Roles for Amazon ECS in the Amazon EC2 Container Service Developer Guide.

If your specified role has a path other than /, then you must either specify the full role ARN (this is recommended) or prefix the role name with the path. For example, if a role with the name bar has a path of /foo/ then you would specify /foo/bar as the role name. For more information, see Friendly Names and Paths in the IAM User Guide.

" }, "deploymentConfiguration":{ "shape":"DeploymentConfiguration", @@ -941,6 +1051,10 @@ "placementStrategy":{ "shape":"PlacementStrategies", "documentation":"

The placement strategy objects to use for tasks in your service. You can specify a maximum of 5 strategy rules per service.

" + }, + "networkConfiguration":{ + "shape":"NetworkConfiguration", + "documentation":"

The network configuration for the service. This parameter is required for task definitions that use the awsvpc network mode to receive their own Elastic Network Interface, and it is not supported for other network modes. For more information, see Task Networking in the Amazon EC2 Container Service Developer Guide.
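A sketch of creating a service that uses the awsvpc network mode, using boto3 for illustration. The subnet and security group IDs are placeholders, and the AwsVpcConfiguration member names (subnets, securityGroups) are assumed from the corresponding shape, which is not shown in this hunk; no role is passed because the service-linked role is used for awsvpc tasks.

import boto3

ecs = boto3.client("ecs", region_name="us-west-2")

# The task definition referenced here must declare networkMode "awsvpc".
ecs.create_service(
    cluster="default",
    serviceName="awsvpc-demo",                         # hypothetical service name
    taskDefinition="awsvpc-demo:1",
    desiredCount=2,
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-12345678"],            # placeholder subnet ID
            "securityGroups": ["sg-12345678"],         # placeholder security group ID
        }
    },
)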

" } } }, @@ -1052,6 +1166,10 @@ "updatedAt":{ "shape":"Timestamp", "documentation":"

The Unix timestamp for when the service was last updated.

" + }, + "networkConfiguration":{ + "shape":"NetworkConfiguration", + "documentation":"

The VPC subnet and security group configuration for tasks that receive their own Elastic Network Interface by using the awsvpc networking mode.

" } }, "documentation":"

The details of an Amazon ECS service deployment.

" @@ -1088,7 +1206,7 @@ }, "force":{ "shape":"BoxedBoolean", - "documentation":"

Forces the deregistration of the container instance. If you have tasks running on the container instance when you deregister it with the force option, these tasks remain running until you terminate the instance or the tasks stop through some other means, but they are orphaned (no longer monitored or accounted for by Amazon ECS). If an orphaned task on your container instance is part of an Amazon ECS service, then the service scheduler starts another copy of that task, on a different container instance if possible.

Any containers in orphaned service tasks that are registered with a Classic load balancer or an Application load balancer target group are deregistered, and they will begin connection draining according to the settings on the load balancer or target group.

" + "documentation":"

Forces the deregistration of the container instance. If you have tasks running on the container instance when you deregister it with the force option, these tasks remain running until you terminate the instance or the tasks stop through some other means, but they are orphaned (no longer monitored or accounted for by Amazon ECS). If an orphaned task on your container instance is part of an Amazon ECS service, then the service scheduler starts another copy of that task, on a different container instance if possible.

Any containers in orphaned service tasks that are registered with a Classic Load Balancer or an Application Load Balancer target group are deregistered, and they will begin connection draining according to the settings on the load balancer or target group.
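As a sketch of the force flag described above (boto3 used for illustration; the cluster name and container instance ID are placeholders):

import boto3

ecs = boto3.client("ecs", region_name="us-west-2")

# Orphaned service tasks keep running, and any load balancer targets they
# registered begin connection draining, as described above.
ecs.deregister_container_instance(
    cluster="default",
    containerInstance="1c3be8ed-df30-47b4-8f1e-6e68ebd01f34",  # placeholder instance ID
    force=True,
)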

" } } }, @@ -1250,6 +1368,41 @@ "STOPPED" ] }, + "Device":{ + "type":"structure", + "required":["hostPath"], + "members":{ + "hostPath":{ + "shape":"String", + "documentation":"

The path for the device on the host container instance.

" + }, + "containerPath":{ + "shape":"String", + "documentation":"

The path inside the container at which to expose the host device.

" + }, + "permissions":{ + "shape":"DeviceCgroupPermissions", + "documentation":"

The explicit permissions to provide to the container for the device. By default, the container is granted read, write, and mknod permissions for the device.

" + } + }, + "documentation":"

An object representing a container instance host device.

" + }, + "DeviceCgroupPermission":{ + "type":"string", + "enum":[ + "read", + "write", + "mknod" + ] + }, + "DeviceCgroupPermissions":{ + "type":"list", + "member":{"shape":"DeviceCgroupPermission"} + }, + "DevicesList":{ + "type":"list", + "member":{"shape":"Device"} + }, "DiscoverPollEndpointRequest":{ "type":"structure", "members":{ @@ -1344,6 +1497,20 @@ "documentation":"

The specified parameter is invalid. Review the available parameters for the API request.

", "exception":true }, + "KernelCapabilities":{ + "type":"structure", + "members":{ + "add":{ + "shape":"StringList", + "documentation":"

The Linux capabilities for the container that have been added to the default configuration provided by Docker. This parameter maps to CapAdd in the Create a container section of the Docker Remote API and the --cap-add option to docker run.

Valid values: \"ALL\" | \"AUDIT_CONTROL\" | \"AUDIT_WRITE\" | \"BLOCK_SUSPEND\" | \"CHOWN\" | \"DAC_OVERRIDE\" | \"DAC_READ_SEARCH\" | \"FOWNER\" | \"FSETID\" | \"IPC_LOCK\" | \"IPC_OWNER\" | \"KILL\" | \"LEASE\" | \"LINUX_IMMUTABLE\" | \"MAC_ADMIN\" | \"MAC_OVERRIDE\" | \"MKNOD\" | \"NET_ADMIN\" | \"NET_BIND_SERVICE\" | \"NET_BROADCAST\" | \"NET_RAW\" | \"SETFCAP\" | \"SETGID\" | \"SETPCAP\" | \"SETUID\" | \"SYS_ADMIN\" | \"SYS_BOOT\" | \"SYS_CHROOT\" | \"SYS_MODULE\" | \"SYS_NICE\" | \"SYS_PACCT\" | \"SYS_PTRACE\" | \"SYS_RAWIO\" | \"SYS_RESOURCE\" | \"SYS_TIME\" | \"SYS_TTY_CONFIG\" | \"SYSLOG\" | \"WAKE_ALARM\"

" + }, + "drop":{ + "shape":"StringList", + "documentation":"

The Linux capabilities for the container that have been removed from the default configuration provided by Docker. This parameter maps to CapDrop in the Create a container section of the Docker Remote API and the --cap-drop option to docker run.

Valid values: \"ALL\" | \"AUDIT_CONTROL\" | \"AUDIT_WRITE\" | \"BLOCK_SUSPEND\" | \"CHOWN\" | \"DAC_OVERRIDE\" | \"DAC_READ_SEARCH\" | \"FOWNER\" | \"FSETID\" | \"IPC_LOCK\" | \"IPC_OWNER\" | \"KILL\" | \"LEASE\" | \"LINUX_IMMUTABLE\" | \"MAC_ADMIN\" | \"MAC_OVERRIDE\" | \"MKNOD\" | \"NET_ADMIN\" | \"NET_BIND_SERVICE\" | \"NET_BROADCAST\" | \"NET_RAW\" | \"SETFCAP\" | \"SETGID\" | \"SETPCAP\" | \"SETUID\" | \"SYS_ADMIN\" | \"SYS_BOOT\" | \"SYS_CHROOT\" | \"SYS_MODULE\" | \"SYS_NICE\" | \"SYS_PACCT\" | \"SYS_PTRACE\" | \"SYS_RAWIO\" | \"SYS_RESOURCE\" | \"SYS_TIME\" | \"SYS_TTY_CONFIG\" | \"SYSLOG\" | \"WAKE_ALARM\"

" + } + }, + "documentation":"

The Linux capabilities for the container that are added to or dropped from the default configuration provided by Docker. For more information on the default capabilities and the non-default available capabilities, see Runtime privilege and Linux capabilities in the Docker run reference. For more detailed information on these Linux capabilities, see the capabilities(7) Linux manual page.

" + }, "KeyValuePair":{ "type":"structure", "members":{ @@ -1358,6 +1525,24 @@ }, "documentation":"

A key and value pair object.

" }, + "LinuxParameters":{ + "type":"structure", + "members":{ + "capabilities":{ + "shape":"KernelCapabilities", + "documentation":"

The Linux capabilities for the container that are added to or dropped from the default configuration provided by Docker.

" + }, + "devices":{ + "shape":"DevicesList", + "documentation":"

Any host devices to expose to the container. This parameter maps to Devices in the Create a container section of the Docker Remote API and the --device option to docker run.

" + }, + "initProcessEnabled":{ + "shape":"BoxedBoolean", + "documentation":"

Run an init process inside the container that forwards signals and reaps processes. This parameter maps to the --init option to docker run. This parameter requires version 1.25 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log into your container instance and run the following command: sudo docker version | grep \"Server API version\"

" + } + }, + "documentation":"

Linux-specific options that are applied to the container, such as Linux KernelCapabilities.
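A sketch of a container definition fragment exercising these Linux-specific options, using boto3 for illustration. The capability names come from the valid values listed above; the family name and device path are placeholders.

import boto3

ecs = boto3.client("ecs", region_name="us-west-2")

ecs.register_task_definition(
    family="linux-params-demo",                        # hypothetical family name
    containerDefinitions=[{
        "name": "app",
        "image": "amazonlinux",
        "memory": 256,
        "essential": True,
        "linuxParameters": {
            "capabilities": {
                "add": ["SYS_PTRACE"],                 # maps to docker run --cap-add
                "drop": ["MKNOD"],                     # maps to docker run --cap-drop
            },
            "devices": [{
                "hostPath": "/dev/fuse",               # placeholder host device
                "containerPath": "/dev/fuse",
                "permissions": ["read", "write"],
            }],
            "initProcessEnabled": True,                # maps to docker run --init
        },
    }],
)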

" + }, "ListAttributesRequest":{ "type":"structure", "required":["targetType"], @@ -1478,7 +1663,7 @@ }, "maxResults":{ "shape":"BoxedInteger", - "documentation":"

The maximum number of container instance results returned by ListServices in paginated output. When this parameter is used, ListServices only returns maxResults results in a single page along with a nextToken response element. The remaining results of the initial request can be seen by sending another ListServices request with the returned nextToken value. This value can be between 1 and 10. If this parameter is not used, then ListServices returns up to 10 results and a nextToken value if applicable.

" + "documentation":"

The maximum number of service results returned by ListServices in paginated output. When this parameter is used, ListServices only returns maxResults results in a single page along with a nextToken response element. The remaining results of the initial request can be seen by sending another ListServices request with the returned nextToken value. This value can be between 1 and 10. If this parameter is not used, then ListServices returns up to 10 results and a nextToken value if applicable.

" } } }, @@ -1626,7 +1811,7 @@ }, "loadBalancerName":{ "shape":"String", - "documentation":"

The name of a Classic load balancer.

" + "documentation":"

The name of a load balancer.

" }, "containerName":{ "shape":"String", @@ -1731,11 +1916,44 @@ "type":"list", "member":{"shape":"NetworkBinding"} }, + "NetworkConfiguration":{ + "type":"structure", + "members":{ + "awsvpcConfiguration":{ + "shape":"AwsVpcConfiguration", + "documentation":"

The VPC subnets and security groups associated with a task.

" + } + }, + "documentation":"

An object representing the network configuration for a task or service.

" + }, + "NetworkInterface":{ + "type":"structure", + "members":{ + "attachmentId":{ + "shape":"String", + "documentation":"

The attachment ID for the network interface.

" + }, + "privateIpv4Address":{ + "shape":"String", + "documentation":"

The private IPv4 address for the network interface.

" + }, + "ipv6Address":{ + "shape":"String", + "documentation":"

The private IPv6 address for the network interface.

" + } + }, + "documentation":"

An object representing the Elastic Network Interface for tasks that use the awsvpc network mode.

" + }, + "NetworkInterfaces":{ + "type":"list", + "member":{"shape":"NetworkInterface"} + }, "NetworkMode":{ "type":"string", "enum":[ "bridge", "host", + "awsvpc", "none" ] }, @@ -1901,7 +2119,7 @@ }, "networkMode":{ "shape":"NetworkMode", - "documentation":"

The Docker networking mode to use for the containers in the task. The valid values are none, bridge, and host.

The default Docker network mode is bridge. If the network mode is set to none, you cannot specify port mappings in your container definitions, and the task's containers do not have external connectivity. The host network mode offers the highest networking performance for containers because they use the host network stack instead of the virtualized network stack provided by the bridge mode; however, exposed container ports are mapped directly to the corresponding host port, so you cannot take advantage of dynamic host port mappings or run multiple instantiations of the same task on a single container instance if port mappings are used.

For more information, see Network settings in the Docker run reference.

" + "documentation":"

The Docker networking mode to use for the containers in the task. The valid values are none, bridge, awsvpc, and host. The default Docker network mode is bridge. If the network mode is set to none, you cannot specify port mappings in your container definitions, and the task's containers do not have external connectivity. The host and awsvpc network modes offer the highest networking performance for containers because they use the EC2 network stack instead of the virtualized network stack provided by the bridge mode.

With the host and awsvpc network modes, exposed container ports are mapped directly to the corresponding host port (for the host network mode) or the attached ENI port (for the awsvpc network mode), so you cannot take advantage of dynamic host port mappings.

If the network mode is awsvpc, the task is allocated an Elastic Network Interface, and you must specify a NetworkConfiguration when you create a service or run a task with the task definition. For more information, see Task Networking in the Amazon EC2 Container Service Developer Guide.

If the network mode is host, you cannot run multiple instantiations of the same task on a single container instance when port mappings are used.

For more information, see Network settings in the Docker run reference.
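For illustration, a boto3 sketch of registering a task definition with the awsvpc network mode and then running it with the required NetworkConfiguration. Subnet and security group IDs are placeholders, and the AwsVpcConfiguration member names are assumed from the corresponding shape.

import boto3

ecs = boto3.client("ecs", region_name="us-west-2")

ecs.register_task_definition(
    family="awsvpc-demo",
    networkMode="awsvpc",                              # each task gets its own ENI
    containerDefinitions=[{
        "name": "web",
        "image": "nginx",
        "memory": 128,
        "essential": True,
        "portMappings": [{"containerPort": 80}],       # no dynamic host ports with awsvpc
    }],
)

# Because the task definition uses awsvpc, RunTask must include a
# networkConfiguration describing the VPC subnets and security groups.
ecs.run_task(
    cluster="default",
    taskDefinition="awsvpc-demo",
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-12345678"],            # placeholder subnet ID
            "securityGroups": ["sg-12345678"],         # placeholder security group ID
        }
    },
)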

" }, "containerDefinitions":{ "shape":"ContainerDefinitions", @@ -1999,6 +2217,10 @@ "placementStrategy":{ "shape":"PlacementStrategies", "documentation":"

The placement strategy objects to use for the task. You can specify a maximum of 5 strategy rules per task.

" + }, + "networkConfiguration":{ + "shape":"NetworkConfiguration", + "documentation":"

The network configuration for the task. This parameter is required for task definitions that use the awsvpc network mode to receive their own Elastic Network Interface, and it is not supported for other network modes. For more information, see Task Networking in the Amazon EC2 Container Service Developer Guide.

" } } }, @@ -2090,6 +2312,10 @@ "placementStrategy":{ "shape":"PlacementStrategies", "documentation":"

The placement strategy that determines how tasks for the service are placed.

" + }, + "networkConfiguration":{ + "shape":"NetworkConfiguration", + "documentation":"

The VPC subnet and security group configuration for tasks that receive their own Elastic Network Interface by using the awsvpc networking mode.

" } }, "documentation":"

Details on a service within a cluster

" @@ -2171,6 +2397,10 @@ "group":{ "shape":"String", "documentation":"

The name of the task group to associate with the task. The default value is the family name of the task definition (for example, family:my-family-name).

" + }, + "networkConfiguration":{ + "shape":"NetworkConfiguration", + "documentation":"

The VPC subnet and security group configuration for tasks that receive their own Elastic Network Interface by using the awsvpc networking mode.

" } } }, @@ -2279,6 +2509,14 @@ "reason":{ "shape":"String", "documentation":"

The reason for the state change request.

" + }, + "containers":{ + "shape":"ContainerStateChanges", + "documentation":"

Any containers associated with the state change request.

" + }, + "attachments":{ + "shape":"AttachmentStateChanges", + "documentation":"

Any attachments associated with the state change request.

" } } }, @@ -2364,6 +2602,10 @@ "group":{ "shape":"String", "documentation":"

The name of the task group associated with the task.

" + }, + "attachments":{ + "shape":"Attachments", + "documentation":"

The Elastic Network Interfaces associated with the task if the task uses the awsvpc network mode.

" } }, "documentation":"

Details on a task in a cluster.

" @@ -2389,7 +2631,7 @@ }, "networkMode":{ "shape":"NetworkMode", - "documentation":"

The Docker networking mode to use for the containers in the task. The valid values are none, bridge, and host.

If the network mode is none, the containers do not have external connectivity. The default Docker network mode is bridge. The host network mode offers the highest networking performance for containers because it uses the host network stack instead of the virtualized network stack provided by the bridge mode.

For more information, see Network settings in the Docker run reference.

" + "documentation":"

The Docker networking mode to use for the containers in the task. The valid values are none, bridge, awsvpc, and host.

If the network mode is none, the containers do not have external connectivity. The default Docker network mode is bridge. If the network mode is awsvpc, the task is allocated an Elastic Network Interface. The host and awsvpc network modes offer the highest networking performance for containers because they use the EC2 network stack instead of the virtualized network stack provided by the bridge mode.

For more information, see Network settings in the Docker run reference.

" }, "revision":{ "shape":"Integer", @@ -2611,6 +2853,10 @@ "deploymentConfiguration":{ "shape":"DeploymentConfiguration", "documentation":"

Optional deployment parameters that control how many tasks run during the deployment and the ordering of stopping and starting tasks.

" + }, + "networkConfiguration":{ + "shape":"NetworkConfiguration", + "documentation":"

The network configuration for the service. This parameter is required for task definitions that use the awsvpc network mode to receive their own Elastic Network Interface, and it is not supported for other network modes. For more information, see Task Networking in the Amazon EC2 Container Service Developer Guide.

Updating a service only to add a subnet to the existing list of subnets (that is, keeping the current subnets and simply adding another) does not trigger a new service deployment.

" } } }, diff --git a/services/efs/src/main/resources/codegen-resources/service-2.json b/services/efs/src/main/resources/codegen-resources/service-2.json index 6b13e1ba7b4c..23ea720458ad 100644 --- a/services/efs/src/main/resources/codegen-resources/service-2.json +++ b/services/efs/src/main/resources/codegen-resources/service-2.json @@ -223,6 +223,14 @@ "PerformanceMode":{ "shape":"PerformanceMode", "documentation":"

The PerformanceMode of the file system. We recommend generalPurpose performance mode for most file systems. File systems using the maxIO performance mode can scale to higher levels of aggregate throughput and operations per second with a tradeoff of slightly higher latencies for most file operations. This can't be changed after the file system has been created.

" + }, + "Encrypted":{ + "shape":"Encrypted", + "documentation":"

A boolean value that, if true, creates an encrypted file system. When creating an encrypted file system, you have the option of specifying a CreateFileSystemRequest$KmsKeyId for an existing AWS Key Management Service (AWS KMS) customer master key (CMK). If you don't specify a CMK, then the default CMK for Amazon EFS, /aws/elasticfilesystem, is used to protect the encrypted file system.

" + }, + "KmsKeyId":{ + "shape":"KmsKeyId", + "documentation":"

The ID of the AWS KMS CMK that is used to protect the encrypted file system. This parameter is required only if you want to use a non-default CMK. If this parameter is not specified, the default CMK for Amazon EFS is used. This ID can be in one of the following formats:

Note that if the KmsKeyId is specified, the CreateFileSystemRequest$Encrypted parameter must be set to true.
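A minimal sketch of creating an encrypted file system, using boto3 for illustration. The creation token and KMS key ID are placeholders; omitting KmsKeyId falls back to the default /aws/elasticfilesystem CMK.

import boto3

efs = boto3.client("efs", region_name="us-west-2")

efs.create_file_system(
    CreationToken="my-encrypted-fs",                   # placeholder idempotency token
    PerformanceMode="generalPurpose",
    Encrypted=True,
    KmsKeyId="1234abcd-12ab-34cd-56ef-1234567890ab",   # placeholder CMK ID
)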

" } } }, @@ -496,6 +504,7 @@ }, "documentation":"

" }, + "Encrypted":{"type":"boolean"}, "ErrorCode":{ "type":"string", "min":1 @@ -564,6 +573,14 @@ "PerformanceMode":{ "shape":"PerformanceMode", "documentation":"

The PerformanceMode of the file system.

" + }, + "Encrypted":{ + "shape":"Encrypted", + "documentation":"

A boolean value that, if true, indicates that the file system is encrypted.

" + }, + "KmsKeyId":{ + "shape":"KmsKeyId", + "documentation":"

The ID of an AWS Key Management Service (AWS KMS) customer master key (CMK) that was used to protect the encrypted file system.

" } }, "documentation":"

Description of the file system.

" @@ -670,6 +687,11 @@ "error":{"httpStatusCode":409}, "exception":true }, + "KmsKeyId":{ + "type":"string", + "max":2048, + "min":1 + }, "LifeCycleState":{ "type":"string", "enum":[ @@ -885,7 +907,7 @@ "ErrorCode":{"shape":"ErrorCode"}, "Message":{"shape":"ErrorMessage"} }, - "documentation":"

", + "documentation":"

", "error":{"httpStatusCode":400}, "exception":true } diff --git a/services/elasticache/src/main/resources/codegen-resources/examples-1.json b/services/elasticache/src/main/resources/codegen-resources/examples-1.json index 81eb013fb37c..f1d21bd7ff6f 100644 --- a/services/elasticache/src/main/resources/codegen-resources/examples-1.json +++ b/services/elasticache/src/main/resources/codegen-resources/examples-1.json @@ -448,31 +448,33 @@ "SnapshotName": "snapshot-2" }, "output": { - "AutoMinorVersionUpgrade": true, - "CacheClusterCreateTime": "2017-02-03T15:43:36.278Z", - "CacheClusterId": "threenoderedis-001", - "CacheNodeType": "cache.m3.medium", - "CacheParameterGroupName": "default.redis3.2", - "CacheSubnetGroupName": "default", - "Engine": "redis", - "EngineVersion": "3.2.4", - "NodeSnapshots": [ - { - "CacheNodeCreateTime": "2017-02-03T15:43:36.278Z", - "CacheNodeId": "0001", - "CacheSize": "" - } - ], - "NumCacheNodes": 1, - "Port": 6379, - "PreferredAvailabilityZone": "us-west-2c", - "PreferredMaintenanceWindow": "sat:08:00-sat:09:00", - "SnapshotName": "snapshot-2", - "SnapshotRetentionLimit": 1, - "SnapshotSource": "manual", - "SnapshotStatus": "creating", - "SnapshotWindow": "00:00-01:00", - "VpcId": "vpc-73c3cd17" + "Snapshot": { + "AutoMinorVersionUpgrade": true, + "CacheClusterCreateTime": "2017-02-03T15:43:36.278Z", + "CacheClusterId": "threenoderedis-001", + "CacheNodeType": "cache.m3.medium", + "CacheParameterGroupName": "default.redis3.2", + "CacheSubnetGroupName": "default", + "Engine": "redis", + "EngineVersion": "3.2.4", + "NodeSnapshots": [ + { + "CacheNodeCreateTime": "2017-02-03T15:43:36.278Z", + "CacheNodeId": "0001", + "CacheSize": "" + } + ], + "NumCacheNodes": 1, + "Port": 6379, + "PreferredAvailabilityZone": "us-west-2c", + "PreferredMaintenanceWindow": "sat:08:00-sat:09:00", + "SnapshotName": "snapshot-2", + "SnapshotRetentionLimit": 1, + "SnapshotSource": "manual", + "SnapshotStatus": "creating", + "SnapshotWindow": "00:00-01:00", + "VpcId": "vpc-73c3cd17" + } }, "comments": { "input": { @@ -490,34 +492,36 @@ "SnapshotName": "snapshot-2x5" }, "output": { - "AutoMinorVersionUpgrade": true, - "AutomaticFailover": "enabled", - "CacheNodeType": "cache.m3.medium", - "CacheParameterGroupName": "default.redis3.2.cluster.on", - "CacheSubnetGroupName": "default", - "Engine": "redis", - "EngineVersion": "3.2.4", - "NodeSnapshots": [ - { - "CacheSize": "", - "NodeGroupId": "0001" - }, - { - "CacheSize": "", - "NodeGroupId": "0002" - } - ], - "NumNodeGroups": 2, - "Port": 6379, - "PreferredMaintenanceWindow": "mon:09:30-mon:10:30", - "ReplicationGroupDescription": "Redis cluster with 2 shards.", - "ReplicationGroupId": "clusteredredis", - "SnapshotName": "snapshot-2x5", - "SnapshotRetentionLimit": 1, - "SnapshotSource": "manual", - "SnapshotStatus": "creating", - "SnapshotWindow": "12:00-13:00", - "VpcId": "vpc-73c3cd17" + "Snapshot": { + "AutoMinorVersionUpgrade": true, + "AutomaticFailover": "enabled", + "CacheNodeType": "cache.m3.medium", + "CacheParameterGroupName": "default.redis3.2.cluster.on", + "CacheSubnetGroupName": "default", + "Engine": "redis", + "EngineVersion": "3.2.4", + "NodeSnapshots": [ + { + "CacheSize": "", + "NodeGroupId": "0001" + }, + { + "CacheSize": "", + "NodeGroupId": "0002" + } + ], + "NumNodeGroups": 2, + "Port": 6379, + "PreferredMaintenanceWindow": "mon:09:30-mon:10:30", + "ReplicationGroupDescription": "Redis cluster with 2 shards.", + "ReplicationGroupId": "clusteredredis", + "SnapshotName": "snapshot-2x5", + "SnapshotRetentionLimit": 1, + 
"SnapshotSource": "manual", + "SnapshotStatus": "creating", + "SnapshotWindow": "12:00-13:00", + "VpcId": "vpc-73c3cd17" + } }, "comments": { "input": { diff --git a/services/elasticache/src/main/resources/codegen-resources/service-2.json b/services/elasticache/src/main/resources/codegen-resources/service-2.json index bbcccc56f8a9..92c48b2882f0 100644 --- a/services/elasticache/src/main/resources/codegen-resources/service-2.json +++ b/services/elasticache/src/main/resources/codegen-resources/service-2.json @@ -97,7 +97,7 @@ {"shape":"InvalidParameterValueException"}, {"shape":"InvalidParameterCombinationException"} ], - "documentation":"

Creates a cache cluster. All nodes in the cache cluster run the same protocol-compliant cache engine software, either Memcached or Redis.

Due to current limitations on Redis (cluster mode disabled), this operation or parameter is not supported on Redis (cluster mode enabled) replication groups.

" + "documentation":"

Creates a cluster. All nodes in the cluster run the same protocol-compliant cache engine software, either Memcached or Redis.

Due to current limitations on Redis (cluster mode disabled), this operation or parameter is not supported on Redis (cluster mode enabled) replication groups.

" }, "CreateCacheParameterGroup":{ "name":"CreateCacheParameterGroup", @@ -117,7 +117,7 @@ {"shape":"InvalidParameterValueException"}, {"shape":"InvalidParameterCombinationException"} ], - "documentation":"

Creates a new Amazon ElastiCache cache parameter group. An ElastiCache cache parameter group is a collection of parameters and their values that are applied to all of the nodes in any cache cluster or replication group using the CacheParameterGroup.

A newly created CacheParameterGroup is an exact duplicate of the default parameter group for the CacheParameterGroupFamily. To customize the newly created CacheParameterGroup you can change the values of specific parameters. For more information, see:

" + "documentation":"

Creates a new Amazon ElastiCache cache parameter group. An ElastiCache cache parameter group is a collection of parameters and their values that are applied to all of the nodes in any cluster or replication group using the CacheParameterGroup.

A newly created CacheParameterGroup is an exact duplicate of the default parameter group for the CacheParameterGroupFamily. To customize the newly created CacheParameterGroup you can change the values of specific parameters. For more information, see:

" }, "CreateCacheSecurityGroup":{ "name":"CreateCacheSecurityGroup", @@ -136,7 +136,7 @@ {"shape":"InvalidParameterValueException"}, {"shape":"InvalidParameterCombinationException"} ], - "documentation":"

Creates a new cache security group. Use a cache security group to control access to one or more cache clusters.

Cache security groups are only used when you are creating a cache cluster outside of an Amazon Virtual Private Cloud (Amazon VPC). If you are creating a cache cluster inside of a VPC, use a cache subnet group instead. For more information, see CreateCacheSubnetGroup.

" + "documentation":"

Creates a new cache security group. Use a cache security group to control access to one or more clusters.

Cache security groups are only used when you are creating a cluster outside of an Amazon Virtual Private Cloud (Amazon VPC). If you are creating a cluster inside of a VPC, use a cache subnet group instead. For more information, see CreateCacheSubnetGroup.

" }, "CreateCacheSubnetGroup":{ "name":"CreateCacheSubnetGroup", @@ -185,7 +185,7 @@ {"shape":"InvalidParameterValueException"}, {"shape":"InvalidParameterCombinationException"} ], - "documentation":"

Creates a Redis (cluster mode disabled) or a Redis (cluster mode enabled) replication group.

A Redis (cluster mode disabled) replication group is a collection of cache clusters, where one of the cache clusters is a read/write primary and the others are read-only replicas. Writes to the primary are asynchronously propagated to the replicas.

A Redis (cluster mode enabled) replication group is a collection of 1 to 15 node groups (shards). Each node group (shard) has one read/write primary node and up to 5 read-only replica nodes. Writes to the primary are asynchronously propagated to the replicas. Redis (cluster mode enabled) replication groups partition the data across node groups (shards).

When a Redis (cluster mode disabled) replication group has been successfully created, you can add one or more read replicas to it, up to a total of 5 read replicas. You cannot alter a Redis (cluster mode enabled) replication group after it has been created. However, if you need to increase or decrease the number of node groups (console: shards), you can avail yourself of ElastiCache for Redis' enhanced backup and restore. For more information, see Restoring From a Backup with Cluster Resizing in the ElastiCache User Guide.

This operation is valid for Redis only.

" + "documentation":"

Creates a Redis (cluster mode disabled) or a Redis (cluster mode enabled) replication group.

A Redis (cluster mode disabled) replication group is a collection of clusters, where one of the clusters is a read/write primary and the others are read-only replicas. Writes to the primary are asynchronously propagated to the replicas.

A Redis (cluster mode enabled) replication group is a collection of 1 to 15 node groups (shards). Each node group (shard) has one read/write primary node and up to 5 read-only replica nodes. Writes to the primary are asynchronously propagated to the replicas. Redis (cluster mode enabled) replication groups partition the data across node groups (shards).

When a Redis (cluster mode disabled) replication group has been successfully created, you can add one or more read replicas to it, up to a total of 5 read replicas. You cannot alter a Redis (cluster mode enabled) replication group after it has been created. However, if you need to increase or decrease the number of node groups (console: shards), you can use ElastiCache for Redis' enhanced backup and restore. For more information, see Restoring From a Backup with Cluster Resizing in the ElastiCache User Guide.

This operation is valid for Redis only.
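A sketch of creating a Redis (cluster mode enabled) replication group with two node groups, written against boto3 for illustration. The NumNodeGroups and ReplicasPerNodeGroup parameters are assumed from the full CreateReplicationGroup message, which is not shown in this hunk; the identifiers and node type echo the example output elsewhere in this diff.

import boto3

elasticache = boto3.client("elasticache", region_name="us-west-2")

elasticache.create_replication_group(
    ReplicationGroupId="clusteredredis",               # placeholder replication group ID
    ReplicationGroupDescription="Redis cluster with 2 shards.",
    Engine="redis",
    CacheNodeType="cache.m3.medium",
    CacheParameterGroupName="default.redis3.2.cluster.on",
    NumNodeGroups=2,                                   # number of shards
    ReplicasPerNodeGroup=1,                            # read replicas per shard
)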

" }, "CreateSnapshot":{ "name":"CreateSnapshot", @@ -209,7 +209,7 @@ {"shape":"InvalidParameterCombinationException"}, {"shape":"InvalidParameterValueException"} ], - "documentation":"

Creates a copy of an entire cache cluster or replication group at a specific moment in time.

This operation is valid for Redis only.

" + "documentation":"

Creates a copy of an entire cluster or replication group at a specific moment in time.

This operation is valid for Redis only.

" }, "DeleteCacheCluster":{ "name":"DeleteCacheCluster", @@ -231,7 +231,7 @@ {"shape":"InvalidParameterValueException"}, {"shape":"InvalidParameterCombinationException"} ], - "documentation":"

Deletes a previously provisioned cache cluster. DeleteCacheCluster deletes all associated cache nodes, node endpoints and the cache cluster itself. When you receive a successful response from this operation, Amazon ElastiCache immediately begins deleting the cache cluster; you cannot cancel or revert this operation.

This operation cannot be used to delete a cache cluster that is the last read replica of a replication group or node group (shard) that has Multi-AZ mode enabled or a cache cluster from a Redis (cluster mode enabled) replication group.

Due to current limitations on Redis (cluster mode disabled), this operation or parameter is not supported on Redis (cluster mode enabled) replication groups.

" + "documentation":"

Deletes a previously provisioned cluster. DeleteCacheCluster deletes all associated cache nodes, node endpoints and the cluster itself. When you receive a successful response from this operation, Amazon ElastiCache immediately begins deleting the cluster; you cannot cancel or revert this operation.

This operation cannot be used to delete a cluster that is the last read replica of a replication group or node group (shard) that has Multi-AZ mode enabled or a cluster from a Redis (cluster mode enabled) replication group.

Due to current limitations on Redis (cluster mode disabled), this operation or parameter is not supported on Redis (cluster mode enabled) replication groups.

" }, "DeleteCacheParameterGroup":{ "name":"DeleteCacheParameterGroup", @@ -261,7 +261,7 @@ {"shape":"InvalidParameterValueException"}, {"shape":"InvalidParameterCombinationException"} ], - "documentation":"

Deletes a cache security group.

You cannot delete a cache security group if it is associated with any cache clusters.

" + "documentation":"

Deletes a cache security group.

You cannot delete a cache security group if it is associated with any clusters.

" }, "DeleteCacheSubnetGroup":{ "name":"DeleteCacheSubnetGroup", @@ -274,7 +274,7 @@ {"shape":"CacheSubnetGroupInUse"}, {"shape":"CacheSubnetGroupNotFoundFault"} ], - "documentation":"

Deletes a cache subnet group.

You cannot delete a cache subnet group if it is associated with any cache clusters.

" + "documentation":"

Deletes a cache subnet group.

You cannot delete a cache subnet group if it is associated with any clusters.

" }, "DeleteReplicationGroup":{ "name":"DeleteReplicationGroup", @@ -333,7 +333,7 @@ {"shape":"InvalidParameterValueException"}, {"shape":"InvalidParameterCombinationException"} ], - "documentation":"

Returns information about all provisioned cache clusters if no cache cluster identifier is specified, or about a specific cache cluster if a cache cluster identifier is supplied.

By default, abbreviated information about the cache clusters is returned. You can use the optional ShowCacheNodeInfo flag to retrieve detailed information about the cache nodes associated with the cache clusters. These details include the DNS address and port for the cache node endpoint.

If the cluster is in the creating state, only cluster-level information is displayed until all of the nodes are successfully provisioned.

If the cluster is in the deleting state, only cluster-level information is displayed.

If cache nodes are currently being added to the cache cluster, node endpoint information and creation time for the additional nodes are not displayed until they are completely provisioned. When the cache cluster state is available, the cluster is ready for use.

If cache nodes are currently being removed from the cache cluster, no endpoint information for the removed nodes is displayed.

" + "documentation":"

Returns information about all provisioned clusters if no cluster identifier is specified, or about a specific cluster if a cluster identifier is supplied.

By default, abbreviated information about the clusters is returned. You can use the optional ShowCacheNodeInfo flag to retrieve detailed information about the cache nodes associated with the clusters. These details include the DNS address and port for the cache node endpoint.

If the cluster is in the creating state, only cluster-level information is displayed until all of the nodes are successfully provisioned.

If the cluster is in the deleting state, only cluster-level information is displayed.

If cache nodes are currently being added to the cluster, node endpoint information and creation time for the additional nodes are not displayed until they are completely provisioned. When the cluster state is available, the cluster is ready for use.

If cache nodes are currently being removed from the cluster, no endpoint information for the removed nodes is displayed.
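For illustration, a boto3 sketch that retrieves the detailed node-level view described above; the cluster ID is a placeholder taken from the example output elsewhere in this diff, and omitting it lists all clusters.

import boto3

elasticache = boto3.client("elasticache", region_name="us-west-2")

# ShowCacheNodeInfo adds per-node endpoint details to the response.
response = elasticache.describe_cache_clusters(
    CacheClusterId="threenoderedis-001",               # placeholder cluster ID
    ShowCacheNodeInfo=True,
)
for cluster in response["CacheClusters"]:
    for node in cluster.get("CacheNodes", []):
        print(node["CacheNodeId"], node["Endpoint"]["Address"], node["Endpoint"]["Port"])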

" }, "DescribeCacheEngineVersions":{ "name":"DescribeCacheEngineVersions", @@ -450,7 +450,7 @@ {"shape":"InvalidParameterValueException"}, {"shape":"InvalidParameterCombinationException"} ], - "documentation":"

Returns events related to cache clusters, cache security groups, and cache parameter groups. You can obtain events specific to a particular cache cluster, cache security group, or cache parameter group by providing the name as a parameter.

By default, only the events occurring within the last hour are returned; however, you can retrieve up to 14 days' worth of events if necessary.

" + "documentation":"

Returns events related to clusters, cache security groups, and cache parameter groups. You can obtain events specific to a particular cluster, cache security group, or cache parameter group by providing the name as a parameter.

By default, only the events occurring within the last hour are returned; however, you can retrieve up to 14 days' worth of events if necessary.

" }, "DescribeReplicationGroups":{ "name":"DescribeReplicationGroups", @@ -523,7 +523,7 @@ {"shape":"InvalidParameterValueException"}, {"shape":"InvalidParameterCombinationException"} ], - "documentation":"

Returns information about cache cluster or replication group snapshots. By default, DescribeSnapshots lists all of your snapshots; it can optionally describe a single snapshot, or just the snapshots associated with a particular cache cluster.

This operation is valid for Redis only.

" + "documentation":"

Returns information about cluster or replication group snapshots. By default, DescribeSnapshots lists all of your snapshots; it can optionally describe a single snapshot, or just the snapshots associated with a particular cluster.

This operation is valid for Redis only.

" }, "ListAllowedNodeTypeModifications":{ "name":"ListAllowedNodeTypeModifications", @@ -586,7 +586,7 @@ {"shape":"InvalidParameterValueException"}, {"shape":"InvalidParameterCombinationException"} ], - "documentation":"

Modifies the settings for a cache cluster. You can use this operation to change one or more cluster configuration parameters by specifying the parameters and the new values.

" + "documentation":"

Modifies the settings for a cluster. You can use this operation to change one or more cluster configuration parameters by specifying the parameters and the new values.

" }, "ModifyCacheParameterGroup":{ "name":"ModifyCacheParameterGroup", @@ -654,6 +654,30 @@ ], "documentation":"

Modifies the settings for a replication group.

Due to current limitations on Redis (cluster mode disabled), this operation or parameter is not supported on Redis (cluster mode enabled) replication groups.

This operation is valid for Redis only.

" }, + "ModifyReplicationGroupShardConfiguration":{ + "name":"ModifyReplicationGroupShardConfiguration", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"ModifyReplicationGroupShardConfigurationMessage"}, + "output":{ + "shape":"ModifyReplicationGroupShardConfigurationResult", + "resultWrapper":"ModifyReplicationGroupShardConfigurationResult" + }, + "errors":[ + {"shape":"ReplicationGroupNotFoundFault"}, + {"shape":"InvalidReplicationGroupStateFault"}, + {"shape":"InvalidCacheClusterStateFault"}, + {"shape":"InvalidVPCNetworkStateFault"}, + {"shape":"InsufficientCacheClusterCapacityFault"}, + {"shape":"NodeGroupsPerReplicationGroupQuotaExceededFault"}, + {"shape":"NodeQuotaForCustomerExceededFault"}, + {"shape":"InvalidParameterValueException"}, + {"shape":"InvalidParameterCombinationException"} + ], + "documentation":"

Performs horizontal scaling on a Redis (cluster mode enabled) cluster with no downtime. Requires Redis engine version 3.2.10 or newer. For information on upgrading your engine to a newer version, see Upgrading Engine Versions in the Amazon ElastiCache User Guide.

For more information on ElastiCache for Redis online horizontal scaling, see ElastiCache for Redis Horizontal Scaling.
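A sketch of the online resharding call, using boto3 for illustration. The ReplicationGroupId, NodeGroupCount, and ApplyImmediately parameters are assumed from the full request message, which is not shown in this hunk.

import boto3

elasticache = boto3.client("elasticache", region_name="us-west-2")

# Scale a Redis (cluster mode enabled) replication group out to 3 shards
# with no downtime (requires engine version 3.2.10 or newer).
elasticache.modify_replication_group_shard_configuration(
    ReplicationGroupId="clusteredredis",               # placeholder replication group ID
    NodeGroupCount=3,
    ApplyImmediately=True,
)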

" + }, "PurchaseReservedCacheNodesOffering":{ "name":"PurchaseReservedCacheNodesOffering", "http":{ @@ -689,7 +713,7 @@ {"shape":"InvalidCacheClusterStateFault"}, {"shape":"CacheClusterNotFoundFault"} ], - "documentation":"

Reboots some, or all, of the cache nodes within a provisioned cache cluster. This operation applies any modified cache parameter groups to the cache cluster. The reboot operation takes place as soon as possible, and results in a momentary outage to the cache cluster. During the reboot, the cache cluster status is set to REBOOTING.

The reboot causes the contents of the cache (for each cache node being rebooted) to be lost.

When the reboot is complete, a cache cluster event is created.

" + "documentation":"

Reboots some, or all, of the cache nodes within a provisioned cluster. This operation applies any modified cache parameter groups to the cluster. The reboot operation takes place as soon as possible, and results in a momentary outage to the cluster. During the reboot, the cluster status is set to REBOOTING.

The reboot causes the contents of the cache (for each cache node being rebooted) to be lost.

When the reboot is complete, a cluster event is created.

Rebooting a cluster is currently supported on Memcached and Redis (cluster mode disabled) clusters. Rebooting is not supported on Redis (cluster mode enabled) clusters.

If you make changes to parameters that require a Redis (cluster mode enabled) cluster reboot for the changes to be applied, see Rebooting a Cluster for an alternate process.

" }, "RemoveTagsFromResource":{ "name":"RemoveTagsFromResource", @@ -802,7 +826,7 @@ "members":{ "ResourceName":{ "shape":"String", - "documentation":"

The Amazon Resource Name (ARN) of the resource to which the tags are to be added, for example arn:aws:elasticache:us-west-2:0123456789:cluster:myCluster or arn:aws:elasticache:us-west-2:0123456789:snapshot:mySnapshot.

For more information about ARNs, see Amazon Resource Names (ARNs) and AWS Service Namespaces.

" + "documentation":"

The Amazon Resource Name (ARN) of the resource to which the tags are to be added, for example arn:aws:elasticache:us-west-2:0123456789:cluster:myCluster or arn:aws:elasticache:us-west-2:0123456789:snapshot:mySnapshot. ElastiCache resources are cluster and snapshot.

For more information about ARNs, see Amazon Resource Names (ARNs) and AWS Service Namespaces.
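For illustration, a boto3 sketch that tags a cluster using the ARN format shown above; the tag key and value are hypothetical.

import boto3

elasticache = boto3.client("elasticache", region_name="us-west-2")

elasticache.add_tags_to_resource(
    ResourceName="arn:aws:elasticache:us-west-2:0123456789:cluster:myCluster",
    Tags=[{"Key": "project", "Value": "demo"}],        # hypothetical tag
)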

" }, "Tags":{ "shape":"TagList", @@ -816,10 +840,10 @@ "members":{ "ScaleUpModifications":{ "shape":"NodeTypeList", - "documentation":"

A string list, each element of which specifies a cache node type which you can use to scale your cache cluster or replication group.

When scaling up a Redis cluster or replication group using ModifyCacheCluster or ModifyReplicationGroup, use a value from this list for the CacheNodeType parameter.

" + "documentation":"

A string list, each element of which specifies a cache node type that you can use to scale your cluster or replication group.

When scaling up a Redis cluster or replication group using ModifyCacheCluster or ModifyReplicationGroup, use a value from this list for the CacheNodeType parameter.

" } }, - "documentation":"

Represents the allowed node types you can use to modify your cache cluster or replication group.

" + "documentation":"

Represents the allowed node types you can use to modify your cluster or replication group.

" }, "AuthorizationAlreadyExistsFault":{ "type":"structure", @@ -891,7 +915,7 @@ "documentation":"

The name of the Availability Zone.

" } }, - "documentation":"

Describes an Availability Zone in which the cache cluster is launched.

", + "documentation":"

Describes an Availability Zone in which the cluster is launched.

", "wrapper":true }, "AvailabilityZonesList":{ @@ -909,7 +933,7 @@ "members":{ "CacheClusterId":{ "shape":"String", - "documentation":"

The user-supplied identifier of the cache cluster. This identifier is a unique key that identifies a cache cluster.

" + "documentation":"

The user-supplied identifier of the cluster. This identifier is a unique key that identifies a cluster.

" }, "ConfigurationEndpoint":{ "shape":"Endpoint", @@ -921,50 +945,56 @@ }, "CacheNodeType":{ "shape":"String", - "documentation":"

The name of the compute and memory capacity node type for the cache cluster.

Valid node types are as follows:

Notes:

For a complete listing of node types and specifications, see Amazon ElastiCache Product Features and Details and either Cache Node Type-Specific Parameters for Memcached or Cache Node Type-Specific Parameters for Redis.

" + "documentation":"

The name of the compute and memory capacity node type for the cluster.

The following node types are supported by ElastiCache. Generally speaking, the current generation types provide more memory and computational power at lower cost when compared to their equivalent previous generation counterparts.

Notes:

For a complete listing of node types and specifications, see Amazon ElastiCache Product Features and Details and either Cache Node Type-Specific Parameters for Memcached or Cache Node Type-Specific Parameters for Redis.

" }, "Engine":{ "shape":"String", - "documentation":"

The name of the cache engine (memcached or redis) to be used for this cache cluster.

" + "documentation":"

The name of the cache engine (memcached or redis) to be used for this cluster.

" }, "EngineVersion":{ "shape":"String", - "documentation":"

The version of the cache engine that is used in this cache cluster.

" + "documentation":"

The version of the cache engine that is used in this cluster.

" }, "CacheClusterStatus":{ "shape":"String", - "documentation":"

The current state of this cache cluster, one of the following values: available, creating, deleted, deleting, incompatible-network, modifying, rebooting cache cluster nodes, restore-failed, or snapshotting.

" + "documentation":"

The current state of this cluster, one of the following values: available, creating, deleted, deleting, incompatible-network, modifying, rebooting cluster nodes, restore-failed, or snapshotting.

" }, "NumCacheNodes":{ "shape":"IntegerOptional", - "documentation":"

The number of cache nodes in the cache cluster.

For clusters running Redis, this value must be 1. For clusters running Memcached, this value must be between 1 and 20.

" + "documentation":"

The number of cache nodes in the cluster.

For clusters running Redis, this value must be 1. For clusters running Memcached, this value must be between 1 and 20.

" }, "PreferredAvailabilityZone":{ "shape":"String", - "documentation":"

The name of the Availability Zone in which the cache cluster is located or \"Multiple\" if the cache nodes are located in different Availability Zones.

" + "documentation":"

The name of the Availability Zone in which the cluster is located or \"Multiple\" if the cache nodes are located in different Availability Zones.

" }, "CacheClusterCreateTime":{ "shape":"TStamp", - "documentation":"

The date and time when the cache cluster was created.

" + "documentation":"

The date and time when the cluster was created.

" }, "PreferredMaintenanceWindow":{ "shape":"String", "documentation":"

Specifies the weekly time range during which maintenance on the cluster is performed. It is specified as a range in the format ddd:hh24:mi-ddd:hh24:mi (24H Clock UTC). The minimum maintenance window is a 60 minute period.

Valid values for ddd are:

Example: sun:23:00-mon:01:30

" }, "PendingModifiedValues":{"shape":"PendingModifiedValues"}, - "NotificationConfiguration":{"shape":"NotificationConfiguration"}, + "NotificationConfiguration":{ + "shape":"NotificationConfiguration", + "documentation":"

Describes a notification topic and its status. Notification topics are used for publishing ElastiCache events to subscribers using Amazon Simple Notification Service (SNS).

" + }, "CacheSecurityGroups":{ "shape":"CacheSecurityGroupMembershipList", "documentation":"

A list of cache security group elements, composed of name and status sub-elements.

" }, - "CacheParameterGroup":{"shape":"CacheParameterGroupStatus"}, + "CacheParameterGroup":{ + "shape":"CacheParameterGroupStatus", + "documentation":"

Status of the cache parameter group.

" + }, "CacheSubnetGroupName":{ "shape":"String", - "documentation":"

The name of the cache subnet group associated with the cache cluster.

" + "documentation":"

The name of the cache subnet group associated with the cluster.

" }, "CacheNodes":{ "shape":"CacheNodeList", - "documentation":"

A list of cache nodes that are members of the cache cluster.

" + "documentation":"

A list of cache nodes that are members of the cluster.

" }, "AutoMinorVersionUpgrade":{ "shape":"Boolean", @@ -972,29 +1002,41 @@ }, "SecurityGroups":{ "shape":"SecurityGroupMembershipList", - "documentation":"

A list of VPC Security Groups associated with the cache cluster.

" + "documentation":"

A list of VPC Security Groups associated with the cluster.

" }, "ReplicationGroupId":{ "shape":"String", - "documentation":"

The replication group to which this cache cluster belongs. If this field is empty, the cache cluster is not associated with any replication group.

" + "documentation":"

The replication group to which this cluster belongs. If this field is empty, the cluster is not associated with any replication group.

" }, "SnapshotRetentionLimit":{ "shape":"IntegerOptional", - "documentation":"

The number of days for which ElastiCache retains automatic cache cluster snapshots before deleting them. For example, if you set SnapshotRetentionLimit to 5, a snapshot that was taken today is retained for 5 days before being deleted.

If the value of SnapshotRetentionLimit is set to zero (0), backups are turned off.

" + "documentation":"

The number of days for which ElastiCache retains automatic cluster snapshots before deleting them. For example, if you set SnapshotRetentionLimit to 5, a snapshot that was taken today is retained for 5 days before being deleted.

If the value of SnapshotRetentionLimit is set to zero (0), backups are turned off.

" }, "SnapshotWindow":{ "shape":"String", - "documentation":"

The daily time range (in UTC) during which ElastiCache begins taking a daily snapshot of your cache cluster.

Example: 05:00-09:00

" + "documentation":"

The daily time range (in UTC) during which ElastiCache begins taking a daily snapshot of your cluster.

Example: 05:00-09:00

" + }, + "AuthTokenEnabled":{ + "shape":"BooleanOptional", + "documentation":"

A flag that enables using an AuthToken (password) when issuing Redis commands.

Default: false

" + }, + "TransitEncryptionEnabled":{ + "shape":"BooleanOptional", + "documentation":"

A flag that enables in-transit encryption when set to true.

You cannot modify the value of TransitEncryptionEnabled after the cluster is created. To enable in-transit encryption on a cluster you must set TransitEncryptionEnabled to true when you create a cluster.

Default: false

" + }, + "AtRestEncryptionEnabled":{ + "shape":"BooleanOptional", + "documentation":"

A flag that enables encryption at-rest when set to true.

You cannot modify the value of AtRestEncryptionEnabled after the cluster is created. To enable at-rest encryption on a cluster you must set AtRestEncryptionEnabled to true when you create a cluster.

Default: false
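
Illustrative only (not part of the service model): a sketch of reading the AuthTokenEnabled, TransitEncryptionEnabled, and AtRestEncryptionEnabled attributes described above via DescribeCacheClusters, assuming the generated AWS SDK for Java 2.x client; the cluster id is a placeholder.

import software.amazon.awssdk.services.elasticache.ElastiCacheClient;
import software.amazon.awssdk.services.elasticache.model.CacheCluster;
import software.amazon.awssdk.services.elasticache.model.DescribeCacheClustersRequest;

public class DescribeSketch {
    public static void main(String[] args) {
        try (ElastiCacheClient elastiCache = ElastiCacheClient.create()) {
            for (CacheCluster c : elastiCache.describeCacheClusters(
                    DescribeCacheClustersRequest.builder().cacheClusterId("myCluster").build())
                    .cacheClusters()) {
                // Print the auth/encryption flags for each returned cluster.
                System.out.printf("%s auth=%s in-transit=%s at-rest=%s%n",
                        c.cacheClusterId(), c.authTokenEnabled(),
                        c.transitEncryptionEnabled(), c.atRestEncryptionEnabled());
            }
        }
    }
}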

" } }, - "documentation":"

Contains all of the attributes of a specific cache cluster.

", + "documentation":"

Contains all of the attributes of a specific cluster.

", "wrapper":true }, "CacheClusterAlreadyExistsFault":{ "type":"structure", "members":{ }, - "documentation":"

You already have a cache cluster with the given identifier.

", + "documentation":"

You already have a cluster with the given identifier.

", "error":{ "code":"CacheClusterAlreadyExists", "httpStatusCode":400, @@ -1018,7 +1060,7 @@ }, "CacheClusters":{ "shape":"CacheClusterList", - "documentation":"

A list of cache clusters. Each item in the list contains detailed information about one cache cluster.

" + "documentation":"

A list of clusters. Each item in the list contains detailed information about one cluster.

" } }, "documentation":"

Represents the output of a DescribeCacheClusters operation.

" @@ -1027,7 +1069,7 @@ "type":"structure", "members":{ }, - "documentation":"

The requested cache cluster ID does not refer to an existing cache cluster.

", + "documentation":"

The requested cluster ID does not refer to an existing cluster.

", "error":{ "code":"CacheClusterNotFound", "httpStatusCode":404, @@ -1107,14 +1149,14 @@ }, "SourceCacheNodeId":{ "shape":"String", - "documentation":"

The ID of the primary node to which this read replica node is synchronized. If this field is empty, this node is not associated with a primary cache cluster.

" + "documentation":"

The ID of the primary node to which this read replica node is synchronized. If this field is empty, this node is not associated with a primary cluster.

" }, "CustomerAvailabilityZone":{ "shape":"String", "documentation":"

The Availability Zone where this node was created and now resides.

" } }, - "documentation":"

Represents an individual cache node within a cache cluster. Each cache node runs its own instance of the cluster's protocol-compliant caching software - either Memcached or Redis.

Valid node types are as follows:

Notes:

For a complete listing of node types and specifications, see Amazon ElastiCache Product Features and Details and either Cache Node Type-Specific Parameters for Memcached or Cache Node Type-Specific Parameters for Redis.

" + "documentation":"

Represents an individual cache node within a cluster. Each cache node runs its own instance of the cluster's protocol-compliant caching software - either Memcached or Redis.

The following node types are supported by ElastiCache. Generally speaking, the current generation types provide more memory and computational power at lower cost when compared to their equivalent previous generation counterparts.

Notes:

For a complete listing of node types and specifications, see Amazon ElastiCache Product Features and Details and either Cache Node Type-Specific Parameters for Memcached or Cache Node Type-Specific Parameters for Redis.

" }, "CacheNodeIdsList":{ "type":"list", @@ -1170,7 +1212,7 @@ "documentation":"

Indicates whether a change to the parameter is applied immediately or requires a reboot for the change to be applied. You can force a reboot or wait until the next maintenance window's reboot. For more information, see Rebooting a Cluster.

" } }, - "documentation":"

A parameter that has a different value for each cache node type it is applied to. For example, in a Redis cache cluster, a cache.m1.large cache node type would have a larger maxmemory value than a cache.m1.small type.

" + "documentation":"

A parameter that has a different value for each cache node type it is applied to. For example, in a Redis cluster, a cache.m1.large cache node type would have a larger maxmemory value than a cache.m1.small type.

" }, "CacheNodeTypeSpecificParametersList":{ "type":"list", @@ -1366,10 +1408,10 @@ }, "Status":{ "shape":"String", - "documentation":"

The membership status in the cache security group. The status changes when a cache security group is modified, or when the cache security groups assigned to a cache cluster are modified.

" + "documentation":"

The membership status in the cache security group. The status changes when a cache security group is modified, or when the cache security groups assigned to a cluster are modified.

" } }, - "documentation":"

Represents a cache cluster's status within a particular cache security group.

" + "documentation":"

Represents a cluster's status within a particular cache security group.

" }, "CacheSecurityGroupMembershipList":{ "type":"list", @@ -1552,7 +1594,7 @@ "type":"structure", "members":{ }, - "documentation":"

The request cannot be processed because it would exceed the allowed number of cache clusters per customer.

", + "documentation":"

The request cannot be processed because it would exceed the allowed number of clusters per customer.

", "error":{ "code":"ClusterQuotaForCustomerExceeded", "httpStatusCode":400, @@ -1598,55 +1640,55 @@ }, "ReplicationGroupId":{ "shape":"String", - "documentation":"

Due to current limitations on Redis (cluster mode disabled), this operation or parameter is not supported on Redis (cluster mode enabled) replication groups.

The ID of the replication group to which this cache cluster should belong. If this parameter is specified, the cache cluster is added to the specified replication group as a read replica; otherwise, the cache cluster is a standalone primary that is not part of any replication group.

If the specified replication group is Multi-AZ enabled and the Availability Zone is not specified, the cache cluster is created in Availability Zones that provide the best spread of read replicas across Availability Zones.

This parameter is only valid if the Engine parameter is redis.

" + "documentation":"

Due to current limitations on Redis (cluster mode disabled), this operation or parameter is not supported on Redis (cluster mode enabled) replication groups.

The ID of the replication group to which this cluster should belong. If this parameter is specified, the cluster is added to the specified replication group as a read replica; otherwise, the cluster is a standalone primary that is not part of any replication group.

If the specified replication group is Multi-AZ enabled and the Availability Zone is not specified, the cluster is created in Availability Zones that provide the best spread of read replicas across Availability Zones.

This parameter is only valid if the Engine parameter is redis.

" }, "AZMode":{ "shape":"AZMode", - "documentation":"

Specifies whether the nodes in this Memcached cluster are created in a single Availability Zone or created across multiple Availability Zones in the cluster's region.

This parameter is only supported for Memcached cache clusters.

If the AZMode and PreferredAvailabilityZones are not specified, ElastiCache assumes single-az mode.

" + "documentation":"

Specifies whether the nodes in this Memcached cluster are created in a single Availability Zone or created across multiple Availability Zones in the cluster's region.

This parameter is only supported for Memcached clusters.

If the AZMode and PreferredAvailabilityZones are not specified, ElastiCache assumes single-az mode.

" }, "PreferredAvailabilityZone":{ "shape":"String", - "documentation":"

The EC2 Availability Zone in which the cache cluster is created.

All nodes belonging to this Memcached cache cluster are placed in the preferred Availability Zone. If you want to create your nodes across multiple Availability Zones, use PreferredAvailabilityZones.

Default: System chosen Availability Zone.

" + "documentation":"

The EC2 Availability Zone in which the cluster is created.

All nodes belonging to this Memcached cluster are placed in the preferred Availability Zone. If you want to create your nodes across multiple Availability Zones, use PreferredAvailabilityZones.

Default: System chosen Availability Zone.

" }, "PreferredAvailabilityZones":{ "shape":"PreferredAvailabilityZoneList", - "documentation":"

A list of the Availability Zones in which cache nodes are created. The order of the zones in the list is not important.

This option is only supported on Memcached.

If you are creating your cache cluster in an Amazon VPC (recommended) you can only locate nodes in Availability Zones that are associated with the subnets in the selected subnet group.

The number of Availability Zones listed must equal the value of NumCacheNodes.

If you want all the nodes in the same Availability Zone, use PreferredAvailabilityZone instead, or repeat the Availability Zone multiple times in the list.

Default: System chosen Availability Zones.

" + "documentation":"

A list of the Availability Zones in which cache nodes are created. The order of the zones in the list is not important.

This option is only supported on Memcached.

If you are creating your cluster in an Amazon VPC (recommended) you can only locate nodes in Availability Zones that are associated with the subnets in the selected subnet group.

The number of Availability Zones listed must equal the value of NumCacheNodes.

If you want all the nodes in the same Availability Zone, use PreferredAvailabilityZone instead, or repeat the Availability Zone multiple times in the list.

Default: System chosen Availability Zones.

" }, "NumCacheNodes":{ "shape":"IntegerOptional", - "documentation":"

The initial number of cache nodes that the cache cluster has.

For clusters running Redis, this value must be 1. For clusters running Memcached, this value must be between 1 and 20.

If you need more than 20 nodes for your Memcached cluster, please fill out the ElastiCache Limit Increase Request form at http://aws.amazon.com/contact-us/elasticache-node-limit-request/.

" + "documentation":"

The initial number of cache nodes that the cluster has.

For clusters running Redis, this value must be 1. For clusters running Memcached, this value must be between 1 and 20.

If you need more than 20 nodes for your Memcached cluster, please fill out the ElastiCache Limit Increase Request form at http://aws.amazon.com/contact-us/elasticache-node-limit-request/.

" }, "CacheNodeType":{ "shape":"String", - "documentation":"

The compute and memory capacity of the nodes in the node group (shard).

Valid node types are as follows:

Notes:

For a complete listing of node types and specifications, see Amazon ElastiCache Product Features and Details and either Cache Node Type-Specific Parameters for Memcached or Cache Node Type-Specific Parameters for Redis.

" + "documentation":"

The compute and memory capacity of the nodes in the node group (shard).

The following node types are supported by ElastiCache. Generally speaking, the current generation types provide more memory and computational power at lower cost when compared to their equivalent previous generation counterparts.

Notes:

For a complete listing of node types and specifications, see Amazon ElastiCache Product Features and Details and either Cache Node Type-Specific Parameters for Memcached or Cache Node Type-Specific Parameters for Redis.

" }, "Engine":{ "shape":"String", - "documentation":"

The name of the cache engine to be used for this cache cluster.

Valid values for this parameter are: memcached | redis

" + "documentation":"

The name of the cache engine to be used for this cluster.

Valid values for this parameter are: memcached | redis

" }, "EngineVersion":{ "shape":"String", - "documentation":"

The version number of the cache engine to be used for this cache cluster. To view the supported cache engine versions, use the DescribeCacheEngineVersions operation.

Important: You can upgrade to a newer engine version (see Selecting a Cache Engine and Version), but you cannot downgrade to an earlier engine version. If you want to use an earlier engine version, you must delete the existing cache cluster or replication group and create it anew with the earlier engine version.

" + "documentation":"

The version number of the cache engine to be used for this cluster. To view the supported cache engine versions, use the DescribeCacheEngineVersions operation.

Important: You can upgrade to a newer engine version (see Selecting a Cache Engine and Version), but you cannot downgrade to an earlier engine version. If you want to use an earlier engine version, you must delete the existing cluster or replication group and create it anew with the earlier engine version.

" }, "CacheParameterGroupName":{ "shape":"String", - "documentation":"

The name of the parameter group to associate with this cache cluster. If this argument is omitted, the default parameter group for the specified engine is used. You cannot use any parameter group which has cluster-enabled='yes' when creating a cluster.

" + "documentation":"

The name of the parameter group to associate with this cluster. If this argument is omitted, the default parameter group for the specified engine is used. You cannot use any parameter group which has cluster-enabled='yes' when creating a cluster.

" }, "CacheSubnetGroupName":{ "shape":"String", - "documentation":"

The name of the subnet group to be used for the cache cluster.

Use this parameter only when you are creating a cache cluster in an Amazon Virtual Private Cloud (Amazon VPC).

If you're going to launch your cluster in an Amazon VPC, you need to create a subnet group before you start creating a cluster. For more information, see Subnets and Subnet Groups.

" + "documentation":"

The name of the subnet group to be used for the cluster.

Use this parameter only when you are creating a cluster in an Amazon Virtual Private Cloud (Amazon VPC).

If you're going to launch your cluster in an Amazon VPC, you need to create a subnet group before you start creating a cluster. For more information, see Subnets and Subnet Groups.

" }, "CacheSecurityGroupNames":{ "shape":"CacheSecurityGroupNameList", - "documentation":"

A list of security group names to associate with this cache cluster.

Use this parameter only when you are creating a cache cluster outside of an Amazon Virtual Private Cloud (Amazon VPC).

" + "documentation":"

A list of security group names to associate with this cluster.

Use this parameter only when you are creating a cluster outside of an Amazon Virtual Private Cloud (Amazon VPC).

" }, "SecurityGroupIds":{ "shape":"SecurityGroupIdsList", - "documentation":"

One or more VPC security groups associated with the cache cluster.

Use this parameter only when you are creating a cache cluster in an Amazon Virtual Private Cloud (Amazon VPC).

" + "documentation":"

One or more VPC security groups associated with the cluster.

Use this parameter only when you are creating a cluster in an Amazon Virtual Private Cloud (Amazon VPC).

" }, "Tags":{ "shape":"TagList", - "documentation":"

A list of cost allocation tags to be added to this resource. A tag is a key-value pair. A tag key must be accompanied by a tag value.

" + "documentation":"

A list of cost allocation tags to be added to this resource.

" }, "SnapshotArns":{ "shape":"SnapshotArnsList", @@ -1658,7 +1700,7 @@ }, "PreferredMaintenanceWindow":{ "shape":"String", - "documentation":"

Specifies the weekly time range during which maintenance on the cache cluster is performed. It is specified as a range in the format ddd:hh24:mi-ddd:hh24:mi (24H Clock UTC). The minimum maintenance window is a 60 minute period. Valid values for ddd are:

Specifies the weekly time range during which maintenance on the cluster is performed. It is specified as a range in the format ddd:hh24:mi-ddd:hh24:mi (24H Clock UTC). The minimum maintenance window is a 60 minute period.

Valid values for ddd are:

Example: sun:23:00-mon:01:30

" + "documentation":"

Specifies the weekly time range during which maintenance on the cluster is performed. It is specified as a range in the format ddd:hh24:mi-ddd:hh24:mi (24H Clock UTC). The minimum maintenance window is a 60 minute period.

Valid values for ddd are:

Example: sun:23:00-mon:01:30

" }, "Port":{ "shape":"IntegerOptional", @@ -1666,7 +1708,7 @@ }, "NotificationTopicArn":{ "shape":"String", - "documentation":"

The Amazon Resource Name (ARN) of the Amazon Simple Notification Service (SNS) topic to which notifications are sent.

The Amazon SNS topic owner must be the same as the cache cluster owner.

" + "documentation":"

The Amazon Resource Name (ARN) of the Amazon Simple Notification Service (SNS) topic to which notifications are sent.

The Amazon SNS topic owner must be the same as the cluster owner.

" }, "AutoMinorVersionUpgrade":{ "shape":"BooleanOptional", @@ -1674,15 +1716,15 @@ }, "SnapshotRetentionLimit":{ "shape":"IntegerOptional", - "documentation":"

The number of days for which ElastiCache retains automatic snapshots before deleting them. For example, if you set SnapshotRetentionLimit to 5, a snapshot taken today is retained for 5 days before being deleted.

This parameter is only valid if the Engine parameter is redis.

Default: 0 (i.e., automatic backups are disabled for this cache cluster).

" + "documentation":"

The number of days for which ElastiCache retains automatic snapshots before deleting them. For example, if you set SnapshotRetentionLimit to 5, a snapshot taken today is retained for 5 days before being deleted.

This parameter is only valid if the Engine parameter is redis.

Default: 0 (i.e., automatic backups are disabled for this cluster).

" }, "SnapshotWindow":{ "shape":"String", - "documentation":"

The daily time range (in UTC) during which ElastiCache begins taking a daily snapshot of your node group (shard).

Example: 05:00-09:00

If you do not specify this parameter, ElastiCache automatically chooses an appropriate time range.

Note: This parameter is only valid if the Engine parameter is redis.

" + "documentation":"

The daily time range (in UTC) during which ElastiCache begins taking a daily snapshot of your node group (shard).

Example: 05:00-09:00

If you do not specify this parameter, ElastiCache automatically chooses an appropriate time range.

This parameter is only valid if the Engine parameter is redis.

" }, "AuthToken":{ "shape":"String", - "documentation":"

Reserved parameter. The password used to access a password protected server.

Password constraints:

For more information, see AUTH password at Redis.

" + "documentation":"

Reserved parameter. The password used to access a password protected server.

This parameter is valid only if:

Password constraints:

For more information, see AUTH password at http://redis.io/commands/AUTH.

" } }, "documentation":"

Represents the input of a CreateCacheCluster operation.
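
Illustrative only (not part of the service model): a sketch of a CreateCacheCluster call using the parameters documented above, assuming the generated AWS SDK for Java 2.x client; the names, node type, and subnet group are placeholders.

import software.amazon.awssdk.services.elasticache.ElastiCacheClient;
import software.amazon.awssdk.services.elasticache.model.CreateCacheClusterRequest;

public class CreateClusterSketch {
    public static void main(String[] args) {
        try (ElastiCacheClient elastiCache = ElastiCacheClient.create()) {
            elastiCache.createCacheCluster(CreateCacheClusterRequest.builder()
                    .cacheClusterId("myCluster")              // placeholder
                    .engine("redis")
                    .cacheNodeType("cache.m4.large")          // placeholder node type
                    .numCacheNodes(1)                         // must be 1 for Redis
                    .cacheSubnetGroupName("my-subnet-group")  // placeholder (Amazon VPC)
                    .snapshotRetentionLimit(5)
                    .preferredMaintenanceWindow("sun:23:00-mon:01:30")
                    .build());
        }
    }
}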

" @@ -1792,11 +1834,11 @@ }, "PrimaryClusterId":{ "shape":"String", - "documentation":"

The identifier of the cache cluster that serves as the primary for this replication group. This cache cluster must already exist and have a status of available.

This parameter is not required if NumCacheClusters, NumNodeGroups, or ReplicasPerNodeGroup is specified.

" + "documentation":"

The identifier of the cluster that serves as the primary for this replication group. This cluster must already exist and have a status of available.

This parameter is not required if NumCacheClusters, NumNodeGroups, or ReplicasPerNodeGroup is specified.

" }, "AutomaticFailoverEnabled":{ "shape":"BooleanOptional", - "documentation":"

Specifies whether a read-only replica is automatically promoted to read/write primary if the existing primary fails.

If true, Multi-AZ is enabled for this replication group. If false, Multi-AZ is disabled for this replication group.

AutomaticFailoverEnabled must be enabled for Redis (cluster mode enabled) replication groups.

Default: false

ElastiCache Multi-AZ replication groups is not supported on:

" + "documentation":"

Specifies whether a read-only replica is automatically promoted to read/write primary if the existing primary fails.

If true, Multi-AZ is enabled for this replication group. If false, Multi-AZ is disabled for this replication group.

AutomaticFailoverEnabled must be enabled for Redis (cluster mode enabled) replication groups.

Default: false

Amazon ElastiCache for Redis does not support Multi-AZ with automatic failover on:

" }, "NumCacheClusters":{ "shape":"IntegerOptional", @@ -1804,7 +1846,7 @@ }, "PreferredCacheClusterAZs":{ "shape":"AvailabilityZonesList", - "documentation":"

A list of EC2 Availability Zones in which the replication group's cache clusters are created. The order of the Availability Zones in the list is the order in which clusters are allocated. The primary cluster is created in the first AZ in the list.

This parameter is not used if there is more than one node group (shard). You should use NodeGroupConfiguration instead.

If you are creating your replication group in an Amazon VPC (recommended), you can only locate cache clusters in Availability Zones associated with the subnets in the selected subnet group.

The number of Availability Zones listed must equal the value of NumCacheClusters.

Default: system chosen Availability Zones.

" + "documentation":"

A list of EC2 Availability Zones in which the replication group's clusters are created. The order of the Availability Zones in the list is the order in which clusters are allocated. The primary cluster is created in the first AZ in the list.

This parameter is not used if there is more than one node group (shard). You should use NodeGroupConfiguration instead.

If you are creating your replication group in an Amazon VPC (recommended), you can only locate clusters in Availability Zones associated with the subnets in the selected subnet group.

The number of Availability Zones listed must equal the value of NumCacheClusters.

Default: system chosen Availability Zones.

" }, "NumNodeGroups":{ "shape":"IntegerOptional", @@ -1820,15 +1862,15 @@ }, "CacheNodeType":{ "shape":"String", - "documentation":"

The compute and memory capacity of the nodes in the node group (shard).

Valid node types are as follows:

Notes:

For a complete listing of node types and specifications, see Amazon ElastiCache Product Features and Details and either Cache Node Type-Specific Parameters for Memcached or Cache Node Type-Specific Parameters for Redis.

" + "documentation":"

The compute and memory capacity of the nodes in the node group (shard).

The following node types are supported by ElastiCache. Generally speaking, the current generation types provide more memory and computational power at lower cost when compared to their equivalent previous generation counterparts.

Notes:

For a complete listing of node types and specifications, see Amazon ElastiCache Product Features and Details and either Cache Node Type-Specific Parameters for Memcached or Cache Node Type-Specific Parameters for Redis.

" }, "Engine":{ "shape":"String", - "documentation":"

The name of the cache engine to be used for the cache clusters in this replication group.

" + "documentation":"

The name of the cache engine to be used for the clusters in this replication group.

" }, "EngineVersion":{ "shape":"String", - "documentation":"

The version number of the cache engine to be used for the cache clusters in this replication group. To view the supported cache engine versions, use the DescribeCacheEngineVersions operation.

Important: You can upgrade to a newer engine version (see Selecting a Cache Engine and Version) in the ElastiCache User Guide, but you cannot downgrade to an earlier engine version. If you want to use an earlier engine version, you must delete the existing cache cluster or replication group and create it anew with the earlier engine version.

" + "documentation":"

The version number of the cache engine to be used for the clusters in this replication group. To view the supported cache engine versions, use the DescribeCacheEngineVersions operation.

Important: You can upgrade to a newer engine version (see Selecting a Cache Engine and Version) in the ElastiCache User Guide, but you cannot downgrade to an earlier engine version. If you want to use an earlier engine version, you must delete the existing cluster or replication group and create it anew with the earlier engine version.

" }, "CacheParameterGroupName":{ "shape":"String", @@ -1848,19 +1890,19 @@ }, "Tags":{ "shape":"TagList", - "documentation":"

A list of cost allocation tags to be added to this resource. A tag is a key-value pair. A tag key must be accompanied by a tag value.

" + "documentation":"

A list of cost allocation tags to be added to this resource. A tag is a key-value pair. A tag key does not have to be accompanied by a tag value.

" }, "SnapshotArns":{ "shape":"SnapshotArnsList", - "documentation":"

A list of Amazon Resource Names (ARN) that uniquely identify the Redis RDB snapshot files stored in Amazon S3. The snapshot files are used to populate the new replication group. The Amazon S3 object name in the ARN cannot contain any commas. The new replication group will have the number of node groups (console: shards) specified by the parameter NumNodeGroups or the number of node groups configured by NodeGroupConfiguration regardless of the number of ARNs specified here.

This parameter is only valid if the Engine parameter is redis.

Example of an Amazon S3 ARN: arn:aws:s3:::my_bucket/snapshot1.rdb

" + "documentation":"

A list of Amazon Resource Names (ARN) that uniquely identify the Redis RDB snapshot files stored in Amazon S3. The snapshot files are used to populate the new replication group. The Amazon S3 object name in the ARN cannot contain any commas. The new replication group will have the number of node groups (console: shards) specified by the parameter NumNodeGroups or the number of node groups configured by NodeGroupConfiguration regardless of the number of ARNs specified here.

Example of an Amazon S3 ARN: arn:aws:s3:::my_bucket/snapshot1.rdb

" }, "SnapshotName":{ "shape":"String", - "documentation":"

The name of a snapshot from which to restore data into the new replication group. The snapshot status changes to restoring while the new replication group is being created.

This parameter is only valid if the Engine parameter is redis.

" + "documentation":"

The name of a snapshot from which to restore data into the new replication group. The snapshot status changes to restoring while the new replication group is being created.

" }, "PreferredMaintenanceWindow":{ "shape":"String", - "documentation":"

Specifies the weekly time range during which maintenance on the cache cluster is performed. It is specified as a range in the format ddd:hh24:mi-ddd:hh24:mi (24H Clock UTC). The minimum maintenance window is a 60 minute period. Valid values for ddd are:

Specifies the weekly time range during which maintenance on the cluster is performed. It is specified as a range in the format ddd:hh24:mi-ddd:hh24:mi (24H Clock UTC). The minimum maintenance window is a 60 minute period.

Valid values for ddd are:

Example: sun:23:00-mon:01:30

" + "documentation":"

Specifies the weekly time range during which maintenance on the cluster is performed. It is specified as a range in the format ddd:hh24:mi-ddd:hh24:mi (24H Clock UTC). The minimum maintenance window is a 60 minute period.

Valid values for ddd are:

Example: sun:23:00-mon:01:30

" }, "Port":{ "shape":"IntegerOptional", @@ -1868,7 +1910,7 @@ }, "NotificationTopicArn":{ "shape":"String", - "documentation":"

The Amazon Resource Name (ARN) of the Amazon Simple Notification Service (SNS) topic to which notifications are sent.

The Amazon SNS topic owner must be the same as the cache cluster owner.

" + "documentation":"

The Amazon Resource Name (ARN) of the Amazon Simple Notification Service (SNS) topic to which notifications are sent.

The Amazon SNS topic owner must be the same as the cluster owner.

" }, "AutoMinorVersionUpgrade":{ "shape":"BooleanOptional", @@ -1876,15 +1918,23 @@ }, "SnapshotRetentionLimit":{ "shape":"IntegerOptional", - "documentation":"

The number of days for which ElastiCache retains automatic snapshots before deleting them. For example, if you set SnapshotRetentionLimit to 5, a snapshot that was taken today is retained for 5 days before being deleted.

This parameter is only valid if the Engine parameter is redis.

Default: 0 (i.e., automatic backups are disabled for this cache cluster).

" + "documentation":"

The number of days for which ElastiCache retains automatic snapshots before deleting them. For example, if you set SnapshotRetentionLimit to 5, a snapshot that was taken today is retained for 5 days before being deleted.

Default: 0 (i.e., automatic backups are disabled for this cluster).

" }, "SnapshotWindow":{ "shape":"String", - "documentation":"

The daily time range (in UTC) during which ElastiCache begins taking a daily snapshot of your node group (shard).

Example: 05:00-09:00

If you do not specify this parameter, ElastiCache automatically chooses an appropriate time range.

This parameter is only valid if the Engine parameter is redis.

" + "documentation":"

The daily time range (in UTC) during which ElastiCache begins taking a daily snapshot of your node group (shard).

Example: 05:00-09:00

If you do not specify this parameter, ElastiCache automatically chooses an appropriate time range.

" }, "AuthToken":{ "shape":"String", - "documentation":"

Reserved parameter. The password used to access a password protected server.

Password constraints:

For more information, see AUTH password at Redis.

" + "documentation":"

Reserved parameter. The password used to access a password protected server.

This parameter is valid only if:

Password constraints:

For more information, see AUTH password at http://redis.io/commands/AUTH.

" + }, + "TransitEncryptionEnabled":{ + "shape":"BooleanOptional", + "documentation":"

A flag that enables in-transit encryption when set to true.

You cannot modify the value of TransitEncryptionEnabled after the cluster is created. To enable in-transit encryption on a cluster you must set TransitEncryptionEnabled to true when you create a cluster.

This parameter is valid only if the Engine parameter is redis, the EngineVersion parameter is 3.2.4 or later, and the cluster is being created in an Amazon VPC.

If you enable in-transit encryption, you must also specify a value for CacheSubnetGroup.

Default: false

" + }, + "AtRestEncryptionEnabled":{ + "shape":"BooleanOptional", + "documentation":"

A flag that enables encryption at rest when set to true.

You cannot modify the value of AtRestEncryptionEnabled after the replication group is created. To enable encryption at rest on a replication group you must set AtRestEncryptionEnabled to true when you create the replication group.

This parameter is valid only if the Engine parameter is redis and the cluster is being created in an Amazon VPC.

Default: false

" } }, "documentation":"

Represents the input of a CreateReplicationGroup operation.
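
Illustrative only (not part of the service model): a sketch of a CreateReplicationGroup call with the in-transit and at-rest encryption flags documented above, assuming the generated AWS SDK for Java 2.x client; all identifiers, the node type, the subnet group, and the auth token are placeholders.

import software.amazon.awssdk.services.elasticache.ElastiCacheClient;
import software.amazon.awssdk.services.elasticache.model.CreateReplicationGroupRequest;

public class CreateGroupSketch {
    public static void main(String[] args) {
        try (ElastiCacheClient elastiCache = ElastiCacheClient.create()) {
            elastiCache.createReplicationGroup(CreateReplicationGroupRequest.builder()
                    .replicationGroupId("my-redis-group")            // placeholder
                    .replicationGroupDescription("primary plus one replica")
                    .engine("redis")
                    .engineVersion("3.2.6")                          // 3.2.4 or later per the docs above
                    .cacheNodeType("cache.m4.large")                 // placeholder node type
                    .numCacheClusters(2)
                    .automaticFailoverEnabled(true)
                    .cacheSubnetGroupName("my-subnet-group")         // required when in-transit encryption is enabled
                    .transitEncryptionEnabled(true)
                    .atRestEncryptionEnabled(true)
                    .authToken("placeholder-auth-token-16chars")     // placeholder password
                    .build());
        }
    }
}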

" @@ -1905,7 +1955,7 @@ }, "CacheClusterId":{ "shape":"String", - "documentation":"

The identifier of an existing cache cluster. The snapshot is created from this cache cluster.

" + "documentation":"

The identifier of an existing cluster. The snapshot is created from this cluster.

" }, "SnapshotName":{ "shape":"String", @@ -1926,11 +1976,11 @@ "members":{ "CacheClusterId":{ "shape":"String", - "documentation":"

The cache cluster identifier for the cluster to be deleted. This parameter is not case sensitive.

" + "documentation":"

The cluster identifier for the cluster to be deleted. This parameter is not case sensitive.

" }, "FinalSnapshotIdentifier":{ "shape":"String", - "documentation":"

The user-supplied name of a final cache cluster snapshot. This is the unique name that identifies the snapshot. ElastiCache creates the snapshot, and then deletes the cache cluster immediately afterward.

" + "documentation":"

The user-supplied name of a final cluster snapshot. This is the unique name that identifies the snapshot. ElastiCache creates the snapshot, and then deletes the cluster immediately afterward.

" } }, "documentation":"

Represents the input of a DeleteCacheCluster operation.
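
Illustrative only (not part of the service model): a sketch of deleting a cluster after taking a final snapshot, as described above, assuming the generated AWS SDK for Java 2.x client; the names are placeholders.

import software.amazon.awssdk.services.elasticache.ElastiCacheClient;
import software.amazon.awssdk.services.elasticache.model.DeleteCacheClusterRequest;

public class DeleteSketch {
    public static void main(String[] args) {
        try (ElastiCacheClient elastiCache = ElastiCacheClient.create()) {
            // ElastiCache creates the final snapshot, then deletes the cluster.
            elastiCache.deleteCacheCluster(DeleteCacheClusterRequest.builder()
                    .cacheClusterId("myCluster")                    // placeholder
                    .finalSnapshotIdentifier("myCluster-final")     // placeholder snapshot name
                    .build());
        }
    }
}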

" @@ -1947,7 +1997,7 @@ "members":{ "CacheParameterGroupName":{ "shape":"String", - "documentation":"

The name of the cache parameter group to delete.

The specified cache security group must not be associated with any cache clusters.

" + "documentation":"

The name of the cache parameter group to delete.

The specified cache security group must not be associated with any clusters.

" } }, "documentation":"

Represents the input of a DeleteCacheParameterGroup operation.

" @@ -2021,7 +2071,7 @@ "members":{ "CacheClusterId":{ "shape":"String", - "documentation":"

The user-supplied cluster identifier. If this parameter is specified, only information about that specific cache cluster is returned. This parameter isn't case sensitive.

" + "documentation":"

The user-supplied cluster identifier. If this parameter is specified, only information about that specific cluster is returned. This parameter isn't case sensitive.

" }, "MaxRecords":{ "shape":"IntegerOptional", @@ -2239,7 +2289,7 @@ }, "CacheNodeType":{ "shape":"String", - "documentation":"

The cache node type filter value. Use this parameter to show only those reservations matching the specified cache node type.

Valid node types are as follows:

Notes:

For a complete listing of node types and specifications, see Amazon ElastiCache Product Features and Details and either Cache Node Type-Specific Parameters for Memcached or Cache Node Type-Specific Parameters for Redis.

" + "documentation":"

The cache node type filter value. Use this parameter to show only those reservations matching the specified cache node type.

The following node types are supported by ElastiCache. Generally speaking, the current generation types provide more memory and computational power at lower cost when compared to their equivalent previous generation counterparts.

Notes:

For a complete listing of node types and specifications, see Amazon ElastiCache Product Features and Details and either Cache Node Type-Specific Parameters for Memcached or Cache Node Type-Specific Parameters for Redis.

" }, "Duration":{ "shape":"String", @@ -2273,7 +2323,7 @@ }, "CacheNodeType":{ "shape":"String", - "documentation":"

The cache node type filter value. Use this parameter to show only the available offerings matching the specified cache node type.

Valid node types are as follows:

Notes:

For a complete listing of node types and specifications, see Amazon ElastiCache Product Features and Details and either Cache Node Type-Specific Parameters for Memcached or Cache Node Type-Specific Parameters for Redis.

" + "documentation":"

The cache node type filter value. Use this parameter to show only the available offerings matching the specified cache node type.

The following node types are supported by ElastiCache. Generally speaking, the current generation types provide more memory and computational power at lower cost when compared to their equivalent previous generation counterparts.

Notes:

For a complete listing of node types and specifications, see Amazon ElastiCache Product Features and Details and either Cache Node Type-Specific Parameters for Memcached or Cache Node Type-Specific Parameters for Redis.
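
Illustrative only (not part of the service model): a sketch of filtering reserved node offerings by cache node type, assuming the generated AWS SDK for Java 2.x client; the node type is a placeholder.

import software.amazon.awssdk.services.elasticache.ElastiCacheClient;
import software.amazon.awssdk.services.elasticache.model.DescribeReservedCacheNodesOfferingsRequest;
import software.amazon.awssdk.services.elasticache.model.ReservedCacheNodesOffering;

public class OfferingsSketch {
    public static void main(String[] args) {
        try (ElastiCacheClient elastiCache = ElastiCacheClient.create()) {
            for (ReservedCacheNodesOffering o : elastiCache.describeReservedCacheNodesOfferings(
                    DescribeReservedCacheNodesOfferingsRequest.builder()
                            .cacheNodeType("cache.m4.large")   // placeholder node type filter
                            .build())
                    .reservedCacheNodesOfferings()) {
                System.out.println(o.reservedCacheNodesOfferingId() + " " + o.offeringType());
            }
        }
    }
}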

" }, "Duration":{ "shape":"String", @@ -2321,7 +2371,7 @@ }, "CacheClusterId":{ "shape":"String", - "documentation":"

A user-supplied cluster identifier. If this parameter is specified, only snapshots associated with that specific cache cluster are described.

" + "documentation":"

A user-supplied cluster identifier. If this parameter is specified, only snapshots associated with that specific cluster are described.

" }, "SnapshotName":{ "shape":"String", @@ -2414,11 +2464,11 @@ "members":{ "SourceIdentifier":{ "shape":"String", - "documentation":"

The identifier for the source of the event. For example, if the event occurred at the cache cluster level, the identifier would be the name of the cache cluster.

" + "documentation":"

The identifier for the source of the event. For example, if the event occurred at the cluster level, the identifier would be the name of the cluster.

" }, "SourceType":{ "shape":"SourceType", - "documentation":"

Specifies the origin of this event - a cache cluster, a parameter group, a security group, etc.

" + "documentation":"

Specifies the origin of this event - a cluster, a parameter group, a security group, etc.

" }, "Message":{ "shape":"String", @@ -2429,7 +2479,7 @@ "documentation":"

The date and time when the event occurred.

" } }, - "documentation":"

Represents a single occurrence of something interesting within the system. Some examples of events are creating a cache cluster, adding or removing a cache node, or rebooting a node.

" + "documentation":"

Represents a single occurrence of something interesting within the system. Some examples of events are creating a cluster, adding or removing a cache node, or rebooting a node.
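
Illustrative only (not part of the service model): a sketch of listing recent cluster-level events via DescribeEvents, assuming the generated AWS SDK for Java 2.x client and its SourceType enum constant for cluster events.

import software.amazon.awssdk.services.elasticache.ElastiCacheClient;
import software.amazon.awssdk.services.elasticache.model.DescribeEventsRequest;
import software.amazon.awssdk.services.elasticache.model.Event;
import software.amazon.awssdk.services.elasticache.model.SourceType;

public class EventsSketch {
    public static void main(String[] args) {
        try (ElastiCacheClient elastiCache = ElastiCacheClient.create()) {
            // Last 24 hours (1440 minutes) of cluster-level events.
            for (Event e : elastiCache.describeEvents(DescribeEventsRequest.builder()
                    .sourceType(SourceType.CACHE_CLUSTER)
                    .duration(1440)
                    .build()).events()) {
                System.out.println(e.date() + " " + e.sourceIdentifier() + ": " + e.message());
            }
        }
    }
}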

" }, "EventList":{ "type":"list", @@ -2482,7 +2532,7 @@ "type":"structure", "members":{ }, - "documentation":"

The requested cache cluster is not in the available state.

", + "documentation":"

The requested cluster is not in the available state.

", "error":{ "code":"InvalidCacheClusterState", "httpStatusCode":400, @@ -2603,7 +2653,7 @@ "members":{ "CacheClusterId":{ "shape":"String", - "documentation":"

The name of the cache cluster you want to scale up to a larger node instanced type. ElastiCache uses the cluster id to identify the current node type of this cluster and from that to create a list of node types you can scale up to.

You must provide a value for either the CacheClusterId or the ReplicationGroupId.

" + "documentation":"

The name of the cluster you want to scale up to a larger node instance type. ElastiCache uses the cluster id to identify the current node type of this cluster and from that to create a list of node types you can scale up to.

You must provide a value for either the CacheClusterId or the ReplicationGroupId.

" }, "ReplicationGroupId":{ "shape":"String", @@ -2629,19 +2679,19 @@ "members":{ "CacheClusterId":{ "shape":"String", - "documentation":"

The cache cluster identifier. This value is stored as a lowercase string.

" + "documentation":"

The cluster identifier. This value is stored as a lowercase string.

" }, "NumCacheNodes":{ "shape":"IntegerOptional", - "documentation":"

The number of cache nodes that the cache cluster should have. If the value for NumCacheNodes is greater than the sum of the number of current cache nodes and the number of cache nodes pending creation (which may be zero), more nodes are added. If the value is less than the number of existing cache nodes, nodes are removed. If the value is equal to the number of current cache nodes, any pending add or remove requests are canceled.

If you are removing cache nodes, you must use the CacheNodeIdsToRemove parameter to provide the IDs of the specific cache nodes to remove.

For clusters running Redis, this value must be 1. For clusters running Memcached, this value must be between 1 and 20.

Adding or removing Memcached cache nodes can be applied immediately or as a pending operation (see ApplyImmediately).

A pending operation to modify the number of cache nodes in a cluster during its maintenance window, whether by adding or removing nodes in accordance with the scale out architecture, is not queued. The customer's latest request to add or remove nodes to the cluster overrides any previous pending operations to modify the number of cache nodes in the cluster. For example, a request to remove 2 nodes would override a previous pending operation to remove 3 nodes. Similarly, a request to add 2 nodes would override a previous pending operation to remove 3 nodes and vice versa. As Memcached cache nodes may now be provisioned in different Availability Zones with flexible cache node placement, a request to add nodes does not automatically override a previous pending operation to add nodes. The customer can modify the previous pending operation to add more nodes or explicitly cancel the pending request and retry the new request. To cancel pending operations to modify the number of cache nodes in a cluster, use the ModifyCacheCluster request and set NumCacheNodes equal to the number of cache nodes currently in the cache cluster.

" + "documentation":"

The number of cache nodes that the cluster should have. If the value for NumCacheNodes is greater than the sum of the number of current cache nodes and the number of cache nodes pending creation (which may be zero), more nodes are added. If the value is less than the number of existing cache nodes, nodes are removed. If the value is equal to the number of current cache nodes, any pending add or remove requests are canceled.

If you are removing cache nodes, you must use the CacheNodeIdsToRemove parameter to provide the IDs of the specific cache nodes to remove.

For clusters running Redis, this value must be 1. For clusters running Memcached, this value must be between 1 and 20.

Adding or removing Memcached cache nodes can be applied immediately or as a pending operation (see ApplyImmediately).

A pending operation to modify the number of cache nodes in a cluster during its maintenance window, whether by adding or removing nodes in accordance with the scale out architecture, is not queued. The customer's latest request to add or remove nodes to the cluster overrides any previous pending operations to modify the number of cache nodes in the cluster. For example, a request to remove 2 nodes would override a previous pending operation to remove 3 nodes. Similarly, a request to add 2 nodes would override a previous pending operation to remove 3 nodes and vice versa. As Memcached cache nodes may now be provisioned in different Availability Zones with flexible cache node placement, a request to add nodes does not automatically override a previous pending operation to add nodes. The customer can modify the previous pending operation to add more nodes or explicitly cancel the pending request and retry the new request. To cancel pending operations to modify the number of cache nodes in a cluster, use the ModifyCacheCluster request and set NumCacheNodes equal to the number of cache nodes currently in the cluster.

" }, "CacheNodeIdsToRemove":{ "shape":"CacheNodeIdsList", - "documentation":"

A list of cache node IDs to be removed. A node ID is a numeric identifier (0001, 0002, etc.). This parameter is only valid when NumCacheNodes is less than the existing number of cache nodes. The number of cache node IDs supplied in this parameter must match the difference between the existing number of cache nodes in the cluster or pending cache nodes, whichever is greater, and the value of NumCacheNodes in the request.

For example: If you have 3 active cache nodes, 7 pending cache nodes, and the number of cache nodes in this ModifyCacheCluser call is 5, you must list 2 (7 - 5) cache node IDs to remove.

" + "documentation":"

A list of cache node IDs to be removed. A node ID is a numeric identifier (0001, 0002, etc.). This parameter is only valid when NumCacheNodes is less than the existing number of cache nodes. The number of cache node IDs supplied in this parameter must match the difference between the existing number of cache nodes in the cluster or pending cache nodes, whichever is greater, and the value of NumCacheNodes in the request.

For example: If you have 3 active cache nodes, 7 pending cache nodes, and the number of cache nodes in this ModifyCacheCluster call is 5, you must list 2 (7 - 5) cache node IDs to remove.
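
Illustrative only (not part of the service model): a sketch of the scale-down case above (target of 5 nodes, listing the 2 node ids to remove), assuming the generated AWS SDK for Java 2.x client; the cluster and node ids are placeholders.

import software.amazon.awssdk.services.elasticache.ElastiCacheClient;
import software.amazon.awssdk.services.elasticache.model.ModifyCacheClusterRequest;

public class ScaleDownSketch {
    public static void main(String[] args) {
        try (ElastiCacheClient elastiCache = ElastiCacheClient.create()) {
            elastiCache.modifyCacheCluster(ModifyCacheClusterRequest.builder()
                    .cacheClusterId("myMemcachedCluster")     // placeholder
                    .numCacheNodes(5)                         // target node count
                    .cacheNodeIdsToRemove("0006", "0007")     // ids of the nodes to drop
                    .applyImmediately(true)
                    .build());
        }
    }
}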

" }, "AZMode":{ "shape":"AZMode", - "documentation":"

Specifies whether the new nodes in this Memcached cache cluster are all created in a single Availability Zone or created across multiple Availability Zones.

Valid values: single-az | cross-az.

This option is only supported for Memcached cache clusters.

You cannot specify single-az if the Memcached cache cluster already has cache nodes in different Availability Zones. If cross-az is specified, existing Memcached nodes remain in their current Availability Zone.

Only newly created nodes are located in different Availability Zones. For instructions on how to move existing Memcached nodes to different Availability Zones, see the Availability Zone Considerations section of Cache Node Considerations for Memcached.

" + "documentation":"

Specifies whether the new nodes in this Memcached cluster are all created in a single Availability Zone or created across multiple Availability Zones.

Valid values: single-az | cross-az.

This option is only supported for Memcached clusters.

You cannot specify single-az if the Memcached cluster already has cache nodes in different Availability Zones. If cross-az is specified, existing Memcached nodes remain in their current Availability Zone.

Only newly created nodes are located in different Availability Zones. For instructions on how to move existing Memcached nodes to different Availability Zones, see the Availability Zone Considerations section of Cache Node Considerations for Memcached.

" }, "NewAvailabilityZones":{ "shape":"PreferredAvailabilityZoneList", @@ -2649,11 +2699,11 @@ }, "CacheSecurityGroupNames":{ "shape":"CacheSecurityGroupNameList", - "documentation":"

A list of cache security group names to authorize on this cache cluster. This change is asynchronously applied as soon as possible.

You can use this parameter only with clusters that are created outside of an Amazon Virtual Private Cloud (Amazon VPC).

Constraints: Must contain no more than 255 alphanumeric characters. Must not be \"Default\".

" + "documentation":"

A list of cache security group names to authorize on this cluster. This change is asynchronously applied as soon as possible.

You can use this parameter only with clusters that are created outside of an Amazon Virtual Private Cloud (Amazon VPC).

Constraints: Must contain no more than 255 alphanumeric characters. Must not be \"Default\".

" }, "SecurityGroupIds":{ "shape":"SecurityGroupIdsList", - "documentation":"

Specifies the VPC Security Groups associated with the cache cluster.

This parameter can be used only with clusters that are created in an Amazon Virtual Private Cloud (Amazon VPC).

" + "documentation":"

Specifies the VPC Security Groups associated with the cluster.

This parameter can be used only with clusters that are created in an Amazon Virtual Private Cloud (Amazon VPC).

" }, "PreferredMaintenanceWindow":{ "shape":"String", @@ -2661,11 +2711,11 @@ }, "NotificationTopicArn":{ "shape":"String", - "documentation":"

The Amazon Resource Name (ARN) of the Amazon SNS topic to which notifications are sent.

The Amazon SNS topic owner must be same as the cache cluster owner.

" + "documentation":"

The Amazon Resource Name (ARN) of the Amazon SNS topic to which notifications are sent.

The Amazon SNS topic owner must be the same as the cluster owner.

" }, "CacheParameterGroupName":{ "shape":"String", - "documentation":"

The name of the cache parameter group to apply to this cache cluster. This change is asynchronously applied as soon as possible for parameters when the ApplyImmediately parameter is specified as true for this request.

" + "documentation":"

The name of the cache parameter group to apply to this cluster. This change is asynchronously applied as soon as possible for parameters when the ApplyImmediately parameter is specified as true for this request.

" }, "NotificationTopicStatus":{ "shape":"String", @@ -2673,11 +2723,11 @@ }, "ApplyImmediately":{ "shape":"Boolean", - "documentation":"

If true, this parameter causes the modifications in this request and any pending modifications to be applied, asynchronously and as soon as possible, regardless of the PreferredMaintenanceWindow setting for the cache cluster.

If false, changes to the cache cluster are applied on the next maintenance reboot, or the next failure reboot, whichever occurs first.

If you perform a ModifyCacheCluster before a pending modification is applied, the pending modification is replaced by the newer modification.

Valid values: true | false

Default: false

" + "documentation":"

If true, this parameter causes the modifications in this request and any pending modifications to be applied, asynchronously and as soon as possible, regardless of the PreferredMaintenanceWindow setting for the cluster.

If false, changes to the cluster are applied on the next maintenance reboot, or the next failure reboot, whichever occurs first.

If you perform a ModifyCacheCluster before a pending modification is applied, the pending modification is replaced by the newer modification.

Valid values: true | false

Default: false

" }, "EngineVersion":{ "shape":"String", - "documentation":"

The upgraded version of the cache engine to be run on the cache nodes.

Important: You can upgrade to a newer engine version (see Selecting a Cache Engine and Version), but you cannot downgrade to an earlier engine version. If you want to use an earlier engine version, you must delete the existing cache cluster and create it anew with the earlier engine version.

" + "documentation":"

The upgraded version of the cache engine to be run on the cache nodes.

Important: You can upgrade to a newer engine version (see Selecting a Cache Engine and Version), but you cannot downgrade to an earlier engine version. If you want to use an earlier engine version, you must delete the existing cluster and create it anew with the earlier engine version.

" }, "AutoMinorVersionUpgrade":{ "shape":"BooleanOptional", @@ -2685,15 +2735,15 @@ }, "SnapshotRetentionLimit":{ "shape":"IntegerOptional", - "documentation":"

The number of days for which ElastiCache retains automatic cache cluster snapshots before deleting them. For example, if you set SnapshotRetentionLimit to 5, a snapshot that was taken today is retained for 5 days before being deleted.

If the value of SnapshotRetentionLimit is set to zero (0), backups are turned off.

" + "documentation":"

The number of days for which ElastiCache retains automatic cluster snapshots before deleting them. For example, if you set SnapshotRetentionLimit to 5, a snapshot that was taken today is retained for 5 days before being deleted.

If the value of SnapshotRetentionLimit is set to zero (0), backups are turned off.

" }, "SnapshotWindow":{ "shape":"String", - "documentation":"

The daily time range (in UTC) during which ElastiCache begins taking a daily snapshot of your cache cluster.

" + "documentation":"

The daily time range (in UTC) during which ElastiCache begins taking a daily snapshot of your cluster.

" }, "CacheNodeType":{ "shape":"String", - "documentation":"

A valid cache node type that you want to scale this cache cluster up to.

" + "documentation":"

A valid cache node type that you want to scale this cluster up to.

" } }, "documentation":"

Represents the input of a ModifyCacheCluster operation.

" @@ -2765,19 +2815,19 @@ }, "SnapshottingClusterId":{ "shape":"String", - "documentation":"

The cache cluster ID that is used as the daily snapshot source for the replication group. This parameter cannot be set for Redis (cluster mode enabled) replication groups.

" + "documentation":"

The cluster ID that is used as the daily snapshot source for the replication group. This parameter cannot be set for Redis (cluster mode enabled) replication groups.

" }, "AutomaticFailoverEnabled":{ "shape":"BooleanOptional", - "documentation":"

Determines whether a read replica is automatically promoted to read/write primary if the existing primary encounters a failure.

Valid values: true | false

ElastiCache Multi-AZ replication groups are not supported on:

" + "documentation":"

Determines whether a read replica is automatically promoted to read/write primary if the existing primary encounters a failure.

Valid values: true | false

Amazon ElastiCache for Redis does not support Multi-AZ with automatic failover on:

" }, "CacheSecurityGroupNames":{ "shape":"CacheSecurityGroupNameList", - "documentation":"

A list of cache security group names to authorize for the clusters in this replication group. This change is asynchronously applied as soon as possible.

This parameter can be used only with replication group containing cache clusters running outside of an Amazon Virtual Private Cloud (Amazon VPC).

Constraints: Must contain no more than 255 alphanumeric characters. Must not be Default.

" + "documentation":"

A list of cache security group names to authorize for the clusters in this replication group. This change is asynchronously applied as soon as possible.

This parameter can be used only with a replication group containing clusters running outside of an Amazon Virtual Private Cloud (Amazon VPC).

Constraints: Must contain no more than 255 alphanumeric characters. Must not be Default.

" }, "SecurityGroupIds":{ "shape":"SecurityGroupIdsList", - "documentation":"

Specifies the VPC Security Groups associated with the cache clusters in the replication group.

This parameter can be used only with replication group containing cache clusters running in an Amazon Virtual Private Cloud (Amazon VPC).

" + "documentation":"

Specifies the VPC Security Groups associated with the clusters in the replication group.

This parameter can be used only with a replication group containing clusters running in an Amazon Virtual Private Cloud (Amazon VPC).

" }, "PreferredMaintenanceWindow":{ "shape":"String", @@ -2801,7 +2851,7 @@ }, "EngineVersion":{ "shape":"String", - "documentation":"

The upgraded version of the cache engine to be run on the cache clusters in the replication group.

Important: You can upgrade to a newer engine version (see Selecting a Cache Engine and Version), but you cannot downgrade to an earlier engine version. If you want to use an earlier engine version, you must delete the existing replication group and create it anew with the earlier engine version.

" + "documentation":"

The upgraded version of the cache engine to be run on the clusters in the replication group.

Important: You can upgrade to a newer engine version (see Selecting a Cache Engine and Version), but you cannot downgrade to an earlier engine version. If you want to use an earlier engine version, you must delete the existing replication group and create it anew with the earlier engine version.

" }, "AutoMinorVersionUpgrade":{ "shape":"BooleanOptional", @@ -2832,6 +2882,43 @@ "ReplicationGroup":{"shape":"ReplicationGroup"} } }, + "ModifyReplicationGroupShardConfigurationMessage":{ + "type":"structure", + "required":[ + "ReplicationGroupId", + "NodeGroupCount", + "ApplyImmediately" + ], + "members":{ + "ReplicationGroupId":{ + "shape":"String", + "documentation":"

The name of the Redis (cluster mode enabled) cluster (replication group) on which the shards are to be configured.

" + }, + "NodeGroupCount":{ + "shape":"Integer", + "documentation":"

The number of node groups (shards) that results from the modification of the shard configuration.

" + }, + "ApplyImmediately":{ + "shape":"Boolean", + "documentation":"

Indicates that the shard reconfiguration process begins immediately. At present, the only permitted value for this parameter is true.

Value: true

" + }, + "ReshardingConfiguration":{ + "shape":"ReshardingConfigurationList", + "documentation":"

Specifies the preferred availability zones for each node group in the cluster. If the value of NodeGroupCount is greater than the current number of node groups (shards), you can use this parameter to specify the preferred availability zones of the cluster's shards. If you omit this parameter, ElastiCache selects availability zones for you.

You can specify this parameter only if the value of NodeGroupCount is greater than the current number of node groups (shards).

" + }, + "NodeGroupsToRemove":{ + "shape":"NodeGroupsToRemoveList", + "documentation":"

If the value of NodeGroupCount is less than the current number of node groups (shards), NodeGroupsToRemove is a required list of node group IDs to remove from the cluster.

" + } + }, + "documentation":"

Represents the input for a ModifyReplicationGroupShardConfiguration operation.
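As a rough illustration of how this new resharding operation might be invoked through the generated Java client, here is a minimal sketch. The class and method names are assumed to follow the SDK's standard codegen conventions for this model, and the replication group ID, shard count, and Availability Zones are placeholders.

    import software.amazon.awssdk.services.elasticache.ElastiCacheClient;
    import software.amazon.awssdk.services.elasticache.model.ModifyReplicationGroupShardConfigurationRequest;
    import software.amazon.awssdk.services.elasticache.model.ReshardingConfiguration;

    public class ReshardExample {
        public static void main(String[] args) {
            ElastiCacheClient elasticache = ElastiCacheClient.create();

            // Scale a Redis (cluster mode enabled) replication group out to 3 shards.
            elasticache.modifyReplicationGroupShardConfiguration(
                ModifyReplicationGroupShardConfigurationRequest.builder()
                    .replicationGroupId("my-redis-cluster")   // placeholder replication group ID
                    .nodeGroupCount(3)
                    .applyImmediately(true)                   // only true is permitted, per the doc above
                    .reshardingConfiguration(ReshardingConfiguration.builder()
                        .preferredAvailabilityZones("us-east-1a", "us-east-1b") // placeholder AZs
                        .build())
                    .build());
        }
    }

When scaling in (a NodeGroupCount lower than the current shard count), ReshardingConfiguration would be omitted and the node groups to drop passed instead, for example .nodeGroupsToRemove("0002", "0003").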

" + }, + "ModifyReplicationGroupShardConfigurationResult":{ + "type":"structure", + "members":{ + "ReplicationGroup":{"shape":"ReplicationGroup"} + } + }, "NodeGroup":{ "type":"structure", "members":{ @@ -2878,7 +2965,7 @@ "documentation":"

A list of Availability Zones to be used for the read replicas. The number of Availability Zones in this list must match the value of ReplicaCount or ReplicasPerNodeGroup if not specified.

" } }, - "documentation":"

node group (shard) configuration options. Each node group (shard) configuration has the following: Slots, PrimaryAvailabilityZone, ReplicaAvailabilityZones, ReplicaCount.

" + "documentation":"

Node group (shard) configuration options. Each node group (shard) configuration has the following: Slots, PrimaryAvailabilityZone, ReplicaAvailabilityZones, ReplicaCount.

" }, "NodeGroupConfigurationList":{ "type":"list", @@ -2899,11 +2986,11 @@ "members":{ "CacheClusterId":{ "shape":"String", - "documentation":"

The ID of the cache cluster to which the node belongs.

" + "documentation":"

The ID of the cluster to which the node belongs.

" }, "CacheNodeId":{ "shape":"String", - "documentation":"

The ID of the node within its cache cluster. A node ID is a numeric identifier (0001, 0002, etc.).

" + "documentation":"

The ID of the node within its cluster. A node ID is a numeric identifier (0001, 0002, etc.).

" }, "ReadEndpoint":{"shape":"Endpoint"}, "PreferredAvailabilityZone":{ @@ -2940,7 +3027,7 @@ "type":"structure", "members":{ }, - "documentation":"

The request cannot be processed because it would exceed the maximum of 15 node groups (shards) in a single replication group.

", + "documentation":"

The request cannot be processed because it would exceed the maximum allowed number of node groups (shards) in a single replication group. The default maximum is 15.

", "error":{ "code":"NodeGroupsPerReplicationGroupQuotaExceeded", "httpStatusCode":400, @@ -2948,11 +3035,18 @@ }, "exception":true }, + "NodeGroupsToRemoveList":{ + "type":"list", + "member":{ + "shape":"String", + "locationName":"NodeGroupToRemove" + } + }, "NodeQuotaForClusterExceededFault":{ "type":"structure", "members":{ }, - "documentation":"

The request cannot be processed because it would exceed the allowed number of cache nodes in a single cache cluster.

", + "documentation":"

The request cannot be processed because it would exceed the allowed number of cache nodes in a single cluster.

", "error":{ "code":"NodeQuotaForClusterExceeded", "httpStatusCode":400, @@ -2977,7 +3071,7 @@ "members":{ "CacheClusterId":{ "shape":"String", - "documentation":"

A unique identifier for the source cache cluster.

" + "documentation":"

A unique identifier for the source cluster.

" }, "NodeGroupId":{ "shape":"String", @@ -2985,7 +3079,7 @@ }, "CacheNodeId":{ "shape":"String", - "documentation":"

The cache node identifier for the node in the source cache cluster.

" + "documentation":"

The cache node identifier for the node in the source cluster.

" }, "NodeGroupConfiguration":{ "shape":"NodeGroupConfiguration", @@ -2997,14 +3091,14 @@ }, "CacheNodeCreateTime":{ "shape":"TStamp", - "documentation":"

The date and time when the cache node was created in the source cache cluster.

" + "documentation":"

The date and time when the cache node was created in the source cluster.

" }, "SnapshotCreateTime":{ "shape":"TStamp", "documentation":"

The date and time when the source node's metadata and cache data set was obtained for the snapshot.

" } }, - "documentation":"

Represents an individual cache node in a snapshot of a cache cluster.

", + "documentation":"

Represents an individual cache node in a snapshot of a cluster.

", "wrapper":true }, "NodeSnapshotList":{ @@ -3114,22 +3208,22 @@ "members":{ "NumCacheNodes":{ "shape":"IntegerOptional", - "documentation":"

The new number of cache nodes for the cache cluster.

For clusters running Redis, this value must be 1. For clusters running Memcached, this value must be between 1 and 20.

" + "documentation":"

The new number of cache nodes for the cluster.

For clusters running Redis, this value must be 1. For clusters running Memcached, this value must be between 1 and 20.

" }, "CacheNodeIdsToRemove":{ "shape":"CacheNodeIdsList", - "documentation":"

A list of cache node IDs that are being removed (or will be removed) from the cache cluster. A node ID is a numeric identifier (0001, 0002, etc.).

" + "documentation":"

A list of cache node IDs that are being removed (or will be removed) from the cluster. A node ID is a numeric identifier (0001, 0002, etc.).

" }, "EngineVersion":{ "shape":"String", - "documentation":"

The new cache engine version that the cache cluster runs.

" + "documentation":"

The new cache engine version that the cluster runs.

" }, "CacheNodeType":{ "shape":"String", - "documentation":"

The cache node type that this cache cluster or replication group is scaled to.

" + "documentation":"

The cache node type that this cluster or replication group is scaled to.

" } }, - "documentation":"

A group of settings that are applied to the cache cluster in the future, or that are currently being applied.

" + "documentation":"

A group of settings that are applied to the cluster in the future, or that are currently being applied.

" }, "PreferredAvailabilityZoneList":{ "type":"list", @@ -3172,11 +3266,11 @@ "members":{ "CacheClusterId":{ "shape":"String", - "documentation":"

The cache cluster identifier. This parameter is stored as a lowercase string.

" + "documentation":"

The cluster identifier. This parameter is stored as a lowercase string.

" }, "CacheNodeIdsToReboot":{ "shape":"CacheNodeIdsList", - "documentation":"

A list of cache node IDs to reboot. A node ID is a numeric identifier (0001, 0002, etc.). To reboot an entire cache cluster, specify all of the cache node IDs.

" + "documentation":"

A list of cache node IDs to reboot. A node ID is a numeric identifier (0001, 0002, etc.). To reboot an entire cluster, specify all of the cache node IDs.

" } }, "documentation":"

Represents the input of a RebootCacheCluster operation.

" @@ -3236,7 +3330,7 @@ }, "Description":{ "shape":"String", - "documentation":"

The description of the replication group.

" + "documentation":"

The user-supplied description of the replication group.

" }, "Status":{ "shape":"String", @@ -3248,31 +3342,31 @@ }, "MemberClusters":{ "shape":"ClusterIdList", - "documentation":"

The names of all the cache clusters that are part of this replication group.

" + "documentation":"

The identifiers of all the nodes that are part of this replication group.

" }, "NodeGroups":{ "shape":"NodeGroupList", - "documentation":"

A single element list with information about the nodes in the replication group.

" + "documentation":"

A list of node groups in this replication group. For Redis (cluster mode disabled) replication groups, this is a single-element list. For Redis (cluster mode enabled) replication groups, the list contains an entry for each node group (shard).

" }, "SnapshottingClusterId":{ "shape":"String", - "documentation":"

The cache cluster ID that is used as the daily snapshot source for the replication group.

" + "documentation":"

The cluster ID that is used as the daily snapshot source for the replication group.

" }, "AutomaticFailover":{ "shape":"AutomaticFailoverStatus", - "documentation":"

Indicates the status of Multi-AZ for this replication group.

ElastiCache Multi-AZ replication groups are not supported on:

" + "documentation":"

Indicates the status of Multi-AZ with automatic failover for this Redis replication group.

Amazon ElastiCache for Redis does not support Multi-AZ with automatic failover on:

" }, "ConfigurationEndpoint":{ "shape":"Endpoint", - "documentation":"

The configuration endpoint for this replicaiton group. Use the configuration endpoint to connect to this replication group.

" + "documentation":"

The configuration endpoint for this replication group. Use the configuration endpoint to connect to this replication group.

" }, "SnapshotRetentionLimit":{ "shape":"IntegerOptional", - "documentation":"

The number of days for which ElastiCache retains automatic cache cluster snapshots before deleting them. For example, if you set SnapshotRetentionLimit to 5, a snapshot that was taken today is retained for 5 days before being deleted.

If the value of SnapshotRetentionLimit is set to zero (0), backups are turned off.

" + "documentation":"

The number of days for which ElastiCache retains automatic cluster snapshots before deleting them. For example, if you set SnapshotRetentionLimit to 5, a snapshot that was taken today is retained for 5 days before being deleted.

If the value of SnapshotRetentionLimit is set to zero (0), backups are turned off.

" }, "SnapshotWindow":{ "shape":"String", - "documentation":"

The daily time range (in UTC) during which ElastiCache begins taking a daily snapshot of your node group (shard).

Example: 05:00-09:00

If you do not specify this parameter, ElastiCache automatically chooses an appropriate time range.

Note: This parameter is only valid if the Engine parameter is redis.

" + "documentation":"

The daily time range (in UTC) during which ElastiCache begins taking a daily snapshot of your node group (shard).

Example: 05:00-09:00

If you do not specify this parameter, ElastiCache automatically chooses an appropriate time range.

This parameter is only valid if the Engine parameter is redis.

" }, "ClusterEnabled":{ "shape":"BooleanOptional", @@ -3281,6 +3375,18 @@ "CacheNodeType":{ "shape":"String", "documentation":"

The name of the compute and memory capacity node type for each node in the replication group.

" + }, + "AuthTokenEnabled":{ + "shape":"BooleanOptional", + "documentation":"

A flag that enables using an AuthToken (password) when issuing Redis commands.

Default: false

" + }, + "TransitEncryptionEnabled":{ + "shape":"BooleanOptional", + "documentation":"

A flag that enables in-transit encryption when set to true.

You cannot modify the value of TransitEncryptionEnabled after the cluster is created. To enable in-transit encryption on a cluster, you must set TransitEncryptionEnabled to true when you create a cluster.

Default: false

" + }, + "AtRestEncryptionEnabled":{ + "shape":"BooleanOptional", + "documentation":"

A flag that enables encryption at-rest when set to true.

You cannot modify the value of AtRestEncryptionEnabled after the cluster is created. To enable encryption at-rest on a cluster, you must set AtRestEncryptionEnabled to true when you create a cluster.

Default: false

" } }, "documentation":"

Contains all of the attributes of a specific Redis replication group.

", @@ -3340,7 +3446,11 @@ }, "AutomaticFailoverStatus":{ "shape":"PendingAutomaticFailoverStatus", - "documentation":"

Indicates the status of Multi-AZ for this Redis replication group.

ElastiCache Multi-AZ replication groups are not supported on:

" + "documentation":"

Indicates the status of Multi-AZ with automatic failover for this Redis replication group.

Amazon ElastiCache for Redis does not support Multi-AZ with automatic failover on:

" + }, + "Resharding":{ + "shape":"ReshardingStatus", + "documentation":"

The status of an online resharding operation.

" } }, "documentation":"

The settings to be applied to the Redis replication group, either immediately or during the next maintenance window.

" @@ -3358,7 +3468,7 @@ }, "CacheNodeType":{ "shape":"String", - "documentation":"

The cache node type for the reserved cache nodes.

Valid node types are as follows:

Notes:

For a complete listing of node types and specifications, see Amazon ElastiCache Product Features and Details and either Cache Node Type-Specific Parameters for Memcached or Cache Node Type-Specific Parameters for Redis.

" + "documentation":"

The cache node type for the reserved cache nodes.

The following node types are supported by ElastiCache. Generally speaking, the current generation types provide more memory and computational power at lower cost when compared to their equivalent previous generation counterparts.

Notes:

For a complete listing of node types and specifications, see Amazon ElastiCache Product Features and Details and either Cache Node Type-Specific Parameters for Memcached or Cache Node Type-Specific Parameters for Redis.

" }, "StartTime":{ "shape":"TStamp", @@ -3466,7 +3576,7 @@ }, "CacheNodeType":{ "shape":"String", - "documentation":"

The cache node type for the reserved cache node.

Valid node types are as follows:

Notes:

For a complete listing of node types and specifications, see Amazon ElastiCache Product Features and Details and either Cache Node Type-Specific Parameters for Memcached or Cache Node Type-Specific Parameters for Redis.

" + "documentation":"

The cache node type for the reserved cache node.

The following node types are supported by ElastiCache. Generally speaking, the current generation types provide more memory and computational power at lower cost when compared to their equivalent previous generation counterparts.

Notes:

For a complete listing of node types and specifications, see Amazon ElastiCache Product Features and Details and either Cache Node Type-Specific Parameters for Memcached or Cache Node Type-Specific Parameters for Redis.

" }, "Duration":{ "shape":"Integer", @@ -3548,6 +3658,33 @@ }, "documentation":"

Represents the input of a ResetCacheParameterGroup operation.

" }, + "ReshardingConfiguration":{ + "type":"structure", + "members":{ + "PreferredAvailabilityZones":{ + "shape":"AvailabilityZonesList", + "documentation":"

A list of preferred availability zones for the nodes in this cluster.

" + } + }, + "documentation":"

A list of PreferredAvailabilityZones objects that specifies the configuration of a node group in the resharded cluster.

" + }, + "ReshardingConfigurationList":{ + "type":"list", + "member":{ + "shape":"ReshardingConfiguration", + "locationName":"ReshardingConfiguration" + } + }, + "ReshardingStatus":{ + "type":"structure", + "members":{ + "SlotMigration":{ + "shape":"SlotMigration", + "documentation":"

Represents the progress of an online resharding operation.

" + } + }, + "documentation":"

The status of an online resharding operation.

" + }, "RevokeCacheSecurityGroupIngressMessage":{ "type":"structure", "required":[ @@ -3593,7 +3730,7 @@ }, "Status":{ "shape":"String", - "documentation":"

The status of the cache security group membership. The status changes whenever a cache security group is modified, or when the cache security groups assigned to a cache cluster are modified.

" + "documentation":"

The status of the cache security group membership. The status changes whenever a cache security group is modified, or when the cache security groups assigned to a cluster are modified.

" } }, "documentation":"

Represents a single cache security group and its status.

" @@ -3602,6 +3739,16 @@ "type":"list", "member":{"shape":"SecurityGroupMembership"} }, + "SlotMigration":{ + "type":"structure", + "members":{ + "ProgressPercentage":{ + "shape":"Double", + "documentation":"

The percentage of the slot migration that is complete.

" + } + }, + "documentation":"

Represents the progress of an online resharding operation.
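As a hedged sketch of how this resharding progress might be read back through the generated client: the accessor chain below assumes the standard camel-case getters derived from the member names in this model (PendingModifiedValues, Resharding, SlotMigration, ProgressPercentage), and the replication group ID is a placeholder.

    import software.amazon.awssdk.services.elasticache.ElastiCacheClient;
    import software.amazon.awssdk.services.elasticache.model.DescribeReplicationGroupsRequest;
    import software.amazon.awssdk.services.elasticache.model.ReplicationGroup;

    public class ReshardProgressExample {
        public static void main(String[] args) {
            ElastiCacheClient elasticache = ElastiCacheClient.create();

            ReplicationGroup group = elasticache.describeReplicationGroups(
                    DescribeReplicationGroupsRequest.builder()
                        .replicationGroupId("my-redis-cluster")   // placeholder replication group ID
                        .build())
                .replicationGroups().get(0);

            // Only present while an online resharding operation is in progress.
            if (group.pendingModifiedValues() != null
                    && group.pendingModifiedValues().resharding() != null
                    && group.pendingModifiedValues().resharding().slotMigration() != null) {
                double pct = group.pendingModifiedValues().resharding().slotMigration().progressPercentage();
                System.out.printf("Slot migration %.1f%% complete%n", pct);
            } else {
                System.out.println("No resharding operation in progress.");
            }
        }
    }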

" + }, "Snapshot":{ "type":"structure", "members":{ @@ -3619,7 +3766,7 @@ }, "CacheClusterId":{ "shape":"String", - "documentation":"

The user-supplied identifier of the source cache cluster.

" + "documentation":"

The user-supplied identifier of the source cluster.

" }, "SnapshotStatus":{ "shape":"String", @@ -3631,27 +3778,27 @@ }, "CacheNodeType":{ "shape":"String", - "documentation":"

The name of the compute and memory capacity node type for the source cache cluster.

Valid node types are as follows:

Notes:

For a complete listing of node types and specifications, see Amazon ElastiCache Product Features and Details and either Cache Node Type-Specific Parameters for Memcached or Cache Node Type-Specific Parameters for Redis.

" + "documentation":"

The name of the compute and memory capacity node type for the source cluster.

The following node types are supported by ElastiCache. Generally speaking, the current generation types provide more memory and computational power at lower cost when compared to their equivalent previous generation counterparts.

Notes:

For a complete listing of node types and specifications, see Amazon ElastiCache Product Features and Details and either Cache Node Type-Specific Parameters for Memcached or Cache Node Type-Specific Parameters for Redis.

" }, "Engine":{ "shape":"String", - "documentation":"

The name of the cache engine (memcached or redis) used by the source cache cluster.

" + "documentation":"

The name of the cache engine (memcached or redis) used by the source cluster.

" }, "EngineVersion":{ "shape":"String", - "documentation":"

The version of the cache engine version that is used by the source cache cluster.

" + "documentation":"

The version of the cache engine that is used by the source cluster.

" }, "NumCacheNodes":{ "shape":"IntegerOptional", - "documentation":"

The number of cache nodes in the source cache cluster.

For clusters running Redis, this value must be 1. For clusters running Memcached, this value must be between 1 and 20.

" + "documentation":"

The number of cache nodes in the source cluster.

For clusters running Redis, this value must be 1. For clusters running Memcached, this value must be between 1 and 20.

" }, "PreferredAvailabilityZone":{ "shape":"String", - "documentation":"

The name of the Availability Zone in which the source cache cluster is located.

" + "documentation":"

The name of the Availability Zone in which the source cluster is located.

" }, "CacheClusterCreateTime":{ "shape":"TStamp", - "documentation":"

The date and time when the source cache cluster was created.

" + "documentation":"

The date and time when the source cluster was created.

" }, "PreferredMaintenanceWindow":{ "shape":"String", @@ -3659,23 +3806,23 @@ }, "TopicArn":{ "shape":"String", - "documentation":"

The Amazon Resource Name (ARN) for the topic used by the source cache cluster for publishing notifications.

" + "documentation":"

The Amazon Resource Name (ARN) for the topic used by the source cluster for publishing notifications.

" }, "Port":{ "shape":"IntegerOptional", - "documentation":"

The port number used by each cache nodes in the source cache cluster.

" + "documentation":"

The port number used by each cache node in the source cluster.

" }, "CacheParameterGroupName":{ "shape":"String", - "documentation":"

The cache parameter group that is associated with the source cache cluster.

" + "documentation":"

The cache parameter group that is associated with the source cluster.

" }, "CacheSubnetGroupName":{ "shape":"String", - "documentation":"

The name of the cache subnet group associated with the source cache cluster.

" + "documentation":"

The name of the cache subnet group associated with the source cluster.

" }, "VpcId":{ "shape":"String", - "documentation":"

The Amazon Virtual Private Cloud identifier (VPC ID) of the cache subnet group for the source cache cluster.

" + "documentation":"

The Amazon Virtual Private Cloud identifier (VPC ID) of the cache subnet group for the source cluster.

" }, "AutoMinorVersionUpgrade":{ "shape":"Boolean", @@ -3683,11 +3830,11 @@ }, "SnapshotRetentionLimit":{ "shape":"IntegerOptional", - "documentation":"

For an automatic snapshot, the number of days for which ElastiCache retains the snapshot before deleting it.

For manual snapshots, this field reflects the SnapshotRetentionLimit for the source cache cluster when the snapshot was created. This field is otherwise ignored: Manual snapshots do not expire, and can only be deleted using the DeleteSnapshot operation.

Important If the value of SnapshotRetentionLimit is set to zero (0), backups are turned off.

" + "documentation":"

For an automatic snapshot, the number of days for which ElastiCache retains the snapshot before deleting it.

For manual snapshots, this field reflects the SnapshotRetentionLimit for the source cluster when the snapshot was created. This field is otherwise ignored: Manual snapshots do not expire, and can only be deleted using the DeleteSnapshot operation.

Important: If the value of SnapshotRetentionLimit is set to zero (0), backups are turned off.

" }, "SnapshotWindow":{ "shape":"String", - "documentation":"

The daily time range during which ElastiCache takes daily snapshots of the source cache cluster.

" + "documentation":"

The daily time range during which ElastiCache takes daily snapshots of the source cluster.

" }, "NumNodeGroups":{ "shape":"IntegerOptional", @@ -3695,14 +3842,14 @@ }, "AutomaticFailover":{ "shape":"AutomaticFailoverStatus", - "documentation":"

Indicates the status of Multi-AZ for the source replication group.

ElastiCache Multi-AZ replication groups are not supported on:

" + "documentation":"

Indicates the status of Multi-AZ with automatic failover for the source Redis replication group.

Amazon ElastiCache for Redis does not support Multi-AZ with automatic failover on:

" }, "NodeSnapshots":{ "shape":"NodeSnapshotList", - "documentation":"

A list of the cache nodes in the source cache cluster.

" + "documentation":"

A list of the cache nodes in the source cluster.

" } }, - "documentation":"

Represents a copy of an entire Redis cache cluster as of the time when the snapshot was taken.

", + "documentation":"

Represents a copy of an entire Redis cluster as of the time when the snapshot was taken.

", "wrapper":true }, "SnapshotAlreadyExistsFault":{ @@ -3728,7 +3875,7 @@ "type":"structure", "members":{ }, - "documentation":"

You attempted one of the following operations:

Neither of these are supported by ElastiCache.

", + "documentation":"

You attempted one of the following operations:

Neither of these are supported by ElastiCache.

", "error":{ "code":"SnapshotFeatureNotSupportedFault", "httpStatusCode":400, @@ -3790,7 +3937,7 @@ "documentation":"

The Availability Zone associated with the subnet.

" } }, - "documentation":"

Represents the subnet associated with a cache cluster. This parameter refers to subnets defined in Amazon Virtual Private Cloud (Amazon VPC) and used with ElastiCache.

" + "documentation":"

Represents the subnet associated with a cluster. This parameter refers to subnets defined in Amazon Virtual Private Cloud (Amazon VPC) and used with ElastiCache.

" }, "SubnetIdentifierList":{ "type":"list", diff --git a/services/elasticbeanstalk/src/main/resources/codegen-resources/service-2.json b/services/elasticbeanstalk/src/main/resources/codegen-resources/service-2.json index f32feb44fa63..7924ea138cc5 100644 --- a/services/elasticbeanstalk/src/main/resources/codegen-resources/service-2.json +++ b/services/elasticbeanstalk/src/main/resources/codegen-resources/service-2.json @@ -455,6 +455,24 @@ ], "documentation":"

Lists the available platforms.

" }, + "ListTagsForResource":{ + "name":"ListTagsForResource", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"ListTagsForResourceMessage"}, + "output":{ + "shape":"ResourceTagsDescriptionMessage", + "resultWrapper":"ListTagsForResourceResult" + }, + "errors":[ + {"shape":"InsufficientPrivilegesException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"ResourceTypeNotSupportedException"} + ], + "documentation":"

Returns the tags applied to an AWS Elastic Beanstalk resource. The response contains a list of tag key-value pairs.

Currently, Elastic Beanstalk only supports tagging Elastic Beanstalk environments.

" + }, "RebuildEnvironment":{ "name":"RebuildEnvironment", "http":{ @@ -599,6 +617,22 @@ ], "documentation":"

Updates the environment description, deploys a new application version, updates the configuration settings to an entirely new configuration template, or updates select configuration option values in the running environment.

Attempting to update both the release and configuration is not allowed and AWS Elastic Beanstalk returns an InvalidParameterCombination error.

When updating the configuration settings to a new template or individual settings, a draft configuration is created and DescribeConfigurationSettings for this environment returns two setting descriptions with different DeploymentStatus values.

" }, + "UpdateTagsForResource":{ + "name":"UpdateTagsForResource", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"UpdateTagsForResourceMessage"}, + "errors":[ + {"shape":"InsufficientPrivilegesException"}, + {"shape":"OperationInProgressException"}, + {"shape":"TooManyTagsException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"ResourceTypeNotSupportedException"} + ], + "documentation":"

Updates the list of tags applied to an AWS Elastic Beanstalk resource. Two lists can be passed: TagsToAdd for tags to add or update, and TagsToRemove for tags to remove.

Currently, Elastic Beanstalk only supports tagging of Elastic Beanstalk environments.
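As a rough sketch of using the two new tagging operations together from the generated Java client (names assume the SDK's usual codegen conventions; the environment ARN and tag values are placeholders):

    import software.amazon.awssdk.services.elasticbeanstalk.ElasticBeanstalkClient;
    import software.amazon.awssdk.services.elasticbeanstalk.model.ListTagsForResourceRequest;
    import software.amazon.awssdk.services.elasticbeanstalk.model.Tag;
    import software.amazon.awssdk.services.elasticbeanstalk.model.UpdateTagsForResourceRequest;

    public class EnvironmentTagsExample {
        public static void main(String[] args) {
            ElasticBeanstalkClient beanstalk = ElasticBeanstalkClient.create();
            String environmentArn =
                "arn:aws:elasticbeanstalk:us-east-1:123456789012:environment/my-app/my-env"; // placeholder ARN

            // Add or update one tag and remove another in a single call.
            beanstalk.updateTagsForResource(UpdateTagsForResourceRequest.builder()
                .resourceArn(environmentArn)
                .tagsToAdd(Tag.builder().key("stage").value("production").build())
                .tagsToRemove("obsolete-key")
                .build());

            // Read back the current tag set.
            beanstalk.listTagsForResource(ListTagsForResourceRequest.builder()
                    .resourceArn(environmentArn)
                    .build())
                .resourceTags()
                .forEach(tag -> System.out.println(tag.key() + "=" + tag.value()));
        }
    }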

" + }, "ValidateConfigurationSettings":{ "name":"ValidateConfigurationSettings", "http":{ @@ -840,7 +874,7 @@ }, "NextToken":{ "shape":"Token", - "documentation":"

For a paginated request, the token that you can pass in a subsequent request to get the next page.

" + "documentation":"

In a paginated request, the token that you can pass in a subsequent request to get the next response page.

" } }, "documentation":"

Result message wrapping a list of application version descriptions.

" @@ -1196,7 +1230,7 @@ }, "PlatformArn":{ "shape":"PlatformArn", - "documentation":"

The ARN of the custom platform.

" + "documentation":"

The ARN of the platform.

" }, "Options":{ "shape":"ConfigurationOptionDescriptionsList", @@ -1214,7 +1248,7 @@ }, "PlatformArn":{ "shape":"PlatformArn", - "documentation":"

The ARN of the custom platform.

" + "documentation":"

The ARN of the platform.

" }, "ApplicationName":{ "shape":"ApplicationName", @@ -1397,7 +1431,7 @@ }, "EnvironmentName":{ "shape":"EnvironmentName", - "documentation":"

A unique name for the deployment environment. Used in the application URL.

Constraint: Must be from 4 to 40 characters in length. The name can contain only letters, numbers, and hyphens. It cannot start or end with a hyphen. This name must be unique in your account. If the specified name already exists, AWS Elastic Beanstalk returns an InvalidParameterValue error.

Default: If the CNAME parameter is not specified, the environment name becomes part of the CNAME, and therefore part of the visible URL for your application.

" + "documentation":"

A unique name for the deployment environment. Used in the application URL.

Constraint: Must be from 4 to 40 characters in length. The name can contain only letters, numbers, and hyphens. It cannot start or end with a hyphen. This name must be unique within a region in your account. If the specified name already exists in the region, AWS Elastic Beanstalk returns an InvalidParameterValue error.

Default: If the CNAME parameter is not specified, the environment name becomes part of the CNAME, and therefore part of the visible URL for your application.

" }, "GroupName":{ "shape":"GroupName", @@ -1433,7 +1467,7 @@ }, "PlatformArn":{ "shape":"PlatformArn", - "documentation":"

The ARN of the custom platform.

" + "documentation":"

The ARN of the platform.

" }, "OptionSettings":{ "shape":"ConfigurationOptionSettingsList", @@ -1638,7 +1672,7 @@ }, "DeploymentTime":{ "shape":"DeploymentTimestamp", - "documentation":"

For in-progress deployments, the time that the deloyment started.

For completed deployments, the time that the deployment ended.

" + "documentation":"

For in-progress deployments, the time that the deployment started.

For completed deployments, the time that the deployment ended.

" } }, "documentation":"

Information about an application version deployment.

" @@ -1657,11 +1691,11 @@ }, "MaxRecords":{ "shape":"MaxRecords", - "documentation":"

Specify a maximum number of application versions to paginate in the request.

" + "documentation":"

For a paginated request, specify a maximum number of application versions to include in each response.

If no MaxRecords is specified, all available application versions are retrieved in a single response.

" }, "NextToken":{ "shape":"Token", - "documentation":"

Specify a next token to retrieve the next page in a paginated request.

" + "documentation":"

For a paginated request, specify a token from a previous response page to retrieve the next response page. All other parameter values must be identical to the ones specified in the initial request.

If no NextToken is specified, the first page is retrieved.

" } }, "documentation":"

Request to describe application versions.

" @@ -1885,6 +1919,14 @@ "IncludedDeletedBackTo":{ "shape":"IncludeDeletedBackTo", "documentation":"

If specified when IncludeDeleted is set to true, then environments deleted after this date are displayed.

" + }, + "MaxRecords":{ + "shape":"MaxRecords", + "documentation":"

For a paginated request, specify a maximum number of environments to include in each response.

If no MaxRecords is specified, all available environments are retrieved in a single response.

" + }, + "NextToken":{ + "shape":"Token", + "documentation":"

For a paginated request, specify a token from a previous response page to retrieve the next response page. All other parameter values must be identical to the ones specified in the initial request.

If no NextToken is specified, the first page is retrieved.

" } }, "documentation":"

Request to describe one or more environments.
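Because DescribeEnvironments now accepts MaxRecords and NextToken, a paginated listing might look roughly like the sketch below. Class and method names assume the SDK's standard codegen conventions, and the application name is a placeholder.

    import software.amazon.awssdk.services.elasticbeanstalk.ElasticBeanstalkClient;
    import software.amazon.awssdk.services.elasticbeanstalk.model.DescribeEnvironmentsRequest;
    import software.amazon.awssdk.services.elasticbeanstalk.model.DescribeEnvironmentsResponse;

    public class ListEnvironmentsExample {
        public static void main(String[] args) {
            ElasticBeanstalkClient beanstalk = ElasticBeanstalkClient.create();
            String nextToken = null;

            do {
                DescribeEnvironmentsResponse page = beanstalk.describeEnvironments(
                    DescribeEnvironmentsRequest.builder()
                        .applicationName("my-app")   // placeholder application name
                        .maxRecords(10)              // page size; omit to fetch everything in one response
                        .nextToken(nextToken)        // null on the first request
                        .build());

                page.environments().forEach(env ->
                    System.out.println(env.environmentName() + " -> " + env.environmentArn()));

                nextToken = page.nextToken();        // null once the last page has been returned
            } while (nextToken != null);
        }
    }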

" @@ -2018,6 +2060,7 @@ "exception":true }, "EndpointURL":{"type":"string"}, + "EnvironmentArn":{"type":"string"}, "EnvironmentDescription":{ "type":"structure", "members":{ @@ -2043,7 +2086,7 @@ }, "PlatformArn":{ "shape":"PlatformArn", - "documentation":"

The ARN of the custom platform.

" + "documentation":"

The ARN of the platform.

" }, "TemplateName":{ "shape":"ConfigurationTemplateName", @@ -2096,6 +2139,10 @@ "EnvironmentLinks":{ "shape":"EnvironmentLinks", "documentation":"

A list of links to other environments in the same group.

" + }, + "EnvironmentArn":{ + "shape":"EnvironmentArn", + "documentation":"

The environment's Amazon Resource Name (ARN), which can be used in other API requests that require an ARN.

" } }, "documentation":"

Describes the properties of an environment.

" @@ -2110,6 +2157,10 @@ "Environments":{ "shape":"EnvironmentDescriptionsList", "documentation":"

Returns an EnvironmentDescription list.

" + }, + "NextToken":{ + "shape":"Token", + "documentation":"

In a paginated request, the token that you can pass in a subsequent request to get the next response page.

" } }, "documentation":"

Result message containing a list of environment descriptions.

" @@ -2330,7 +2381,7 @@ }, "PlatformArn":{ "shape":"PlatformArn", - "documentation":"

The ARN of the custom platform.

" + "documentation":"

The ARN of the platform.

" }, "RequestId":{ "shape":"RequestId", @@ -2603,6 +2654,16 @@ } } }, + "ListTagsForResourceMessage":{ + "type":"structure", + "required":["ResourceArn"], + "members":{ + "ResourceArn":{ + "shape":"ResourceArn", + "documentation":"

The Amazon Resource Name (ARN) of the resource for which a tag list is requested.

Must be the ARN of an Elastic Beanstalk environment.

" + } + } + }, "Listener":{ "type":"structure", "members":{ @@ -3129,12 +3190,50 @@ "documentation":"

Request to retrieve logs from an environment and store them in your Elastic Beanstalk storage bucket.

" }, "RequestId":{"type":"string"}, + "ResourceArn":{"type":"string"}, "ResourceId":{"type":"string"}, "ResourceName":{ "type":"string", "max":256, "min":1 }, + "ResourceNotFoundException":{ + "type":"structure", + "members":{ + }, + "documentation":"

A resource doesn't exist for the specified Amazon Resource Name (ARN).

", + "error":{ + "code":"ResourceNotFoundException", + "httpStatusCode":400, + "senderFault":true + }, + "exception":true + }, + "ResourceTagsDescriptionMessage":{ + "type":"structure", + "members":{ + "ResourceArn":{ + "shape":"ResourceArn", + "documentation":"

The Amazon Resource Name (ARN) of the resource for which a tag list was requested.

" + }, + "ResourceTags":{ + "shape":"TagList", + "documentation":"

A list of tag key-value pairs.

" + } + } + }, + "ResourceTypeNotSupportedException":{ + "type":"structure", + "members":{ + }, + "documentation":"

The type of the specified Amazon Resource Name (ARN) isn't supported for this operation.

", + "error":{ + "code":"ResourceTypeNotSupportedException", + "httpStatusCode":400, + "senderFault":true + }, + "exception":true + }, "RestartAppServerMessage":{ "type":"structure", "members":{ @@ -3447,6 +3546,14 @@ "max":128, "min":1 }, + "TagKeyList":{ + "type":"list", + "member":{"shape":"TagKey"} + }, + "TagList":{ + "type":"list", + "member":{"shape":"Tag"} + }, "TagValue":{ "type":"string", "max":256, @@ -3551,6 +3658,18 @@ }, "exception":true }, + "TooManyTagsException":{ + "type":"structure", + "members":{ + }, + "documentation":"

The number of tags in the resource would exceed the number of tags that each resource can have.

To calculate this, the operation considers both the number of tags the resource already has and the tags this operation would add if it succeeded.

", + "error":{ + "code":"TooManyTagsException", + "httpStatusCode":400, + "senderFault":true + }, + "exception":true + }, "Trigger":{ "type":"structure", "members":{ @@ -3704,6 +3823,24 @@ }, "documentation":"

Request to update an environment.

" }, + "UpdateTagsForResourceMessage":{ + "type":"structure", + "required":["ResourceArn"], + "members":{ + "ResourceArn":{ + "shape":"ResourceArn", + "documentation":"

The Amazon Resource Name (ARN) of the resource to be updated.

Must be the ARN of an Elastic Beanstalk environment.

" + }, + "TagsToAdd":{ + "shape":"TagList", + "documentation":"

A list of tags to add or update.

If a key of an existing tag is added, the tag's value is updated.

" + }, + "TagsToRemove":{ + "shape":"TagKeyList", + "documentation":"

A list of tag keys to remove.

If a tag key doesn't exist, it is silently ignored.

" + } + } + }, "UserDefinedOption":{"type":"boolean"}, "ValidateConfigurationSettingsMessage":{ "type":"structure", diff --git a/services/elasticloadbalancingv2/src/main/resources/codegen-resources/service-2.json b/services/elasticloadbalancingv2/src/main/resources/codegen-resources/service-2.json index 402df0843906..f0e403896d99 100644 --- a/services/elasticloadbalancingv2/src/main/resources/codegen-resources/service-2.json +++ b/services/elasticloadbalancingv2/src/main/resources/codegen-resources/service-2.json @@ -6,11 +6,30 @@ "protocol":"query", "serviceAbbreviation":"Elastic Load Balancing v2", "serviceFullName":"Elastic Load Balancing", + "serviceId":"Elastic Load Balancing v2", "signatureVersion":"v4", "uid":"elasticloadbalancingv2-2015-12-01", "xmlNamespace":"http://elasticloadbalancing.amazonaws.com/doc/2015-12-01/" }, "operations":{ + "AddListenerCertificates":{ + "name":"AddListenerCertificates", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"AddListenerCertificatesInput"}, + "output":{ + "shape":"AddListenerCertificatesOutput", + "resultWrapper":"AddListenerCertificatesResult" + }, + "errors":[ + {"shape":"ListenerNotFoundException"}, + {"shape":"TooManyCertificatesException"}, + {"shape":"CertificateNotFoundException"} + ], + "documentation":"

Adds the specified certificate to the specified secure listener.

If the certificate was already added, the call is successful but the certificate is not added again.

To list the certificates for your listener, use DescribeListenerCertificates. To remove certificates from your listener, use RemoveListenerCertificates.
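A minimal sketch of attaching an additional certificate to an existing HTTPS listener through the generated Java client. It assumes the request carries the listener ARN plus a list of Certificate entries, in line with the shapes in this model; both ARNs are placeholders.

    import software.amazon.awssdk.services.elasticloadbalancingv2.ElasticLoadBalancingV2Client;
    import software.amazon.awssdk.services.elasticloadbalancingv2.model.AddListenerCertificatesRequest;
    import software.amazon.awssdk.services.elasticloadbalancingv2.model.Certificate;

    public class ListenerCertificatesExample {
        public static void main(String[] args) {
            ElasticLoadBalancingV2Client elb = ElasticLoadBalancingV2Client.create();

            // Placeholder ARNs -- substitute a real listener and ACM certificate.
            String listenerArn = "arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/my-lb/abc/def";
            String certificateArn = "arn:aws:acm:us-east-1:123456789012:certificate/11111111-2222-3333-4444-555555555555";

            elb.addListenerCertificates(AddListenerCertificatesRequest.builder()
                .listenerArn(listenerArn)
                .certificates(Certificate.builder().certificateArn(certificateArn).build())
                .build());
        }
    }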

" + }, "AddTags":{ "name":"AddTags", "http":{ @@ -28,7 +47,7 @@ {"shape":"LoadBalancerNotFoundException"}, {"shape":"TargetGroupNotFoundException"} ], - "documentation":"

Adds the specified tags to the specified resource. You can tag your Application Load Balancers and your target groups.

Each tag consists of a key and an optional value. If a resource already has a tag with the same key, AddTags updates its value.

To list the current tags for your resources, use DescribeTags. To remove tags from your resources, use RemoveTags.

" + "documentation":"

Adds the specified tags to the specified Elastic Load Balancing resource. You can tag your Application Load Balancers, Network Load Balancers, and your target groups.

Each tag consists of a key and an optional value. If a resource already has a tag with the same key, AddTags updates its value.

To list the current tags for your resources, use DescribeTags. To remove tags from your resources, use RemoveTags.

" }, "CreateListener":{ "name":"CreateListener", @@ -53,9 +72,10 @@ {"shape":"SSLPolicyNotFoundException"}, {"shape":"CertificateNotFoundException"}, {"shape":"UnsupportedProtocolException"}, - {"shape":"TooManyRegistrationsForTargetIdException"} + {"shape":"TooManyRegistrationsForTargetIdException"}, + {"shape":"TooManyTargetsException"} ], - "documentation":"

Creates a listener for the specified Application Load Balancer.

You can create up to 10 listeners per load balancer.

To update a listener, use ModifyListener. When you are finished with a listener, you can delete it using DeleteListener. If you are finished with both the listener and the load balancer, you can delete them both using DeleteLoadBalancer.

For more information, see Listeners for Your Application Load Balancers in the Application Load Balancers Guide.

" + "documentation":"

Creates a listener for the specified Application Load Balancer or Network Load Balancer.

To update a listener, use ModifyListener. When you are finished with a listener, you can delete it using DeleteListener. If you are finished with both the listener and the load balancer, you can delete them both using DeleteLoadBalancer.

This operation is idempotent, which means that it completes at most one time. If you attempt to create multiple listeners with the same settings, each call succeeds.

For more information, see Listeners for Your Application Load Balancers in the Application Load Balancers Guide and Listeners for Your Network Load Balancers in the Network Load Balancers Guide.

" }, "CreateLoadBalancer":{ "name":"CreateLoadBalancer", @@ -77,9 +97,12 @@ {"shape":"InvalidSecurityGroupException"}, {"shape":"InvalidSchemeException"}, {"shape":"TooManyTagsException"}, - {"shape":"DuplicateTagKeysException"} + {"shape":"DuplicateTagKeysException"}, + {"shape":"ResourceInUseException"}, + {"shape":"AllocationIdNotFoundException"}, + {"shape":"AvailabilityZoneNotSupportedException"} ], - "documentation":"

Creates an Application Load Balancer.

When you create a load balancer, you can specify security groups, subnets, IP address type, and tags. Otherwise, you could do so later using SetSecurityGroups, SetSubnets, SetIpAddressType, and AddTags.

To create listeners for your load balancer, use CreateListener. To describe your current load balancers, see DescribeLoadBalancers. When you are finished with a load balancer, you can delete it using DeleteLoadBalancer.

You can create up to 20 load balancers per region per account. You can request an increase for the number of load balancers for your account. For more information, see Limits for Your Application Load Balancer in the Application Load Balancers Guide.

For more information, see Application Load Balancers in the Application Load Balancers Guide.

" + "documentation":"

Creates an Application Load Balancer or a Network Load Balancer.

When you create a load balancer, you can specify security groups, subnets, IP address type, and tags. Otherwise, you could do so later using SetSecurityGroups, SetSubnets, SetIpAddressType, and AddTags.

To create listeners for your load balancer, use CreateListener. To describe your current load balancers, see DescribeLoadBalancers. When you are finished with a load balancer, you can delete it using DeleteLoadBalancer.

For limit information, see Limits for Your Application Load Balancer in the Application Load Balancers Guide and Limits for Your Network Load Balancer in the Network Load Balancers Guide.

This operation is idempotent, which means that it completes at most one time. If you attempt to create multiple load balancers with the same settings, each call succeeds.

For more information, see Application Load Balancers in the Application Load Balancers Guide and Network Load Balancers in the Network Load Balancers Guide.

" }, "CreateRule":{ "name":"CreateRule", @@ -97,12 +120,14 @@ {"shape":"TooManyTargetGroupsException"}, {"shape":"TooManyRulesException"}, {"shape":"TargetGroupAssociationLimitException"}, + {"shape":"IncompatibleProtocolsException"}, {"shape":"ListenerNotFoundException"}, {"shape":"TargetGroupNotFoundException"}, {"shape":"InvalidConfigurationRequestException"}, - {"shape":"TooManyRegistrationsForTargetIdException"} + {"shape":"TooManyRegistrationsForTargetIdException"}, + {"shape":"TooManyTargetsException"} ], - "documentation":"

Creates a rule for the specified listener.

Each rule can have one action and one condition. Rules are evaluated in priority order, from the lowest value to the highest value. When the condition for a rule is met, the specified action is taken. If no conditions are met, the default action for the default rule is taken. For more information, see Listener Rules in the Application Load Balancers Guide.

To view your current rules, use DescribeRules. To update a rule, use ModifyRule. To set the priorities of your rules, use SetRulePriorities. To delete a rule, use DeleteRule.

" + "documentation":"

Creates a rule for the specified listener. The listener must be associated with an Application Load Balancer.

Rules are evaluated in priority order, from the lowest value to the highest value. When the condition for a rule is met, the specified action is taken. If no conditions are met, the action for the default rule is taken. For more information, see Listener Rules in the Application Load Balancers Guide.

To view your current rules, use DescribeRules. To update a rule, use ModifyRule. To set the priorities of your rules, use SetRulePriorities. To delete a rule, use DeleteRule.

" }, "CreateTargetGroup":{ "name":"CreateTargetGroup", @@ -117,9 +142,10 @@ }, "errors":[ {"shape":"DuplicateTargetGroupNameException"}, - {"shape":"TooManyTargetGroupsException"} + {"shape":"TooManyTargetGroupsException"}, + {"shape":"InvalidConfigurationRequestException"} ], - "documentation":"

Creates a target group.

To register targets with the target group, use RegisterTargets. To update the health check settings for the target group, use ModifyTargetGroup. To monitor the health of targets in the target group, use DescribeTargetHealth.

To route traffic to the targets in a target group, specify the target group in an action using CreateListener or CreateRule.

To delete a target group, use DeleteTargetGroup.

For more information, see Target Groups for Your Application Load Balancers in the Application Load Balancers Guide.

" + "documentation":"

Creates a target group.

To register targets with the target group, use RegisterTargets. To update the health check settings for the target group, use ModifyTargetGroup. To monitor the health of targets in the target group, use DescribeTargetHealth.

To route traffic to the targets in a target group, specify the target group in an action using CreateListener or CreateRule.

To delete a target group, use DeleteTargetGroup.

This operation is idempotent, which means that it completes at most one time. If you attempt to create multiple target groups with the same settings, each call succeeds.

For more information, see Target Groups for Your Application Load Balancers in the Application Load Balancers Guide or Target Groups for Your Network Load Balancers in the Network Load Balancers Guide.

" }, "DeleteListener":{ "name":"DeleteListener", @@ -150,9 +176,10 @@ }, "errors":[ {"shape":"LoadBalancerNotFoundException"}, - {"shape":"OperationNotPermittedException"} + {"shape":"OperationNotPermittedException"}, + {"shape":"ResourceInUseException"} ], - "documentation":"

Deletes the specified Application Load Balancer and its attached listeners.

You can't delete a load balancer if deletion protection is enabled. If the load balancer does not exist or has already been deleted, the call succeeds.

Deleting a load balancer does not affect its registered targets. For example, your EC2 instances continue to run and are still registered to their target groups. If you no longer need these EC2 instances, you can stop or terminate them.

" + "documentation":"

Deletes the specified Application Load Balancer or Network Load Balancer and its attached listeners.

You can't delete a load balancer if deletion protection is enabled. If the load balancer does not exist or has already been deleted, the call succeeds.

Deleting a load balancer does not affect its registered targets. For example, your EC2 instances continue to run and are still registered to their target groups. If you no longer need these EC2 instances, you can stop or terminate them.

" }, "DeleteRule":{ "name":"DeleteRule", @@ -215,7 +242,23 @@ "shape":"DescribeAccountLimitsOutput", "resultWrapper":"DescribeAccountLimitsResult" }, - "documentation":"

Describes the current Elastic Load Balancing resource limits for your AWS account.

For more information, see Limits for Your Application Load Balancer in the Application Load Balancer Guide.

" + "documentation":"

Describes the current Elastic Load Balancing resource limits for your AWS account.

For more information, see Limits for Your Application Load Balancers in the Application Load Balancer Guide or Limits for Your Network Load Balancers in the Network Load Balancers Guide.

" + }, + "DescribeListenerCertificates":{ + "name":"DescribeListenerCertificates", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"DescribeListenerCertificatesInput"}, + "output":{ + "shape":"DescribeListenerCertificatesOutput", + "resultWrapper":"DescribeListenerCertificatesResult" + }, + "errors":[ + {"shape":"ListenerNotFoundException"} + ], + "documentation":"

Describes the certificates for the specified secure listener.

" }, "DescribeListeners":{ "name":"DescribeListeners", @@ -232,7 +275,7 @@ {"shape":"ListenerNotFoundException"}, {"shape":"LoadBalancerNotFoundException"} ], - "documentation":"

Describes the specified listeners or the listeners for the specified Application Load Balancer. You must specify either a load balancer or one or more listeners.

" + "documentation":"

Describes the specified listeners or the listeners for the specified Application Load Balancer or Network Load Balancer. You must specify either a load balancer or one or more listeners.

" }, "DescribeLoadBalancerAttributes":{ "name":"DescribeLoadBalancerAttributes", @@ -248,7 +291,7 @@ "errors":[ {"shape":"LoadBalancerNotFoundException"} ], - "documentation":"

Describes the attributes for the specified Application Load Balancer.

" + "documentation":"

Describes the attributes for the specified Application Load Balancer or Network Load Balancer.

" }, "DescribeLoadBalancers":{ "name":"DescribeLoadBalancers", @@ -264,7 +307,7 @@ "errors":[ {"shape":"LoadBalancerNotFoundException"} ], - "documentation":"

Describes the specified Application Load Balancers or all of your Application Load Balancers.

To describe the listeners for a load balancer, use DescribeListeners. To describe the attributes for a load balancer, use DescribeLoadBalancerAttributes.

" + "documentation":"

Describes the specified load balancers or all of your load balancers.

To describe the listeners for a load balancer, use DescribeListeners. To describe the attributes for a load balancer, use DescribeLoadBalancerAttributes.

" }, "DescribeRules":{ "name":"DescribeRules", @@ -316,7 +359,7 @@ {"shape":"ListenerNotFoundException"}, {"shape":"RuleNotFoundException"} ], - "documentation":"

Describes the tags for the specified resources. You can describe the tags for one or more Application Load Balancers and target groups.

" + "documentation":"

Describes the tags for the specified resources. You can describe the tags for one or more Application Load Balancers, Network Load Balancers, and target groups.

" }, "DescribeTargetGroupAttributes":{ "name":"DescribeTargetGroupAttributes", @@ -392,7 +435,8 @@ {"shape":"CertificateNotFoundException"}, {"shape":"InvalidConfigurationRequestException"}, {"shape":"UnsupportedProtocolException"}, - {"shape":"TooManyRegistrationsForTargetIdException"} + {"shape":"TooManyRegistrationsForTargetIdException"}, + {"shape":"TooManyTargetsException"} ], "documentation":"

Modifies the specified properties of the specified listener.

Any properties that you do not specify retain their current values. However, changing the protocol from HTTPS to HTTP removes the security policy and SSL certificate properties. If you change the protocol from HTTP to HTTPS, you must add the security policy and server certificate.

" }, @@ -411,7 +455,7 @@ {"shape":"LoadBalancerNotFoundException"}, {"shape":"InvalidConfigurationRequestException"} ], - "documentation":"

Modifies the specified attributes of the specified Application Load Balancer.

If any of the specified attributes can't be modified as requested, the call fails. Any existing attributes that you do not modify retain their current values.

" + "documentation":"

Modifies the specified attributes of the specified Application Load Balancer or Network Load Balancer.

If any of the specified attributes can't be modified as requested, the call fails. Any existing attributes that you do not modify retain their current values.

" }, "ModifyRule":{ "name":"ModifyRule", @@ -426,6 +470,7 @@ }, "errors":[ {"shape":"TargetGroupAssociationLimitException"}, + {"shape":"IncompatibleProtocolsException"}, {"shape":"RuleNotFoundException"}, {"shape":"OperationNotPermittedException"}, {"shape":"TooManyRegistrationsForTargetIdException"}, @@ -446,7 +491,8 @@ "resultWrapper":"ModifyTargetGroupResult" }, "errors":[ - {"shape":"TargetGroupNotFoundException"} + {"shape":"TargetGroupNotFoundException"}, + {"shape":"InvalidConfigurationRequestException"} ], "documentation":"

Modifies the health checks used when evaluating the health state of the targets in the specified target group.

To monitor the health of the targets, use DescribeTargetHealth.

" }, @@ -462,7 +508,8 @@ "resultWrapper":"ModifyTargetGroupAttributesResult" }, "errors":[ - {"shape":"TargetGroupNotFoundException"} + {"shape":"TargetGroupNotFoundException"}, + {"shape":"InvalidConfigurationRequestException"} ], "documentation":"

Modifies the specified attributes of the specified target group.

" }, @@ -483,7 +530,24 @@ {"shape":"InvalidTargetException"}, {"shape":"TooManyRegistrationsForTargetIdException"} ], - "documentation":"

Registers the specified targets with the specified target group.

By default, the load balancer routes requests to registered targets using the protocol and port number for the target group. Alternatively, you can override the port for a target when you register it.

The target must be in the virtual private cloud (VPC) that you specified for the target group. If the target is an EC2 instance, it must be in the running state when you register it.

To remove a target from a target group, use DeregisterTargets.

" + "documentation":"

Registers the specified targets with the specified target group.

You can register targets by instance ID or by IP address. If the target is an EC2 instance, it must be in the running state when you register it.

By default, the load balancer routes requests to registered targets using the protocol and port for the target group. Alternatively, you can override the port for a target when you register it. You can register each EC2 instance or IP address with the same target group multiple times using different ports.

With a Network Load Balancer, you cannot register instances by instance ID if they have the following instance types: C1, CC1, CC2, CG1, CG2, CR1, CS1, G1, G2, HI1, HS1, M1, M2, M3, and T1. You can register instances of these types by IP address.

To remove a target from a target group, use DeregisterTargets.
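To make the registration options above concrete, the following hedged sketch (not part of the model) assumes an AWS SDK for Java 2.x-style client generated from this model; ElasticLoadBalancingV2Client, RegisterTargetsRequest, and TargetDescription follow the operation and shape names, and the ARN, instance ID, and IP address are placeholders.

```java
// Hedged sketch: register one instance target with a port override and one IP
// target (from a subnet of the target group's VPC) with the same target group.
import software.amazon.awssdk.services.elasticloadbalancingv2.ElasticLoadBalancingV2Client;
import software.amazon.awssdk.services.elasticloadbalancingv2.model.RegisterTargetsRequest;
import software.amazon.awssdk.services.elasticloadbalancingv2.model.TargetDescription;

public class RegisterTargetsExample {
    public static void main(String[] args) {
        ElasticLoadBalancingV2Client elbv2 = ElasticLoadBalancingV2Client.create();

        elbv2.registerTargets(RegisterTargetsRequest.builder()
                .targetGroupArn("arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/my-targets/0000") // placeholder
                .targets(
                        TargetDescription.builder().id("i-0123456789abcdef0").port(8080).build(), // instance ID, port override
                        TargetDescription.builder().id("10.0.1.15").build())                      // IP address, target group port
                .build());
    }
}
```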

" + }, + "RemoveListenerCertificates":{ + "name":"RemoveListenerCertificates", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"RemoveListenerCertificatesInput"}, + "output":{ + "shape":"RemoveListenerCertificatesOutput", + "resultWrapper":"RemoveListenerCertificatesResult" + }, + "errors":[ + {"shape":"ListenerNotFoundException"}, + {"shape":"OperationNotPermittedException"} + ], + "documentation":"

Removes the specified certificate from the specified secure listener.

You can't remove the default certificate for a listener. To replace the default certificate, call ModifyListener.

To list the certificates for your listener, use DescribeListenerCertificates.
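The add/remove pair introduced in this revision can be sketched the same way; this hedged example assumes generated classes named AddListenerCertificatesRequest and RemoveListenerCertificatesRequest (from the operation names above) and uses placeholder ARNs.

```java
// Hedged sketch: add an extra certificate to an HTTPS listener, then remove it.
// Each call takes the listener ARN plus exactly one certificate, per the docs above.
import software.amazon.awssdk.services.elasticloadbalancingv2.ElasticLoadBalancingV2Client;
import software.amazon.awssdk.services.elasticloadbalancingv2.model.AddListenerCertificatesRequest;
import software.amazon.awssdk.services.elasticloadbalancingv2.model.Certificate;
import software.amazon.awssdk.services.elasticloadbalancingv2.model.RemoveListenerCertificatesRequest;

public class ListenerCertificatesExample {
    public static void main(String[] args) {
        ElasticLoadBalancingV2Client elbv2 = ElasticLoadBalancingV2Client.create();
        String listenerArn = "arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/my-alb/0000/1111"; // placeholder
        Certificate extraCert = Certificate.builder()
                .certificateArn("arn:aws:acm:us-east-1:123456789012:certificate/2222-3333") // placeholder
                .build();

        elbv2.addListenerCertificates(AddListenerCertificatesRequest.builder()
                .listenerArn(listenerArn)
                .certificates(extraCert)
                .build());

        elbv2.removeListenerCertificates(RemoveListenerCertificatesRequest.builder()
                .listenerArn(listenerArn)
                .certificates(extraCert)
                .build());
    }
}
```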

" }, "RemoveTags":{ "name":"RemoveTags", @@ -503,7 +567,7 @@ {"shape":"RuleNotFoundException"}, {"shape":"TooManyTagsException"} ], - "documentation":"

Removes the specified tags from the specified resource.

To list the current tags for your resources, use DescribeTags.

" + "documentation":"

Removes the specified tags from the specified Elastic Load Balancing resource.

To list the current tags for your resources, use DescribeTags.

" }, "SetIpAddressType":{ "name":"SetIpAddressType", @@ -521,7 +585,7 @@ {"shape":"InvalidConfigurationRequestException"}, {"shape":"InvalidSubnetException"} ], - "documentation":"

Sets the type of IP addresses used by the subnets of the specified Application Load Balancer.

" + "documentation":"

Sets the type of IP addresses used by the subnets of the specified Application Load Balancer or Network Load Balancer.

Note that Network Load Balancers must use ipv4.

" }, "SetRulePriorities":{ "name":"SetRulePriorities", @@ -557,7 +621,7 @@ {"shape":"InvalidConfigurationRequestException"}, {"shape":"InvalidSecurityGroupException"} ], - "documentation":"

Associates the specified security groups with the specified load balancer. The specified security groups override the previously associated security groups.

" + "documentation":"

Associates the specified security groups with the specified Application Load Balancer. The specified security groups override the previously associated security groups.

Note that you can't specify a security group for a Network Load Balancer.

" }, "SetSubnets":{ "name":"SetSubnets", @@ -574,9 +638,11 @@ {"shape":"LoadBalancerNotFoundException"}, {"shape":"InvalidConfigurationRequestException"}, {"shape":"SubnetNotFoundException"}, - {"shape":"InvalidSubnetException"} + {"shape":"InvalidSubnetException"}, + {"shape":"AllocationIdNotFoundException"}, + {"shape":"AvailabilityZoneNotSupportedException"} ], - "documentation":"

Enables the Availability Zone for the specified subnets for the specified load balancer. The specified subnets replace the previously enabled subnets.

" + "documentation":"

Enables the Availability Zone for the specified subnets for the specified Application Load Balancer. The specified subnets replace the previously enabled subnets.

Note that you can't change the subnets for a Network Load Balancer.

" } }, "shapes":{ @@ -606,6 +672,32 @@ "type":"list", "member":{"shape":"Action"} }, + "AddListenerCertificatesInput":{ + "type":"structure", + "required":[ + "ListenerArn", + "Certificates" + ], + "members":{ + "ListenerArn":{ + "shape":"ListenerArn", + "documentation":"

The Amazon Resource Name (ARN) of the listener.

" + }, + "Certificates":{ + "shape":"CertificateList", + "documentation":"

The certificate to add. You can specify one certificate per call.

" + } + } + }, + "AddListenerCertificatesOutput":{ + "type":"structure", + "members":{ + "Certificates":{ + "shape":"CertificateList", + "documentation":"

Information about the certificates.

" + } + } + }, "AddTagsInput":{ "type":"structure", "required":[ @@ -628,6 +720,19 @@ "members":{ } }, + "AllocationId":{"type":"string"}, + "AllocationIdNotFoundException":{ + "type":"structure", + "members":{ + }, + "documentation":"

The specified allocation ID does not exist.

", + "error":{ + "code":"AllocationIdNotFound", + "httpStatusCode":400, + "senderFault":true + }, + "exception":true + }, "AvailabilityZone":{ "type":"structure", "members":{ @@ -638,10 +743,26 @@ "SubnetId":{ "shape":"SubnetId", "documentation":"

The ID of the subnet.

" + }, + "LoadBalancerAddresses":{ + "shape":"LoadBalancerAddresses", + "documentation":"

[Network Load Balancers] The static IP address.

" } }, "documentation":"

Information about an Availability Zone.

" }, + "AvailabilityZoneNotSupportedException":{ + "type":"structure", + "members":{ + }, + "documentation":"

The specified Availability Zone is not supported.

", + "error":{ + "code":"AvailabilityZoneNotSupported", + "httpStatusCode":400, + "senderFault":true + }, + "exception":true + }, "AvailabilityZones":{ "type":"list", "member":{"shape":"AvailabilityZone"} @@ -653,9 +774,13 @@ "CertificateArn":{ "shape":"CertificateArn", "documentation":"

The Amazon Resource Name (ARN) of the certificate.

" + }, + "IsDefault":{ + "shape":"Default", + "documentation":"

Indicates whether the certificate is the default certificate.

" } }, - "documentation":"

Information about an SSL server certificate deployed on a load balancer.

" + "documentation":"

Information about an SSL server certificate.

" }, "CertificateArn":{"type":"string"}, "CertificateList":{ @@ -713,7 +838,7 @@ }, "Protocol":{ "shape":"ProtocolEnum", - "documentation":"

The protocol for connections from clients to the load balancer.

" + "documentation":"

The protocol for connections from clients to the load balancer. For Application Load Balancers, the supported protocols are HTTP and HTTPS. For Network Load Balancers, the supported protocol is TCP.

" }, "Port":{ "shape":"Port", @@ -721,15 +846,15 @@ }, "SslPolicy":{ "shape":"SslPolicyName", - "documentation":"

The security policy that defines which ciphers and protocols are supported. The default is the current predefined security policy.

" + "documentation":"

[HTTPS listeners] The security policy that defines which ciphers and protocols are supported. The default is the current predefined security policy.

" }, "Certificates":{ "shape":"CertificateList", - "documentation":"

The SSL server certificate. You must provide exactly one certificate if the protocol is HTTPS.

" + "documentation":"

[HTTPS listeners] The SSL server certificate. You must provide exactly one certificate.

" }, "DefaultActions":{ "shape":"Actions", - "documentation":"

The default action for the listener.

" + "documentation":"

The default action for the listener. For Application Load Balancers, the protocol of the specified target group must be HTTP or HTTPS. For Network Load Balancers, the protocol of the specified target group must be TCP.
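Pulling the CreateListener members above together, here is a hedged sketch (not part of the model) of an HTTPS listener for an Application Load Balancer, assuming SDK-for-Java-2.x-style generated classes (CreateListenerRequest, Action, Certificate); "forward" is the ActionTypeEnum value and the ARNs are placeholders.

```java
// Hedged sketch: HTTPS listener with one server certificate and a default
// action that forwards to an HTTP/HTTPS target group.
import software.amazon.awssdk.services.elasticloadbalancingv2.ElasticLoadBalancingV2Client;
import software.amazon.awssdk.services.elasticloadbalancingv2.model.Action;
import software.amazon.awssdk.services.elasticloadbalancingv2.model.Certificate;
import software.amazon.awssdk.services.elasticloadbalancingv2.model.CreateListenerRequest;

public class CreateListenerExample {
    public static void main(String[] args) {
        ElasticLoadBalancingV2Client elbv2 = ElasticLoadBalancingV2Client.create();

        elbv2.createListener(CreateListenerRequest.builder()
                .loadBalancerArn("arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/my-alb/0000") // placeholder
                .protocol("HTTPS")
                .port(443)
                .sslPolicy("ELBSecurityPolicy-2016-08")
                .certificates(Certificate.builder()
                        .certificateArn("arn:aws:acm:us-east-1:123456789012:certificate/1111-2222")                  // placeholder
                        .build())
                .defaultActions(Action.builder()
                        .type("forward")
                        .targetGroupArn("arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/my-targets/0000") // placeholder
                        .build())
                .build());
    }
}
```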

" } } }, @@ -744,10 +869,7 @@ }, "CreateLoadBalancerInput":{ "type":"structure", - "required":[ - "Name", - "Subnets" - ], + "required":["Name"], "members":{ "Name":{ "shape":"LoadBalancerName", @@ -755,11 +877,15 @@ }, "Subnets":{ "shape":"Subnets", - "documentation":"

The IDs of the subnets to attach to the load balancer. You can specify only one subnet per Availability Zone. You must specify subnets from at least two Availability Zones.

" + "documentation":"

The IDs of the subnets to attach to the load balancer. You can specify only one subnet per Availability Zone. You must specify either subnets or subnet mappings.

[Application Load Balancers] You must specify subnets from at least two Availability Zones.

[Network Load Balancers] You can specify subnets from one or more Availability Zones.

" + }, + "SubnetMappings":{ + "shape":"SubnetMappings", + "documentation":"

The IDs of the subnets to attach to the load balancer. You can specify only one subnet per Availability Zone. You must specify either subnets or subnet mappings.

[Application Load Balancers] You must specify subnets from at least two Availability Zones. You cannot specify Elastic IP addresses for your subnets.

[Network Load Balancers] You can specify subnets from one or more Availability Zones. You can specify one Elastic IP address per subnet.

" }, "SecurityGroups":{ "shape":"SecurityGroups", - "documentation":"

The IDs of the security groups to assign to the load balancer.

" + "documentation":"

[Application Load Balancers] The IDs of the security groups to assign to the load balancer.

" }, "Scheme":{ "shape":"LoadBalancerSchemeEnum", @@ -769,9 +895,13 @@ "shape":"TagList", "documentation":"

One or more tags to assign to the load balancer.

" }, + "Type":{ + "shape":"LoadBalancerTypeEnum", + "documentation":"

The type of load balancer to create. The default is application.

" + }, "IpAddressType":{ "shape":"IpAddressType", - "documentation":"

The type of IP addresses used by the subnets for your load balancer. The possible values are ipv4 (for IPv4 addresses) and dualstack (for IPv4 and IPv6 addresses). Internal load balancers must use ipv4.

" + "documentation":"

[Application Load Balancers] The type of IP addresses used by the subnets for your load balancer. The possible values are ipv4 (for IPv4 addresses) and dualstack (for IPv4 and IPv6 addresses). Internal load balancers must use ipv4.
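The subnet-mapping option added in this revision is easiest to see in a request sketch. The following is illustrative only, assuming generated classes named CreateLoadBalancerRequest and SubnetMapping and the string form of the LoadBalancerTypeEnum value; the subnet and Elastic IP allocation IDs are placeholders.

```java
// Hedged sketch: create a Network Load Balancer and pin a specific Elastic IP
// allocation to its subnet via a subnet mapping.
import software.amazon.awssdk.services.elasticloadbalancingv2.ElasticLoadBalancingV2Client;
import software.amazon.awssdk.services.elasticloadbalancingv2.model.CreateLoadBalancerRequest;
import software.amazon.awssdk.services.elasticloadbalancingv2.model.SubnetMapping;

public class CreateNetworkLoadBalancerExample {
    public static void main(String[] args) {
        ElasticLoadBalancingV2Client elbv2 = ElasticLoadBalancingV2Client.create();

        elbv2.createLoadBalancer(CreateLoadBalancerRequest.builder()
                .name("my-network-lb")
                .type("network")                                   // LoadBalancerTypeEnum: application | network
                .subnetMappings(SubnetMapping.builder()
                        .subnetId("subnet-0123456789abcdef0")      // placeholder
                        .allocationId("eipalloc-0123456789abcdef0") // placeholder Elastic IP allocation
                        .build())
                .build());
    }
}
```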

" } } }, @@ -799,7 +929,7 @@ }, "Conditions":{ "shape":"RuleConditionList", - "documentation":"

A condition. Each condition specifies a field name and a single value.

If the field name is host-header, you can specify a single host name (for example, my.example.com). A host name is case insensitive, can be up to 128 characters in length, and can contain any of the following characters. Note that you can include up to three wildcard characters.

If the field name is path-pattern, you can specify a single path pattern. A path pattern is case sensitive, can be up to 128 characters in length, and can contain any of the following characters. Note that you can include up to three wildcard characters.

" + "documentation":"

The conditions. Each condition specifies a field name and a single value.

If the field name is host-header, you can specify a single host name (for example, my.example.com). A host name is case insensitive, can be up to 128 characters in length, and can contain any of the following characters. Note that you can include up to three wildcard characters.

If the field name is path-pattern, you can specify a single path pattern. A path pattern is case sensitive, can be up to 128 characters in length, and can contain any of the following characters. Note that you can include up to three wildcard characters.
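A host-header rule built from the condition, priority, and action members described here might look like the hedged sketch below (illustrative only; RuleCondition and CreateRuleRequest are assumed generated class names, and the ARNs are placeholders).

```java
// Hedged sketch: forward requests for my.example.com to a dedicated target group.
import software.amazon.awssdk.services.elasticloadbalancingv2.ElasticLoadBalancingV2Client;
import software.amazon.awssdk.services.elasticloadbalancingv2.model.Action;
import software.amazon.awssdk.services.elasticloadbalancingv2.model.CreateRuleRequest;
import software.amazon.awssdk.services.elasticloadbalancingv2.model.RuleCondition;

public class CreateRuleExample {
    public static void main(String[] args) {
        ElasticLoadBalancingV2Client elbv2 = ElasticLoadBalancingV2Client.create();

        elbv2.createRule(CreateRuleRequest.builder()
                .listenerArn("arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/my-alb/0000/1111") // placeholder
                .conditions(RuleCondition.builder()
                        .field("host-header")
                        .values("my.example.com")
                        .build())
                .priority(10)                                      // 1 to 50000; lower values are evaluated first
                .actions(Action.builder()
                        .type("forward")
                        .targetGroupArn("arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/example-tg/0000") // placeholder
                        .build())
                .build());
    }
}
```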

" }, "Priority":{ "shape":"RulePriority", @@ -835,7 +965,7 @@ }, "Protocol":{ "shape":"ProtocolEnum", - "documentation":"

The protocol to use for routing traffic to the targets.

" + "documentation":"

The protocol to use for routing traffic to the targets. For Application Load Balancers, the supported protocols are HTTP and HTTPS. For Network Load Balancers, the supported protocol is TCP.

" }, "Port":{ "shape":"Port", @@ -847,35 +977,39 @@ }, "HealthCheckProtocol":{ "shape":"ProtocolEnum", - "documentation":"

The protocol the load balancer uses when performing health checks on targets. The default is the HTTP protocol.

" + "documentation":"

The protocol the load balancer uses when performing health checks on targets. The TCP protocol is supported only if the protocol of the target group is TCP. For Application Load Balancers, the default is HTTP. For Network Load Balancers, the default is TCP.

" }, "HealthCheckPort":{ "shape":"HealthCheckPort", - "documentation":"

The port the load balancer uses when performing health checks on targets. The default is traffic-port, which indicates the port on which each target receives traffic from the load balancer.

" + "documentation":"

The port the load balancer uses when performing health checks on targets. The default is traffic-port, which is the port on which each target receives traffic from the load balancer.

" }, "HealthCheckPath":{ "shape":"Path", - "documentation":"

The ping path that is the destination on the targets for health checks. The default is /.

" + "documentation":"

[HTTP/HTTPS health checks] The ping path that is the destination on the targets for health checks. The default is /.

" }, "HealthCheckIntervalSeconds":{ "shape":"HealthCheckIntervalSeconds", - "documentation":"

The approximate amount of time, in seconds, between health checks of an individual target. The default is 30 seconds.

" + "documentation":"

The approximate amount of time, in seconds, between health checks of an individual target. For Application Load Balancers, the range is 5 to 300 seconds. For Network Load Balancers, the supported values are 10 or 30 seconds. The default is 30 seconds.

" }, "HealthCheckTimeoutSeconds":{ "shape":"HealthCheckTimeoutSeconds", - "documentation":"

The amount of time, in seconds, during which no response from a target means a failed health check. The default is 5 seconds.

" + "documentation":"

The amount of time, in seconds, during which no response from a target means a failed health check. For Application Load Balancers, the range is 2 to 60 seconds and the default is 5 seconds. For Network Load Balancers, this is 10 seconds for TCP and HTTPS health checks and 6 seconds for HTTP health checks.

" }, "HealthyThresholdCount":{ "shape":"HealthCheckThresholdCount", - "documentation":"

The number of consecutive health checks successes required before considering an unhealthy target healthy. The default is 5.

" + "documentation":"

The number of consecutive successful health checks required before considering an unhealthy target healthy. For Application Load Balancers, the default is 5. For Network Load Balancers, the default is 3.

" }, "UnhealthyThresholdCount":{ "shape":"HealthCheckThresholdCount", - "documentation":"

The number of consecutive health check failures required before considering a target unhealthy. The default is 2.

" + "documentation":"

The number of consecutive health check failures required before considering a target unhealthy. For Application Load Balancers, the default is 2. For Network Load Balancers, this value must be the same as the healthy threshold count.

" }, "Matcher":{ "shape":"Matcher", - "documentation":"

The HTTP codes to use when checking for a successful response from a target. The default is 200.

" + "documentation":"

[HTTP/HTTPS health checks] The HTTP codes to use when checking for a successful response from a target.

" + }, + "TargetType":{ + "shape":"TargetTypeEnum", + "documentation":"

The type of target that you must specify when registering targets with this target group. The possible values are instance (targets are specified by instance ID) or ip (targets are specified by IP address). The default is instance. Note that you can't specify targets for a target group using both instance IDs and IP addresses.

If the target type is ip, specify IP addresses from the subnets of the virtual private cloud (VPC) for the target group, the RFC 1918 range (10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16), and the RFC 6598 range (100.64.0.0/10). You can't specify publicly routable IP addresses.
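The new ip target type combines with the Network Load Balancer health-check constraints above as in the following hedged sketch (illustrative only; CreateTargetGroupRequest is an assumed generated class name and the VPC ID is a placeholder).

```java
// Hedged sketch: a TCP target group that registers targets by IP address.
// Health-check values follow the ranges stated in the documentation above.
import software.amazon.awssdk.services.elasticloadbalancingv2.ElasticLoadBalancingV2Client;
import software.amazon.awssdk.services.elasticloadbalancingv2.model.CreateTargetGroupRequest;

public class CreateTargetGroupExample {
    public static void main(String[] args) {
        ElasticLoadBalancingV2Client elbv2 = ElasticLoadBalancingV2Client.create();

        elbv2.createTargetGroup(CreateTargetGroupRequest.builder()
                .name("my-ip-targets")
                .protocol("TCP")
                .port(80)
                .vpcId("vpc-0123456789abcdef0")    // placeholder
                .targetType("ip")                  // instance | ip
                .healthCheckProtocol("TCP")
                .healthCheckIntervalSeconds(30)    // 10 or 30 for Network Load Balancers
                .healthyThresholdCount(3)
                .unhealthyThresholdCount(3)        // must equal the healthy threshold for Network Load Balancers
                .build());
    }
}
```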

" } } }, @@ -890,6 +1024,7 @@ }, "CreatedTime":{"type":"timestamp"}, "DNSName":{"type":"string"}, + "Default":{"type":"boolean"}, "DeleteListenerInput":{ "type":"structure", "required":["ListenerArn"], @@ -998,6 +1133,37 @@ } } }, + "DescribeListenerCertificatesInput":{ + "type":"structure", + "required":["ListenerArn"], + "members":{ + "ListenerArn":{ + "shape":"ListenerArn", + "documentation":"

The Amazon Resource Name (ARN) of the listener.

" + }, + "Marker":{ + "shape":"Marker", + "documentation":"

The marker for the next set of results. (You received this marker from a previous call.)

" + }, + "PageSize":{ + "shape":"PageSize", + "documentation":"

The maximum number of results to return with this call.

" + } + } + }, + "DescribeListenerCertificatesOutput":{ + "type":"structure", + "members":{ + "Certificates":{ + "shape":"CertificateList", + "documentation":"

Information about the certificates.

" + }, + "NextMarker":{ + "shape":"Marker", + "documentation":"

The marker to use when requesting the next set of results. If there are no additional results, the string is empty.

" + } + } + }, "DescribeListenersInput":{ "type":"structure", "members":{ @@ -1389,7 +1555,7 @@ "type":"structure", "members":{ }, - "documentation":"

The specified target does not exist or is not in the same VPC as the target group.

", + "documentation":"

The specified target does not exist, is not in the same VPC as the target group, or has an unsupported instance type.

", "error":{ "code":"InvalidTarget", "httpStatusCode":400, @@ -1397,6 +1563,7 @@ }, "exception":true }, + "IpAddress":{"type":"string"}, "IpAddressType":{ "type":"string", "enum":[ @@ -1410,7 +1577,7 @@ "members":{ "Name":{ "shape":"Name", - "documentation":"

The name of the limit. The possible values are:

" + "documentation":"

The name of the limit. The possible values are:

" }, "Max":{ "shape":"Max", @@ -1536,6 +1703,24 @@ }, "documentation":"

Information about a load balancer.

" }, + "LoadBalancerAddress":{ + "type":"structure", + "members":{ + "IpAddress":{ + "shape":"IpAddress", + "documentation":"

The static IP address.

" + }, + "AllocationId":{ + "shape":"AllocationId", + "documentation":"

[Network Load Balancers] The allocation ID of the Elastic IP address.

" + } + }, + "documentation":"

Information about a static IP address for a load balancer.

" + }, + "LoadBalancerAddresses":{ + "type":"list", + "member":{"shape":"LoadBalancerAddress"} + }, "LoadBalancerArn":{"type":"string"}, "LoadBalancerArns":{ "type":"list", @@ -1546,7 +1731,7 @@ "members":{ "Key":{ "shape":"LoadBalancerAttributeKey", - "documentation":"

The name of the attribute.

" + "documentation":"

The name of the attribute.

" }, "Value":{ "shape":"LoadBalancerAttributeValue", @@ -1612,12 +1797,16 @@ "enum":[ "active", "provisioning", + "active_impaired", "failed" ] }, "LoadBalancerTypeEnum":{ "type":"string", - "enum":["application"] + "enum":[ + "application", + "network" + ] }, "LoadBalancers":{ "type":"list", @@ -1630,7 +1819,7 @@ "members":{ "HttpCode":{ "shape":"HttpCode", - "documentation":"

The HTTP codes. You can specify values between 200 and 499. The default value is 200. You can specify multiple values (for example, \"200,202\") or a range of values (for example, \"200-299\").

" + "documentation":"

The HTTP codes.

For Application Load Balancers, you can specify values between 200 and 499, and the default value is 200. You can specify multiple values (for example, \"200,202\") or a range of values (for example, \"200-299\").

For Network Load Balancers, this is 200 to 399.

" } }, "documentation":"

Information to use when checking for a successful response from a target.

" @@ -1650,7 +1839,7 @@ }, "Protocol":{ "shape":"ProtocolEnum", - "documentation":"

The protocol for connections from clients to the load balancer.

" + "documentation":"

The protocol for connections from clients to the load balancer. Application Load Balancers support HTTP and HTTPS, and Network Load Balancers support TCP.

" }, "SslPolicy":{ "shape":"SslPolicyName", @@ -1658,11 +1847,11 @@ }, "Certificates":{ "shape":"CertificateList", - "documentation":"

The SSL server certificate.

" + "documentation":"

The default SSL server certificate.

" }, "DefaultActions":{ "shape":"Actions", - "documentation":"

The default actions.

" + "documentation":"

The default action. For Application Load Balancers, the protocol of the specified target group must be HTTP or HTTPS. For Network Load Balancers, the protocol of the specified target group must be TCP.

" } } }, @@ -1715,7 +1904,7 @@ }, "Actions":{ "shape":"Actions", - "documentation":"

The actions.

" + "documentation":"

The actions. The target group must use the HTTP or HTTPS protocol.

" } } }, @@ -1764,23 +1953,23 @@ }, "HealthCheckProtocol":{ "shape":"ProtocolEnum", - "documentation":"

The protocol to use to connect with the target.

" + "documentation":"

The protocol the load balancer uses when performing health checks on targets. The TCP protocol is supported only if the protocol of the target group is TCP.

" }, "HealthCheckPort":{ "shape":"HealthCheckPort", - "documentation":"

The port to use to connect with the target.

" + "documentation":"

The port the load balancer uses when performing health checks on targets.

" }, "HealthCheckPath":{ "shape":"Path", - "documentation":"

The ping path that is the destination for the health check request.

" + "documentation":"

[HTTP/HTTPS health checks] The ping path that is the destination for the health check request.

" }, "HealthCheckIntervalSeconds":{ "shape":"HealthCheckIntervalSeconds", - "documentation":"

The approximate amount of time, in seconds, between health checks of an individual target.

" + "documentation":"

The approximate amount of time, in seconds, between health checks of an individual target. For Application Load Balancers, the range is 5 to 300 seconds. For Network Load Balancers, the supported values are 10 or 30 seconds.

" }, "HealthCheckTimeoutSeconds":{ "shape":"HealthCheckTimeoutSeconds", - "documentation":"

The amount of time, in seconds, during which no response means a failed health check.

" + "documentation":"

[HTTP/HTTPS health checks] The amount of time, in seconds, during which no response means a failed health check.

" }, "HealthyThresholdCount":{ "shape":"HealthCheckThresholdCount", @@ -1788,11 +1977,11 @@ }, "UnhealthyThresholdCount":{ "shape":"HealthCheckThresholdCount", - "documentation":"

The number of consecutive health check failures required before considering the target unhealthy.

" + "documentation":"

The number of consecutive health check failures required before considering the target unhealthy. For Network Load Balancers, this value must be the same as the healthy threshold count.

" }, "Matcher":{ "shape":"Matcher", - "documentation":"

The HTTP codes to use when checking for a successful response from a target.

" + "documentation":"

[HTTP/HTTPS health checks] The HTTP codes to use when checking for a successful response from a target.

" } } }, @@ -1849,7 +2038,8 @@ "type":"string", "enum":[ "HTTP", - "HTTPS" + "HTTPS", + "TCP" ] }, "RegisterTargetsInput":{ @@ -1865,7 +2055,7 @@ }, "Targets":{ "shape":"TargetDescriptions", - "documentation":"

The targets. The default port for a target is the port for the target group. You can specify a port override. If a target is already registered, you can register it again using a different port.

" + "documentation":"

The targets.

" } } }, @@ -1874,6 +2064,28 @@ "members":{ } }, + "RemoveListenerCertificatesInput":{ + "type":"structure", + "required":[ + "ListenerArn", + "Certificates" + ], + "members":{ + "ListenerArn":{ + "shape":"ListenerArn", + "documentation":"

The Amazon Resource Name (ARN) of the listener.

" + }, + "Certificates":{ + "shape":"CertificateList", + "documentation":"

The certificate to remove. You can specify one certificate per call.

" + } + } + }, + "RemoveListenerCertificatesOutput":{ + "type":"structure", + "members":{ + } + }, "RemoveTagsInput":{ "type":"structure", "required":[ @@ -1976,7 +2188,7 @@ }, "RulePriority":{ "type":"integer", - "max":99999, + "max":50000, "min":1 }, "RulePriorityList":{ @@ -2102,7 +2314,11 @@ }, "Subnets":{ "shape":"Subnets", - "documentation":"

The IDs of the subnets. You must specify at least two subnets. You can add only one subnet per Availability Zone.

" + "documentation":"

The IDs of the subnets. You must specify subnets from at least two Availability Zones. You can specify only one subnet per Availability Zone. You must specify either subnets or subnet mappings.

" + }, + "SubnetMappings":{ + "shape":"SubnetMappings", + "documentation":"

The IDs of the subnets. You must specify subnets from at least two Availability Zones. You can specify only one subnet per Availability Zone. You must specify either subnets or subnet mappings.

You cannot specify Elastic IP addresses for your subnets.

" } } }, @@ -2151,6 +2367,24 @@ "String":{"type":"string"}, "StringValue":{"type":"string"}, "SubnetId":{"type":"string"}, + "SubnetMapping":{ + "type":"structure", + "members":{ + "SubnetId":{ + "shape":"SubnetId", + "documentation":"

The ID of the subnet.

" + }, + "AllocationId":{ + "shape":"AllocationId", + "documentation":"

[Network Load Balancers] The allocation ID of the Elastic IP address.

" + } + }, + "documentation":"

Information about a subnet mapping.

" + }, + "SubnetMappings":{ + "type":"list", + "member":{"shape":"SubnetMapping"} + }, "SubnetNotFoundException":{ "type":"structure", "members":{ @@ -2227,11 +2461,15 @@ "members":{ "Id":{ "shape":"TargetId", - "documentation":"

The ID of the target.

" + "documentation":"

The ID of the target. If the target type of the target group is instance, specify an instance ID. If the target type is ip, specify an IP address.

" }, "Port":{ "shape":"Port", "documentation":"

The port on which the target is listening.

" + }, + "AvailabilityZone":{ + "shape":"ZoneName", + "documentation":"

An Availability Zone or all. This determines whether the target receives traffic from the load balancer nodes in the specified Availability Zone or from all enabled Availability Zones for the load balancer.

This parameter is not supported if the target type of the target group is instance. If the IP address is in a subnet of the VPC for the target group, the Availability Zone is automatically detected and this parameter is optional. If the IP address is outside the VPC, this parameter is required.

With an Application Load Balancer, if the IP address is outside the VPC for the target group, the only supported value is all.
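For the out-of-VPC case just described, a hedged sketch (illustrative only, assumed class names, placeholder ARN and address) would set AvailabilityZone explicitly:

```java
// Hedged sketch: register an IP address that is outside the target group's VPC,
// which requires an AvailabilityZone value ("all" for an Application Load Balancer).
import software.amazon.awssdk.services.elasticloadbalancingv2.ElasticLoadBalancingV2Client;
import software.amazon.awssdk.services.elasticloadbalancingv2.model.RegisterTargetsRequest;
import software.amazon.awssdk.services.elasticloadbalancingv2.model.TargetDescription;

public class RegisterOutOfVpcIpTarget {
    public static void main(String[] args) {
        ElasticLoadBalancingV2Client elbv2 = ElasticLoadBalancingV2Client.create();

        elbv2.registerTargets(RegisterTargetsRequest.builder()
                .targetGroupArn("arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/onprem-targets/0000") // placeholder
                .targets(TargetDescription.builder()
                        .id("172.16.10.25")        // address reachable outside the target group's VPC (placeholder)
                        .port(443)
                        .availabilityZone("all")
                        .build())
                .build());
    }
}
```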

" } }, "documentation":"

Information about a target.

" @@ -2298,6 +2536,10 @@ "LoadBalancerArns":{ "shape":"LoadBalancerArns", "documentation":"

The Amazon Resource Names (ARN) of the load balancers that route traffic to this target group.

" + }, + "TargetType":{ + "shape":"TargetTypeEnum", + "documentation":"

The type of target that you must specify when registering targets with this target group. The possible values are instance (targets are specified by instance ID) or ip (targets are specified by IP address).

" } }, "documentation":"

Information about a target group.

" @@ -2324,7 +2566,7 @@ "members":{ "Key":{ "shape":"TargetGroupAttributeKey", - "documentation":"

The name of the attribute.

" + "documentation":"

The name of the attribute.

" }, "Value":{ "shape":"TargetGroupAttributeValue", @@ -2373,7 +2615,7 @@ }, "Reason":{ "shape":"TargetHealthReasonEnum", - "documentation":"

The reason code. If the target state is healthy, a reason code is not provided.

If the target state is initial, the reason code can be one of the following values:

If the target state is unhealthy, the reason code can be one of the following values:

If the target state is unused, the reason code can be one of the following values:

If the target state is draining, the reason code can be the following value:

" + "documentation":"

The reason code. If the target state is healthy, a reason code is not provided.

If the target state is initial, the reason code can be one of the following values:

If the target state is unhealthy, the reason code can be one of the following values:

If the target state is unused, the reason code can be one of the following values:

If the target state is draining, the reason code can be the following value:

" }, "Description":{ "shape":"Description", @@ -2416,6 +2658,7 @@ "Target.NotInUse", "Target.DeregistrationInProgress", "Target.InvalidState", + "Target.IpUnusable", "Elb.InternalError" ] }, @@ -2426,15 +2669,23 @@ "healthy", "unhealthy", "unused", - "draining" + "draining", + "unavailable" ] }, "TargetId":{"type":"string"}, + "TargetTypeEnum":{ + "type":"string", + "enum":[ + "instance", + "ip" + ] + }, "TooManyCertificatesException":{ "type":"structure", "members":{ }, - "documentation":"

You've reached the limit on the number of certificates per listener.

", + "documentation":"

You've reached the limit on the number of certificates per load balancer.

", "error":{ "code":"TooManyCertificates", "httpStatusCode":400, @@ -2541,5 +2792,5 @@ "VpcId":{"type":"string"}, "ZoneName":{"type":"string"} }, - "documentation":"Elastic Load Balancing

A load balancer distributes incoming traffic across targets, such as your EC2 instances. This enables you to increase the availability of your application. The load balancer also monitors the health of its registered targets and ensures that it routes traffic only to healthy targets. You configure your load balancer to accept incoming traffic by specifying one or more listeners, which are configured with a protocol and port number for connections from clients to the load balancer. You configure a target group with a protocol and port number for connections from the load balancer to the targets, and with health check settings to be used when checking the health status of the targets.

Elastic Load Balancing supports two types of load balancers: Classic Load Balancers and Application Load Balancers. A Classic Load Balancer makes routing and load balancing decisions either at the transport layer (TCP/SSL) or the application layer (HTTP/HTTPS), and supports either EC2-Classic or a VPC. An Application Load Balancer makes routing and load balancing decisions at the application layer (HTTP/HTTPS), supports path-based routing, and can route requests to one or more ports on each EC2 instance or container instance in your virtual private cloud (VPC). For more information, see the Elastic Load Balancing User Guide.

This reference covers the 2015-12-01 API, which supports Application Load Balancers. The 2012-06-01 API supports Classic Load Balancers.

To get started, complete the following tasks:

  1. Create an Application Load Balancer using CreateLoadBalancer.

  2. Create a target group using CreateTargetGroup.

  3. Register targets for the target group using RegisterTargets.

  4. Create one or more listeners for your load balancer using CreateListener.

  5. (Optional) Create one or more rules for content routing based on URL using CreateRule.

To delete an Application Load Balancer and its related resources, complete the following tasks:

  1. Delete the load balancer using DeleteLoadBalancer.

  2. Delete the target group using DeleteTargetGroup.

All Elastic Load Balancing operations are idempotent, which means that they complete at most one time. If you repeat an operation, it succeeds.

" + "documentation":"Elastic Load Balancing

A load balancer distributes incoming traffic across targets, such as your EC2 instances. This enables you to increase the availability of your application. The load balancer also monitors the health of its registered targets and ensures that it routes traffic only to healthy targets. You configure your load balancer to accept incoming traffic by specifying one or more listeners, which are configured with a protocol and port number for connections from clients to the load balancer. You configure a target group with a protocol and port number for connections from the load balancer to the targets, and with health check settings to be used when checking the health status of the targets.

Elastic Load Balancing supports the following types of load balancers: Application Load Balancers, Network Load Balancers, and Classic Load Balancers.

An Application Load Balancer makes routing and load balancing decisions at the application layer (HTTP/HTTPS). A Network Load Balancer makes routing and load balancing decisions at the transport layer (TCP). Both Application Load Balancers and Network Load Balancers can route requests to one or more ports on each EC2 instance or container instance in your virtual private cloud (VPC).

A Classic Load Balancer makes routing and load balancing decisions either at the transport layer (TCP/SSL) or the application layer (HTTP/HTTPS), and supports either EC2-Classic or a VPC. For more information, see the Elastic Load Balancing User Guide.

This reference covers the 2015-12-01 API, which supports Application Load Balancers and Network Load Balancers. The 2012-06-01 API supports Classic Load Balancers.

To get started, complete the following tasks:

  1. Create a load balancer using CreateLoadBalancer.

  2. Create a target group using CreateTargetGroup.

  3. Register targets for the target group using RegisterTargets.

  4. Create one or more listeners for your load balancer using CreateListener.

To delete a load balancer and its related resources, complete the following tasks:

  1. Delete the load balancer using DeleteLoadBalancer.

  2. Delete the target group using DeleteTargetGroup.

All Elastic Load Balancing operations are idempotent, which means that they complete at most one time. If you repeat an operation, it succeeds.

" } diff --git a/services/elasticloadbalancingv2/src/main/resources/codegen-resources/waiters-2.json b/services/elasticloadbalancingv2/src/main/resources/codegen-resources/waiters-2.json index b4e85719c602..9f3d77d828fa 100644 --- a/services/elasticloadbalancingv2/src/main/resources/codegen-resources/waiters-2.json +++ b/services/elasticloadbalancingv2/src/main/resources/codegen-resources/waiters-2.json @@ -59,6 +59,42 @@ "state": "success" } ] + }, + "TargetInService":{ + "delay":15, + "maxAttempts":40, + "operation":"DescribeTargetHealth", + "acceptors":[ + { + "argument":"TargetHealthDescriptions[].TargetHealth.State", + "expected":"healthy", + "matcher":"pathAll", + "state":"success" + }, + { + "matcher": "error", + "expected": "InvalidInstance", + "state": "retry" + } + ] + }, + "TargetDeregistered": { + "delay": 15, + "maxAttempts": 40, + "operation": "DescribeTargetHealth", + "acceptors": [ + { + "matcher": "error", + "expected": "InvalidTarget", + "state": "success" + }, + { + "argument":"TargetHealthDescriptions[].TargetHealth.State", + "expected":"unused", + "matcher":"pathAll", + "state":"success" + } + ] } } } diff --git a/services/elasticsearch/src/main/resources/codegen-resources/customization.config b/services/elasticsearch/src/main/resources/codegen-resources/customization.config index 713d8877032f..a45a8529a06c 100644 --- a/services/elasticsearch/src/main/resources/codegen-resources/customization.config +++ b/services/elasticsearch/src/main/resources/codegen-resources/customization.config @@ -1,5 +1,6 @@ { "authPolicyActions" : { "skip" : true - } + }, + "verifiedSimpleMethods" : ["deleteElasticsearchServiceRole"] } \ No newline at end of file diff --git a/services/elasticsearch/src/main/resources/codegen-resources/service-2.json b/services/elasticsearch/src/main/resources/codegen-resources/service-2.json index cc7d307a5fb6..fbeae08d7701 100644 --- a/services/elasticsearch/src/main/resources/codegen-resources/service-2.json +++ b/services/elasticsearch/src/main/resources/codegen-resources/service-2.json @@ -59,6 +59,19 @@ ], "documentation":"

Permanently deletes the specified Elasticsearch domain and all of its data. Once a domain is deleted, it cannot be recovered.

" }, + "DeleteElasticsearchServiceRole":{ + "name":"DeleteElasticsearchServiceRole", + "http":{ + "method":"DELETE", + "requestUri":"/2015-01-01/es/role" + }, + "errors":[ + {"shape":"BaseException"}, + {"shape":"InternalException"}, + {"shape":"ValidationException"} + ], + "documentation":"

Deletes the service-linked role that Elasticsearch Service uses to manage and maintain VPC domains. Role deletion will fail if any existing VPC domains use the role. You must delete any such Elasticsearch domains before deleting the role. See Deleting Elasticsearch Service Role in VPC Endpoints for Amazon Elasticsearch Service Domains.

" + }, "DescribeElasticsearchDomain":{ "name":"DescribeElasticsearchDomain", "http":{ @@ -313,6 +326,10 @@ "exception":true }, "Boolean":{"type":"boolean"}, + "CloudWatchLogsLogGroupArn":{ + "type":"string", + "documentation":"

The ARN of the CloudWatch Logs log group to which the log needs to be published.

" + }, "CreateElasticsearchDomainRequest":{ "type":"structure", "required":["DomainName"], @@ -341,9 +358,17 @@ "shape":"SnapshotOptions", "documentation":"

Option to set time, in UTC format, of the daily automated snapshot. Default value is 0 hours.

" }, + "VPCOptions":{ + "shape":"VPCOptions", + "documentation":"

Options to specify the subnets and security groups for the VPC endpoint. For more information, see Creating a VPC in VPC Endpoints for Amazon Elasticsearch Service Domains.

" + }, "AdvancedOptions":{ "shape":"AdvancedOptions", "documentation":"

Option to allow references to indices in an HTTP request body. Must be false when configuring access to individual sub-resources. By default, the value is true. See Configuration Advanced Options for more information.

" + }, + "LogPublishingOptions":{ + "shape":"LogPublishingOptions", + "documentation":"

Map of LogType and LogPublishingOption, each containing options to publish a given type of Elasticsearch log.
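To show how the new VPCOptions and LogPublishingOptions members fit into a create request, here is a hedged sketch only: the client and model class names (ElasticsearchClient, CreateElasticsearchDomainRequest, VPCOptions, LogPublishingOption, LogType) are assumed to mirror the shape names in this model, the exact setter overload for the LogType-keyed map may differ in the generated code, and all IDs and ARNs are placeholders.

```java
// Hedged sketch: create a VPC domain and publish index slow logs to CloudWatch Logs.
import java.util.Collections;

import software.amazon.awssdk.services.elasticsearch.ElasticsearchClient;
import software.amazon.awssdk.services.elasticsearch.model.CreateElasticsearchDomainRequest;
import software.amazon.awssdk.services.elasticsearch.model.LogPublishingOption;
import software.amazon.awssdk.services.elasticsearch.model.LogType;
import software.amazon.awssdk.services.elasticsearch.model.VPCOptions;

public class CreateVpcDomainExample {
    public static void main(String[] args) {
        ElasticsearchClient es = ElasticsearchClient.create();

        es.createElasticsearchDomain(CreateElasticsearchDomainRequest.builder()
                .domainName("my-vpc-domain")
                .vpcOptions(VPCOptions.builder()
                        .subnetIds("subnet-0123456789abcdef0")        // placeholder
                        .securityGroupIds("sg-0123456789abcdef0")     // placeholder
                        .build())
                // One LogPublishingOption per LogType (INDEX_SLOW_LOGS or SEARCH_SLOW_LOGS).
                .logPublishingOptions(Collections.singletonMap(
                        LogType.INDEX_SLOW_LOGS,
                        LogPublishingOption.builder()
                                .cloudWatchLogsLogGroupArn("arn:aws:logs:us-east-1:123456789012:log-group:/aws/aes/slow-logs") // placeholder
                                .enabled(true)
                                .build()))
                .build());
    }
}
```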

" } } }, @@ -601,7 +626,13 @@ "r4.2xlarge.elasticsearch", "r4.4xlarge.elasticsearch", "r4.8xlarge.elasticsearch", - "r4.16xlarge.elasticsearch" + "r4.16xlarge.elasticsearch", + "i3.large.elasticsearch", + "i3.xlarge.elasticsearch", + "i3.2xlarge.elasticsearch", + "i3.4xlarge.elasticsearch", + "i3.8xlarge.elasticsearch", + "i3.16xlarge.elasticsearch" ] }, "ElasticsearchClusterConfig":{ @@ -675,9 +706,17 @@ "shape":"SnapshotOptionsStatus", "documentation":"

Specifies the SnapshotOptions for the Elasticsearch domain.

" }, + "VPCOptions":{ + "shape":"VPCDerivedInfoStatus", + "documentation":"

The VPCOptions for the specified domain. For more information, see VPC Endpoints for Amazon Elasticsearch Service Domains.

" + }, "AdvancedOptions":{ "shape":"AdvancedOptionsStatus", "documentation":"

Specifies the AdvancedOptions for the domain. See Configuring Advanced Options for more information.

" + }, + "LogPublishingOptions":{ + "shape":"LogPublishingOptionsStatus", + "documentation":"

Log publishing options for the given domain.

" } }, "documentation":"

The configuration of an Elasticsearch domain.

" @@ -715,6 +754,10 @@ "shape":"ServiceUrl", "documentation":"

The Elasticsearch domain endpoint that you use to submit index and search requests.

" }, + "Endpoints":{ + "shape":"EndpointsMap", + "documentation":"

Map containing the Elasticsearch domain endpoints used to submit index and search requests. Example key, value: 'vpc','vpc-endpoint-h2dsd34efgyghrtguk5gt6j2foh4.us-east-1.es.amazonaws.com'.
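Reading that map back after the domain is active could look like the following hedged sketch; the 'vpc' key convention comes from the documentation line above, while the client class and accessor names (ElasticsearchClient, domainStatus(), endpoints()) are assumptions based on the shapes.

```java
// Hedged sketch: look up the VPC endpoint of a domain created with VPCOptions.
import software.amazon.awssdk.services.elasticsearch.ElasticsearchClient;
import software.amazon.awssdk.services.elasticsearch.model.DescribeElasticsearchDomainRequest;
import software.amazon.awssdk.services.elasticsearch.model.DescribeElasticsearchDomainResponse;

public class VpcEndpointLookup {
    public static void main(String[] args) {
        ElasticsearchClient es = ElasticsearchClient.create();

        DescribeElasticsearchDomainResponse resp = es.describeElasticsearchDomain(
                DescribeElasticsearchDomainRequest.builder().domainName("my-vpc-domain").build());

        // Example key/value: "vpc" -> "vpc-endpoint-....us-east-1.es.amazonaws.com"
        String vpcEndpoint = resp.domainStatus().endpoints().get("vpc");
        System.out.println("Submit index and search requests to: " + vpcEndpoint);
    }
}
```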

" + }, "Processing":{ "shape":"Boolean", "documentation":"

The status of the Elasticsearch domain configuration. True if Amazon Elasticsearch Service is processing configuration changes. False if the configuration is active.

" @@ -736,9 +779,17 @@ "shape":"SnapshotOptions", "documentation":"

Specifies the status of the SnapshotOptions

" }, + "VPCOptions":{ + "shape":"VPCDerivedInfo", + "documentation":"

The VPCOptions for the specified domain. For more information, see VPC Endpoints for Amazon Elasticsearch Service Domains.

" + }, "AdvancedOptions":{ "shape":"AdvancedOptions", "documentation":"

Specifies the status of the AdvancedOptions

" + }, + "LogPublishingOptions":{ + "shape":"LogPublishingOptions", + "documentation":"

Log publishing options for the given domain.

" } }, "documentation":"

The current status of an Elasticsearch domain.

" @@ -777,6 +828,11 @@ "documentation":"

Status of the Elasticsearch version options for the specified Elasticsearch domain.

" }, "ElasticsearchVersionString":{"type":"string"}, + "EndpointsMap":{ + "type":"map", + "key":{"shape":"String"}, + "value":{"shape":"ServiceUrl"} + }, "ErrorMessage":{"type":"string"}, "InstanceCountLimits":{ "type":"structure", @@ -949,6 +1005,44 @@ }, "documentation":"

The result of a ListTags operation. Contains tags for all requested Elasticsearch domains.

" }, + "LogPublishingOption":{ + "type":"structure", + "members":{ + "CloudWatchLogsLogGroupArn":{"shape":"CloudWatchLogsLogGroupArn"}, + "Enabled":{ + "shape":"Boolean", + "documentation":"

Specifies whether the given log publishing option is enabled.

" + } + }, + "documentation":"

Log publishing option that is set for the given domain.
Attributes and their details:

" + }, + "LogPublishingOptions":{ + "type":"map", + "key":{"shape":"LogType"}, + "value":{"shape":"LogPublishingOption"} + }, + "LogPublishingOptionsStatus":{ + "type":"structure", + "members":{ + "Options":{ + "shape":"LogPublishingOptions", + "documentation":"

The log publishing options configured for the Elasticsearch domain.

" + }, + "Status":{ + "shape":"OptionStatus", + "documentation":"

The status of the log publishing options for the Elasticsearch domain. See OptionStatus for the status information that's included.

" + } + }, + "documentation":"

The configured log publishing options for the domain and their current status.

" + }, + "LogType":{ + "type":"string", + "documentation":"

The type of log file. It can be one of the following:

", + "enum":[ + "INDEX_SLOW_LOGS", + "SEARCH_SLOW_LOGS" + ] + }, "MaxResults":{ "type":"integer", "documentation":"

Set this value to limit the number of results returned.

", @@ -1184,6 +1278,10 @@ "shape":"SnapshotOptions", "documentation":"

Option to set the time, in UTC format, for the daily automated snapshot. Default value is 0 hours.

" }, + "VPCOptions":{ + "shape":"VPCOptions", + "documentation":"

Options to specify the subnets and security groups for the VPC endpoint. For more information, see Creating a VPC in VPC Endpoints for Amazon Elasticsearch Service Domains.

" + }, "AdvancedOptions":{ "shape":"AdvancedOptions", "documentation":"

Modifies the advanced option to allow references to indices in an HTTP request body. Must be false when configuring access to individual sub-resources. By default, the value is true. See Configuration Advanced Options for more information.

" @@ -1191,6 +1289,10 @@ "AccessPolicies":{ "shape":"PolicyDocument", "documentation":"

IAM access policy as a JSON-formatted string.

" + }, + "LogPublishingOptions":{ + "shape":"LogPublishingOptions", + "documentation":"

Map of LogType and LogPublishingOption, each containing options to publish a given type of Elasticsearch log.

" } }, "documentation":"

Container for the parameters to the UpdateElasticsearchDomain operation. Specifies the type and number of instances in the domain cluster.

" @@ -1207,6 +1309,60 @@ "documentation":"

The result of an UpdateElasticsearchDomain request. Contains the status of the Elasticsearch domain being updated.

" }, "UpdateTimestamp":{"type":"timestamp"}, + "VPCDerivedInfo":{ + "type":"structure", + "members":{ + "VPCId":{ + "shape":"String", + "documentation":"

The VPC Id for the Elasticsearch domain. Exists only if the domain was created with VPCOptions.

" + }, + "SubnetIds":{ + "shape":"StringList", + "documentation":"

Specifies the subnets for VPC endpoint.

" + }, + "AvailabilityZones":{ + "shape":"StringList", + "documentation":"

The availability zones for the Elasticsearch domain. Exists only if the domain was created with VPCOptions.

" + }, + "SecurityGroupIds":{ + "shape":"StringList", + "documentation":"

Specifies the security groups for VPC endpoint.

" + } + }, + "documentation":"

Options to specify the subnets and security groups for VPC endpoint. For more information, see VPC Endpoints for Amazon Elasticsearch Service Domains.

" + }, + "VPCDerivedInfoStatus":{ + "type":"structure", + "required":[ + "Options", + "Status" + ], + "members":{ + "Options":{ + "shape":"VPCDerivedInfo", + "documentation":"

Specifies the VPC options for the specified Elasticsearch domain.

" + }, + "Status":{ + "shape":"OptionStatus", + "documentation":"

Specifies the status of the VPC options for the specified Elasticsearch domain.

" + } + }, + "documentation":"

Status of the VPC options for the specified Elasticsearch domain.

" + }, + "VPCOptions":{ + "type":"structure", + "members":{ + "SubnetIds":{ + "shape":"StringList", + "documentation":"

Specifies the subnets for VPC endpoint.

" + }, + "SecurityGroupIds":{ + "shape":"StringList", + "documentation":"

Specifies the security groups for VPC endpoint.

" + } + }, + "documentation":"

Options to specify the subnets and security groups for VPC endpoint. For more information, see VPC Endpoints for Amazon Elasticsearch Service Domains.

" + }, "ValidationException":{ "type":"structure", "members":{ diff --git a/services/emr/src/main/resources/codegen-resources/service-2.json b/services/emr/src/main/resources/codegen-resources/service-2.json index 9fce8e7544c8..ac8eef9c0fc3 100644 --- a/services/emr/src/main/resources/codegen-resources/service-2.json +++ b/services/emr/src/main/resources/codegen-resources/service-2.json @@ -233,7 +233,7 @@ {"shape":"InternalServerException"}, {"shape":"InvalidRequestException"} ], - "documentation":"

Provides information about the cluster instances that Amazon EMR provisions on behalf of a user when it creates the cluster. For example, this operation indicates when the EC2 instances reach the Ready state, when instances become available to Amazon EMR to use for jobs, and the IP addresses for cluster instances, etc.

" + "documentation":"

Provides information for all active EC2 instances and EC2 instances terminated in the last 30 days, up to a maximum of 2,000. EC2 instances in any of the following states are considered active: AWAITING_FULFILLMENT, PROVISIONING, BOOTSTRAPPING, RUNNING.

" }, "ListSecurityConfigurations":{ "name":"ListSecurityConfigurations", @@ -524,7 +524,7 @@ "documentation":"

This option is for advanced users only. This is meta information about third-party applications that third-party vendors use for testing purposes.

" } }, - "documentation":"

An application is any Amazon or third-party software that you can add to the cluster. This structure contains a list of strings that indicates the software to use with the cluster and accepts a user argument list. Amazon EMR accepts and forwards the argument list to the corresponding installation script as bootstrap action argument. For more information, see Using the MapR Distribution for Hadoop. Currently supported values are:

In Amazon EMR releases 4.0 and greater, the only accepted parameter is the application name. To pass arguments to applications, you supply a configuration for each application.

" + "documentation":"

An application is any Amazon or third-party software that you can add to the cluster. This structure contains a list of strings that indicates the software to use with the cluster and accepts a user argument list. Amazon EMR accepts and forwards the argument list to the corresponding installation script as bootstrap action argument. For more information, see Using the MapR Distribution for Hadoop. Currently supported values are:

In Amazon EMR releases 4.x and later, the only accepted parameter is the application name. To pass arguments to applications, you supply a configuration for each application.

" }, "ApplicationList":{ "type":"list", @@ -789,7 +789,7 @@ }, "ReleaseLabel":{ "shape":"String", - "documentation":"

The release label for the Amazon EMR release. For Amazon EMR 3.x and 2.x AMIs, use amiVersion instead instead of ReleaseLabel.

" + "documentation":"

The release label for the Amazon EMR release.

" }, "AutoTerminate":{ "shape":"Boolean", @@ -825,7 +825,7 @@ }, "Configurations":{ "shape":"ConfigurationList", - "documentation":"

Amazon EMR releases 4.x or later.

The list of Configurations supplied to the EMR cluster.

" + "documentation":"

Applies only to Amazon EMR releases 4.x and later. The list of Configurations supplied to the EMR cluster.

" }, "SecurityConfiguration":{ "shape":"XmlString", @@ -838,6 +838,18 @@ "ScaleDownBehavior":{ "shape":"ScaleDownBehavior", "documentation":"

The way that individual Amazon EC2 instances terminate when an automatic scale-in activity occurs or an instance group is resized. TERMINATE_AT_INSTANCE_HOUR indicates that Amazon EMR terminates nodes at the instance-hour boundary, regardless of when the request to terminate the instance was submitted. This option is only available with Amazon EMR 5.1.0 and later and is the default for clusters created using that version. TERMINATE_AT_TASK_COMPLETION indicates that Amazon EMR blacklists and drains tasks from nodes before terminating the Amazon EC2 instances, regardless of the instance-hour boundary. With either behavior, Amazon EMR removes the least active nodes first and blocks instance termination if it could lead to HDFS corruption. TERMINATE_AT_TASK_COMPLETION is available only in Amazon EMR version 4.1.0 and later, and is the default for versions of Amazon EMR earlier than 5.1.0.

" + }, + "CustomAmiId":{ + "shape":"XmlStringMaxLen256", + "documentation":"

Available only in Amazon EMR version 5.7.0 and later. The ID of a custom Amazon EBS-backed Linux AMI if the cluster uses a custom AMI.

" + }, + "EbsRootVolumeSize":{ + "shape":"Integer", + "documentation":"

The size, in GiB, of the EBS root device volume of the Linux AMI that is used for each EC2 instance. Available in Amazon EMR version 4.x and later.

" + }, + "RepoUpgradeOnBoot":{ + "shape":"RepoUpgradeOnBoot", + "documentation":"

Applies only when CustomAmiID is used. Specifies the type of updates that are applied from the Amazon Linux AMI package repositories when an instance boots using the AMI.
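The three Cluster members added here surface in DescribeCluster output; the hedged sketch below assumes the generated accessor names mirror the member names (customAmiId, ebsRootVolumeSize, repoUpgradeOnBoot) and uses a placeholder cluster ID.

```java
// Hedged sketch: read the custom-AMI related fields this revision adds to Cluster.
import software.amazon.awssdk.services.emr.EmrClient;
import software.amazon.awssdk.services.emr.model.Cluster;
import software.amazon.awssdk.services.emr.model.DescribeClusterRequest;

public class DescribeClusterAmiInfo {
    public static void main(String[] args) {
        EmrClient emr = EmrClient.create();

        Cluster cluster = emr.describeCluster(
                DescribeClusterRequest.builder().clusterId("j-0123456789ABC").build()) // placeholder
                .cluster();

        System.out.println("Custom AMI:        " + cluster.customAmiId());
        System.out.println("EBS root volume:   " + cluster.ebsRootVolumeSize() + " GiB");
        System.out.println("Repo upgrade mode: " + cluster.repoUpgradeOnBoot());       // SECURITY or NONE
    }
}
```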

" } }, "documentation":"

The detailed description of the cluster.

" @@ -875,6 +887,7 @@ "INTERNAL_ERROR", "VALIDATION_ERROR", "INSTANCE_FAILURE", + "INSTANCE_FLEET_TIMEOUT", "BOOTSTRAP_FAILURE", "USER_REQUEST", "STEP_FAILURE", @@ -1248,7 +1261,7 @@ }, "RequestedEc2SubnetIds":{ "shape":"XmlStringMaxLen256List", - "documentation":"

Applies to clusters configured with the instance fleets option. Specifies the unique identifier of one or more Amazon EC2 subnets in which to launch EC2 cluster instances. Amazon EMR chooses the EC2 subnet with the best performance and cost characteristics from among the list of RequestedEc2SubnetIds and launches all cluster instances within that subnet. If this value is not specified, and the account supports EC2-Classic networks, the cluster launches instances in the EC2-Classic network and uses Requested

" + "documentation":"

Applies to clusters configured with the instance fleets option. Specifies the unique identifier of one or more Amazon EC2 subnets in which to launch EC2 cluster instances. Subnets must exist within the same VPC. Amazon EMR chooses the EC2 subnet with the best fit from among the list of RequestedEc2SubnetIds, and then launches all cluster instances within that Subnet. If this value is not specified, and the account and region support EC2-Classic networks, the cluster launches instances in the EC2-Classic network and uses RequestedEc2AvailabilityZones instead of this setting. If EC2-Classic is not supported, and no Subnet is specified, Amazon EMR chooses the subnet for you. RequestedEc2SubnetIDs and RequestedEc2AvailabilityZones cannot be specified together.

" }, "Ec2AvailabilityZone":{ "shape":"String", @@ -1256,7 +1269,7 @@ }, "RequestedEc2AvailabilityZones":{ "shape":"XmlStringMaxLen256List", - "documentation":"

Applies to clusters configured with the The list of availability zones to choose from. The service will choose the availability zone with the best mix of available capacity and lowest cost to launch the cluster. If you do not specify this value, the cluster is launched in any availability zone that the customer account has access to.

" + "documentation":"

Applies to clusters configured with the instance fleets option. Specifies one or more Availability Zones in which to launch EC2 cluster instances when the EC2-Classic network configuration is supported. Amazon EMR chooses the Availability Zone with the best fit from among the list of RequestedEc2AvailabilityZones, and then launches all cluster instances within that Availability Zone. If you do not specify this value, Amazon EMR chooses the Availability Zone for you. RequestedEc2SubnetIDs and RequestedEc2AvailabilityZones cannot be specified together.

" }, "IamInstanceProfile":{ "shape":"String", @@ -2041,7 +2054,7 @@ }, "WeightedCapacity":{ "shape":"WholeNumber", - "documentation":"

The number of units that a provisioned instance of this type provides toward fulfilling the target capacities defined in InstanceFleetConfig. This value is 1 for a master instance fleet, and must be greater than 0 for core and task instance fleets.

" + "documentation":"

The number of units that a provisioned instance of this type provides toward fulfilling the target capacities defined in InstanceFleetConfig. This value is 1 for a master instance fleet, and must be 1 or greater for core and task instance fleets. Defaults to 1 if not specified.

" }, "BidPrice":{ "shape":"XmlStringMaxLen256", @@ -2049,7 +2062,7 @@ }, "BidPriceAsPercentageOfOnDemandPrice":{ "shape":"NonNegativeDouble", - "documentation":"

The bid price, as a percentage of On-Demand price, for each EC2 Spot instance as defined by InstanceType. Expressed as a number between 0 and 1000 (for example, 20 specifies 20%). If neither BidPrice nor BidPriceAsPercentageOfOnDemandPrice is provided, BidPriceAsPercentageOfOnDemandPrice defaults to 100%.

" + "documentation":"

The bid price, as a percentage of On-Demand price, for each EC2 Spot instance as defined by InstanceType. Expressed as a number (for example, 20 specifies 20%). If neither BidPrice nor BidPriceAsPercentageOfOnDemandPrice is provided, BidPriceAsPercentageOfOnDemandPrice defaults to 100%.
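As a small worked example of the weighting and bid-percentage semantics above (illustrative only; InstanceTypeConfig and its builder are assumed to be generated from this shape), one m4.2xlarge below counts as 4 capacity units toward the fleet targets and caps Spot bids at half the On-Demand price.

```java
// Hedged sketch: an instance-fleet entry with a capacity weight and a Spot bid
// expressed as a percentage of the On-Demand price.
import software.amazon.awssdk.services.emr.model.InstanceTypeConfig;

public class InstanceTypeConfigExample {
    public static void main(String[] args) {
        InstanceTypeConfig spotConfig = InstanceTypeConfig.builder()
                .instanceType("m4.2xlarge")
                .weightedCapacity(4)                        // must be 1 or greater for core/task fleets
                .bidPriceAsPercentageOfOnDemandPrice(50.0)  // a percentage (50), not a fraction (0.5)
                .build();

        System.out.println(spotConfig);
    }
}
```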

" }, "EbsConfiguration":{ "shape":"EbsConfiguration", @@ -2162,7 +2175,7 @@ }, "AmiVersion":{ "shape":"XmlStringMaxLen256", - "documentation":"

The version of the AMI used to initialize Amazon EC2 instances in the job flow. For a list of AMI versions currently supported by Amazon EMR, see AMI Versions Supported in EMR in the Amazon EMR Developer Guide.

" + "documentation":"

Used only for version 2.x and 3.x of Amazon EMR. The version of the AMI used to initialize Amazon EC2 instances in the job flow. For a list of AMI versions supported by Amazon EMR, see AMI Versions Supported in EMR in the Amazon EMR Developer Guide.

" }, "ExecutionStatusDetail":{ "shape":"JobFlowExecutionStatusDetail", @@ -2811,6 +2824,13 @@ }, "documentation":"

This output indicates the result of removing tags from a resource.

" }, + "RepoUpgradeOnBoot":{ + "type":"string", + "enum":[ + "SECURITY", + "NONE" + ] + }, "ResourceId":{"type":"string"}, "RunJobFlowInput":{ "type":"structure", @@ -2833,11 +2853,11 @@ }, "AmiVersion":{ "shape":"XmlStringMaxLen256", - "documentation":"

For Amazon EMR releases 3.x and 2.x. For Amazon EMR releases 4.x and greater, use ReleaseLabel.

The version of the Amazon Machine Image (AMI) to use when launching Amazon EC2 instances in the job flow. The following values are valid:

If the AMI supports multiple versions of Hadoop (for example, AMI 1.0 supports both Hadoop 0.18 and 0.20) you can use the JobFlowInstancesConfig HadoopVersion parameter to modify the version of Hadoop from the defaults shown above.

For details about the AMI versions currently supported by Amazon Elastic MapReduce, see AMI Versions Supported in Elastic MapReduce in the Amazon Elastic MapReduce Developer Guide.

Previously, the EMR AMI version API parameter options allowed you to use latest for the latest AMI version rather than specify a numerical value. Some regions no longer support this deprecated option as they only have a newer release label version of EMR, which requires you to specify an EMR release label release (EMR 4.x or later).

" + "documentation":"

For Amazon EMR AMI versions 3.x and 2.x. For Amazon EMR releases 4.0 and later, the Linux AMI is determined by the ReleaseLabel specified or by CustomAmiID. The version of the Amazon Machine Image (AMI) to use when launching Amazon EC2 instances in the job flow. For details about the AMI versions currently supported in EMR version 3.x and 2.x, see AMI Versions Supported in EMR in the Amazon EMR Developer Guide.

If the AMI supports multiple versions of Hadoop (for example, AMI 1.0 supports both Hadoop 0.18 and 0.20), you can use the JobFlowInstancesConfig HadoopVersion parameter to modify the version of Hadoop from the defaults shown above.

Previously, the EMR AMI version API parameter options allowed you to use latest for the latest AMI version rather than specify a numerical value. Some regions no longer support this deprecated option as they only have a newer release label version of EMR, which requires you to specify an EMR release label (EMR 4.x or later).

" }, "ReleaseLabel":{ "shape":"XmlStringMaxLen256", - "documentation":"

Amazon EMR releases 4.x or later.

The release label for the Amazon EMR release. For Amazon EMR 3.x and 2.x AMIs, use amiVersion instead instead of ReleaseLabel.

" + "documentation":"

The release label for the Amazon EMR release. For Amazon EMR 3.x and 2.x AMIs, use AmiVersion instead.

" }, "Instances":{ "shape":"JobFlowInstancesConfig", @@ -2853,19 +2873,19 @@ }, "SupportedProducts":{ "shape":"SupportedProductsList", - "documentation":"

For Amazon EMR releases 3.x and 2.x. For Amazon EMR releases 4.x and greater, use Applications.

A list of strings that indicates third-party software to use. For more information, see Use Third Party Applications with Amazon EMR. Currently supported values are:

" + "documentation":"

For Amazon EMR releases 3.x and 2.x. For Amazon EMR releases 4.x and later, use Applications.

A list of strings that indicates third-party software to use. For more information, see Use Third Party Applications with Amazon EMR. Currently supported values are:

" }, "NewSupportedProducts":{ "shape":"NewSupportedProductsList", - "documentation":"

For Amazon EMR releases 3.x and 2.x. For Amazon EMR releases 4.x and greater, use Applications.

A list of strings that indicates third-party software to use with the job flow that accepts a user argument list. EMR accepts and forwards the argument list to the corresponding installation script as bootstrap action arguments. For more information, see \"Launch a Job Flow on the MapR Distribution for Hadoop\" in the Amazon EMR Developer Guide. Supported values are:

" + "documentation":"

For Amazon EMR releases 3.x and 2.x. For Amazon EMR releases 4.x and later, use Applications.

A list of strings that indicates third-party software to use with the job flow that accepts a user argument list. EMR accepts and forwards the argument list to the corresponding installation script as bootstrap action arguments. For more information, see \"Launch a Job Flow on the MapR Distribution for Hadoop\" in the Amazon EMR Developer Guide. Supported values are:

" }, "Applications":{ "shape":"ApplicationList", - "documentation":"

Amazon EMR releases 4.x or later.

A list of applications for the cluster. Valid values are: \"Hadoop\", \"Hive\", \"Mahout\", \"Pig\", and \"Spark.\" They are case insensitive.

" + "documentation":"

For Amazon EMR releases 4.0 and later. A list of applications for the cluster. Valid values are: \"Hadoop\", \"Hive\", \"Mahout\", \"Pig\", and \"Spark.\" They are case insensitive.

" }, "Configurations":{ "shape":"ConfigurationList", - "documentation":"

Amazon EMR releases 4.x or later.

The list of configurations supplied for the EMR cluster you are creating.

" + "documentation":"

For Amazon EMR releases 4.0 and later. The list of configurations supplied for the EMR cluster you are creating.

" }, "VisibleToAllUsers":{ "shape":"Boolean", @@ -2894,6 +2914,18 @@ "ScaleDownBehavior":{ "shape":"ScaleDownBehavior", "documentation":"

Specifies the way that individual Amazon EC2 instances terminate when an automatic scale-in activity occurs or an instance group is resized. TERMINATE_AT_INSTANCE_HOUR indicates that Amazon EMR terminates nodes at the instance-hour boundary, regardless of when the request to terminate the instance was submitted. This option is only available with Amazon EMR 5.1.0 and later and is the default for clusters created using that version. TERMINATE_AT_TASK_COMPLETION indicates that Amazon EMR blacklists and drains tasks from nodes before terminating the Amazon EC2 instances, regardless of the instance-hour boundary. With either behavior, Amazon EMR removes the least active nodes first and blocks instance termination if it could lead to HDFS corruption. TERMINATE_AT_TASK_COMPLETION is available only in Amazon EMR version 4.1.0 and later, and is the default for versions of Amazon EMR earlier than 5.1.0.

" + }, + "CustomAmiId":{ + "shape":"XmlStringMaxLen256", + "documentation":"

Available only in Amazon EMR version 5.7.0 and later. The ID of a custom Amazon EBS-backed Linux AMI. If specified, Amazon EMR uses this AMI when it launches cluster EC2 instances. For more information about custom AMIs in Amazon EMR, see Using a Custom AMI in the Amazon EMR Management Guide. If omitted, the cluster uses the base Linux AMI for the ReleaseLabel specified. For Amazon EMR versions 2.x and 3.x, use AmiVersion instead.

For information about creating a custom AMI, see Creating an Amazon EBS-Backed Linux AMI in the Amazon Elastic Compute Cloud User Guide for Linux Instances. For information about finding an AMI ID, see Finding a Linux AMI.

" + }, + "EbsRootVolumeSize":{ + "shape":"Integer", + "documentation":"

The size, in GiB, of the EBS root device volume of the Linux AMI that is used for each EC2 instance. Available in Amazon EMR version 4.x and later.

" + }, + "RepoUpgradeOnBoot":{ + "shape":"RepoUpgradeOnBoot", + "documentation":"

Applies only when CustomAmiID is used. Specifies which updates from the Amazon Linux AMI package repositories to apply automatically when the instance boots using the AMI. If omitted, the default is SECURITY, which indicates that only security updates are applied. If NONE is specified, no updates are applied, and all updates must be applied manually.
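
As a quick illustration (not part of the service model), here is a minimal sketch of setting the three new RunJobFlow fields described above (CustomAmiId, EbsRootVolumeSize, RepoUpgradeOnBoot) from the generated Java client. It assumes the v2 SDK's usual EmrClient and builder naming; the AMI ID, roles, and instance types are placeholders.

    import software.amazon.awssdk.services.emr.EmrClient;
    import software.amazon.awssdk.services.emr.model.JobFlowInstancesConfig;
    import software.amazon.awssdk.services.emr.model.RepoUpgradeOnBoot;
    import software.amazon.awssdk.services.emr.model.RunJobFlowRequest;
    import software.amazon.awssdk.services.emr.model.RunJobFlowResponse;

    public class RunJobFlowCustomAmiExample {
        public static void main(String[] args) {
            try (EmrClient emr = EmrClient.create()) {
                RunJobFlowRequest request = RunJobFlowRequest.builder()
                        .name("custom-ami-cluster")
                        .releaseLabel("emr-5.7.0")                     // custom AMIs require EMR 5.7.0 or later
                        .customAmiId("ami-0123456789abcdef0")          // placeholder EBS-backed Linux AMI
                        .ebsRootVolumeSize(20)                         // root device volume size, in GiB
                        .repoUpgradeOnBoot(RepoUpgradeOnBoot.SECURITY) // apply only security updates at boot
                        .serviceRole("EMR_DefaultRole")
                        .jobFlowRole("EMR_EC2_DefaultRole")
                        .instances(JobFlowInstancesConfig.builder()
                                .instanceCount(3)
                                .masterInstanceType("m4.large")
                                .slaveInstanceType("m4.large")
                                .keepJobFlowAliveWhenNoSteps(true)
                                .build())
                        .build();
                RunJobFlowResponse response = emr.runJobFlow(request);
                System.out.println("Started cluster " + response.jobFlowId());
            }
        }
    }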

" } }, "documentation":"

Input to the RunJobFlow operation.

" @@ -3109,7 +3141,7 @@ }, "TimeoutAction":{ "shape":"SpotProvisioningTimeoutAction", - "documentation":"

The action to take when TargetSpotCapacity has not been fulfilled when the TimeoutDurationMinutes has expired. Spot instances are not uprovisioned within the Spot provisioining timeout. Valid values are TERMINATE_CLUSTER and SWITCH_TO_ON_DEMAND to fulfill the remaining capacity.

" + "documentation":"

The action to take when TargetSpotCapacity has not been fulfilled when the TimeoutDurationMinutes has expired; that is, when all Spot instances could not be provisioned within the Spot provisioning timeout. Valid values are TERMINATE_CLUSTER and SWITCH_TO_ON_DEMAND. SWITCH_TO_ON_DEMAND specifies that if no Spot instances are available, On-Demand Instances should be provisioned to fulfill any remaining Spot capacity.
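
To show where TimeoutAction and TimeoutDurationMinutes fit, the sketch below builds a core instance fleet whose unfulfilled Spot capacity switches to On-Demand after ten minutes. It assumes the generated EMR model classes (InstanceFleetConfig, SpotProvisioningSpecification) follow the v2 SDK's standard naming; the instance type and capacities are placeholders.

    import software.amazon.awssdk.services.emr.model.InstanceFleetConfig;
    import software.amazon.awssdk.services.emr.model.InstanceFleetProvisioningSpecifications;
    import software.amazon.awssdk.services.emr.model.InstanceFleetType;
    import software.amazon.awssdk.services.emr.model.InstanceTypeConfig;
    import software.amazon.awssdk.services.emr.model.SpotProvisioningSpecification;
    import software.amazon.awssdk.services.emr.model.SpotProvisioningTimeoutAction;

    public class SpotTimeoutActionExample {
        // Builds a core instance fleet that falls back to On-Demand capacity if the
        // Spot request is not fulfilled within the 10-minute provisioning timeout.
        static InstanceFleetConfig coreFleetWithSpotFallback() {
            return InstanceFleetConfig.builder()
                    .name("core-fleet")
                    .instanceFleetType(InstanceFleetType.CORE)
                    .targetSpotCapacity(4)
                    .instanceTypeConfigs(InstanceTypeConfig.builder()
                            .instanceType("m4.xlarge")               // placeholder instance type
                            .weightedCapacity(1)
                            .bidPriceAsPercentageOfOnDemandPrice(50.0)
                            .build())
                    .launchSpecifications(InstanceFleetProvisioningSpecifications.builder()
                            .spotSpecification(SpotProvisioningSpecification.builder()
                                    .timeoutDurationMinutes(10)
                                    .timeoutAction(SpotProvisioningTimeoutAction.SWITCH_TO_ON_DEMAND)
                                    .build())
                            .build())
                    .build();
        }
    }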

" }, "BlockDurationMinutes":{ "shape":"WholeNumber", diff --git a/services/events/src/main/resources/codegen-resources/service-2.json b/services/events/src/main/resources/codegen-resources/service-2.json index 3c61d4924b66..04286271a65e 100644 --- a/services/events/src/main/resources/codegen-resources/service-2.json +++ b/services/events/src/main/resources/codegen-resources/service-2.json @@ -24,6 +24,20 @@ ], "documentation":"

Deletes the specified rule.

You must remove all targets from a rule using RemoveTargets before you can delete the rule.

When you delete a rule, incoming events might continue to match to the deleted rule. Please allow a short period of time for changes to take effect.

" }, + "DescribeEventBus":{ + "name":"DescribeEventBus", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"DescribeEventBusRequest"}, + "output":{"shape":"DescribeEventBusResponse"}, + "errors":[ + {"shape":"ResourceNotFoundException"}, + {"shape":"InternalException"} + ], + "documentation":"

Displays the external AWS accounts that are permitted to write events to your account using your account's event bus, and the associated policy. To enable your account to receive events from other accounts, use PutPermission.
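
A minimal sketch of calling the new DescribeEventBus operation from the generated Java client (the CloudWatchEventsClient name assumes the v2 SDK's conventions). It prints the bus name, its ARN, and the resource policy attached by PutPermission.

    import software.amazon.awssdk.services.cloudwatchevents.CloudWatchEventsClient;
    import software.amazon.awssdk.services.cloudwatchevents.model.DescribeEventBusRequest;
    import software.amazon.awssdk.services.cloudwatchevents.model.DescribeEventBusResponse;

    public class DescribeEventBusExample {
        public static void main(String[] args) {
            try (CloudWatchEventsClient events = CloudWatchEventsClient.create()) {
                DescribeEventBusResponse bus =
                        events.describeEventBus(DescribeEventBusRequest.builder().build());
                System.out.println("Name:   " + bus.name());   // currently always "default"
                System.out.println("ARN:    " + bus.arn());
                System.out.println("Policy: " + bus.policy()); // accounts granted via PutPermission
            }
        }
    }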

" + }, "DescribeRule":{ "name":"DescribeRule", "http":{ @@ -119,6 +133,21 @@ ], "documentation":"

Sends custom events to Amazon CloudWatch Events so that they can be matched to rules.

" }, + "PutPermission":{ + "name":"PutPermission", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"PutPermissionRequest"}, + "errors":[ + {"shape":"ResourceNotFoundException"}, + {"shape":"PolicyLengthExceededException"}, + {"shape":"InternalException"}, + {"shape":"ConcurrentModificationException"} + ], + "documentation":"

Running PutPermission permits the specified AWS account to put events to your account's default event bus. CloudWatch Events rules in your account are triggered by these events arriving to your default event bus.

For another account to send events to your account, that external account must have a CloudWatch Events rule with your account's default event bus as a target.

To enable multiple AWS accounts to put events to your default event bus, run PutPermission once for each of these accounts.

The permission policy on the default event bus cannot exceed 10KB in size.
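
As a sketch of the grant described above, the following authorizes one external account to put events to the default event bus, assuming the generated CloudWatchEventsClient; the account ID and statement ID are placeholders, and the call is repeated once per account to authorize.

    import software.amazon.awssdk.services.cloudwatchevents.CloudWatchEventsClient;
    import software.amazon.awssdk.services.cloudwatchevents.model.PutPermissionRequest;

    public class PutPermissionExample {
        public static void main(String[] args) {
            try (CloudWatchEventsClient events = CloudWatchEventsClient.create()) {
                // Allow account 111122223333 to send events to this account's default event bus.
                events.putPermission(PutPermissionRequest.builder()
                        .action("events:PutEvents")        // currently the only supported action
                        .principal("111122223333")         // placeholder 12-digit account ID
                        .statementId("Allow111122223333")  // needed later for RemovePermission
                        .build());
            }
        }
    }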

" + }, "PutRule":{ "name":"PutRule", "http":{ @@ -149,7 +178,21 @@ {"shape":"LimitExceededException"}, {"shape":"InternalException"} ], - "documentation":"

Adds the specified targets to the specified rule, or updates the targets if they are already associated with the rule.

Targets are the resources that are invoked when a rule is triggered. Example targets include EC2 instances, AWS Lambda functions, Amazon Kinesis streams, Amazon ECS tasks, AWS Step Functions state machines, and built-in targets. Note that creating rules with built-in targets is supported only in the AWS Management Console.

For some target types, PutTargets provides target-specific parameters. If the target is an Amazon Kinesis stream, you can optionally specify which shard the event goes to by using the KinesisParameters argument. To invoke a command on multiple EC2 instances with one rule, you can use the RunCommandParameters field.

To be able to make API calls against the resources that you own, Amazon CloudWatch Events needs the appropriate permissions. For AWS Lambda and Amazon SNS resources, CloudWatch Events relies on resource-based policies. For EC2 instances, Amazon Kinesis streams, and AWS Step Functions state machines, CloudWatch Events relies on IAM roles that you specify in the RoleARN argument in PutTarget. For more information, see Authentication and Access Control in the Amazon CloudWatch Events User Guide.

Input, InputPath and InputTransformer are mutually exclusive and optional parameters of a target. When a rule is triggered due to a matched event:

When you specify Input, InputPath, or InputTransformer, you must use JSON dot notation, not bracket notation.

When you add targets to a rule and the associated rule triggers soon after, new or updated targets might not be immediately invoked. Please allow a short period of time for changes to take effect.

This action can partially fail if too many requests are made at the same time. If that happens, FailedEntryCount is non-zero in the response and each entry in FailedEntries provides the ID of the failed target and the error code.

" + "documentation":"

Adds the specified targets to the specified rule, or updates the targets if they are already associated with the rule.

Targets are the resources that are invoked when a rule is triggered.

You can configure the following as targets for CloudWatch Events:

Note that creating rules with built-in targets is supported only in the AWS Management Console.

For some target types, PutTargets provides target-specific parameters. If the target is an Amazon Kinesis stream, you can optionally specify which shard the event goes to by using the KinesisParameters argument. To invoke a command on multiple EC2 instances with one rule, you can use the RunCommandParameters field.

To be able to make API calls against the resources that you own, Amazon CloudWatch Events needs the appropriate permissions. For AWS Lambda and Amazon SNS resources, CloudWatch Events relies on resource-based policies. For EC2 instances, Amazon Kinesis streams, and AWS Step Functions state machines, CloudWatch Events relies on IAM roles that you specify in the RoleARN argument in PutTargets. For more information, see Authentication and Access Control in the Amazon CloudWatch Events User Guide.

If another AWS account is in the same region and has granted you permission (using PutPermission), you can send events to that account by setting that account's event bus as a target of the rules in your account. To send the matched events to the other account, specify that account's event bus as the Arn when you run PutTargets. If your account sends events to another account, your account is charged for each sent event. Each event sent to another account is charged as a custom event. The account receiving the event is not charged. For more information on pricing, see Amazon CloudWatch Pricing.

For more information about enabling cross-account events, see PutPermission.

Input, InputPath and InputTransformer are mutually exclusive and optional parameters of a target. When a rule is triggered due to a matched event:

When you specify Input, InputPath, or InputTransformer, you must use JSON dot notation, not bracket notation.

When you add targets to a rule and the associated rule triggers soon after, new or updated targets might not be immediately invoked. Please allow a short period of time for changes to take effect.

This action can partially fail if too many requests are made at the same time. If that happens, FailedEntryCount is non-zero in the response and each entry in FailedEntries provides the ID of the failed target and the error code.
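
To illustrate the cross-account flow, the sketch below runs in the sending account and adds the receiving account's default event bus as a rule target, then checks for partial failures. The rule name, region, and account ID are placeholders, and the client and builder names assume the v2 SDK's generated CloudWatchEventsClient.

    import software.amazon.awssdk.services.cloudwatchevents.CloudWatchEventsClient;
    import software.amazon.awssdk.services.cloudwatchevents.model.PutTargetsRequest;
    import software.amazon.awssdk.services.cloudwatchevents.model.PutTargetsResponse;
    import software.amazon.awssdk.services.cloudwatchevents.model.Target;

    public class CrossAccountTargetExample {
        public static void main(String[] args) {
            try (CloudWatchEventsClient events = CloudWatchEventsClient.create()) {
                // The receiving account (111122223333) must already have called PutPermission
                // to allow this account to put events to its default event bus.
                PutTargetsResponse response = events.putTargets(PutTargetsRequest.builder()
                        .rule("forward-to-other-account")   // placeholder rule in this account
                        .targets(Target.builder()
                                .id("other-account-bus")
                                .arn("arn:aws:events:us-east-1:111122223333:event-bus/default")
                                .build())
                        .build());
                if (response.failedEntryCount() > 0) {
                    response.failedEntries().forEach(e ->
                            System.err.println(e.targetId() + ": " + e.errorCode()));
                }
            }
        }
    }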

" + }, + "RemovePermission":{ + "name":"RemovePermission", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"RemovePermissionRequest"}, + "errors":[ + {"shape":"ResourceNotFoundException"}, + {"shape":"InternalException"}, + {"shape":"ConcurrentModificationException"} + ], + "documentation":"

Revokes the permission of another AWS account to be able to put events to your default event bus. Specify the account to revoke by the StatementId value that you associated with the account when you granted it permission with PutPermission. You can find the StatementId by using DescribeEventBus.
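
And the corresponding revocation, a one-call sketch assuming the same generated client; the statement ID must match the one used with PutPermission, and can be looked up with DescribeEventBus.

    import software.amazon.awssdk.services.cloudwatchevents.CloudWatchEventsClient;
    import software.amazon.awssdk.services.cloudwatchevents.model.RemovePermissionRequest;

    public class RemovePermissionExample {
        public static void main(String[] args) {
            try (CloudWatchEventsClient events = CloudWatchEventsClient.create()) {
                // Revoke the statement added earlier for account 111122223333.
                events.removePermission(RemovePermissionRequest.builder()
                        .statementId("Allow111122223333")
                        .build());
            }
        }
    }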

" }, "RemoveTargets":{ "name":"RemoveTargets", @@ -182,6 +225,12 @@ } }, "shapes":{ + "Action":{ + "type":"string", + "max":64, + "min":1, + "pattern":"events:[a-zA-Z]+" + }, "Arn":{ "type":"string", "max":1600, @@ -205,6 +254,28 @@ } } }, + "DescribeEventBusRequest":{ + "type":"structure", + "members":{ + } + }, + "DescribeEventBusResponse":{ + "type":"structure", + "members":{ + "Name":{ + "shape":"String", + "documentation":"

The name of the event bus. Currently, this is always default.

" + }, + "Arn":{ + "shape":"String", + "documentation":"

The Amazon Resource Name (ARN) of the account permitted to write events to the current account.

" + }, + "Policy":{ + "shape":"String", + "documentation":"

The policy that enables the external account to send events to your account.

" + } + } + }, "DescribeRuleRequest":{ "type":"structure", "required":["Name"], @@ -457,6 +528,19 @@ "max":2048, "min":1 }, + "PolicyLengthExceededException":{ + "type":"structure", + "members":{ + }, + "documentation":"

The event bus policy is too long. For more information, see the limits.

", + "exception":true + }, + "Principal":{ + "type":"string", + "max":12, + "min":1, + "pattern":"(\\d{12}|\\*)" + }, "PutEventsRequest":{ "type":"structure", "required":["Entries"], @@ -534,6 +618,28 @@ "type":"list", "member":{"shape":"PutEventsResultEntry"} }, + "PutPermissionRequest":{ + "type":"structure", + "required":[ + "Action", + "Principal", + "StatementId" + ], + "members":{ + "Action":{ + "shape":"Action", + "documentation":"

The action that you are enabling the other account to perform. Currently, this must be events:PutEvents.

" + }, + "Principal":{ + "shape":"Principal", + "documentation":"

The 12-digit AWS account ID that you are permitting to put events to your default event bus. Specify \"*\" to permit any account to put events to your default event bus.

If you specify \"*\", avoid creating rules that may match undesirable events. To create more secure rules, make sure that the event pattern for each rule contains an account field with a specific account ID from which to receive events. Rules with an account field do not match any events sent from other accounts.

" + }, + "StatementId":{ + "shape":"StatementId", + "documentation":"

An identifier string for the external account that you are granting permissions to. If you later want to revoke the permission for this external account, specify this StatementId when you run RemovePermission.

" + } + } + }, "PutRuleRequest":{ "type":"structure", "required":["Name"], @@ -544,7 +650,7 @@ }, "ScheduleExpression":{ "shape":"ScheduleExpression", - "documentation":"

The scheduling expression. For example, \"cron(0 20 * * ? *)\", \"rate(5 minutes)\".

" + "documentation":"

The scheduling expression. For example, \"cron(0 20 * * ? *)\" or \"rate(5 minutes)\".

" }, "EventPattern":{ "shape":"EventPattern", @@ -625,6 +731,16 @@ "type":"list", "member":{"shape":"PutTargetsResultEntry"} }, + "RemovePermissionRequest":{ + "type":"structure", + "required":["StatementId"], + "members":{ + "StatementId":{ + "shape":"StatementId", + "documentation":"

The statement ID corresponding to the account that is no longer allowed to put events to the default event bus.

" + } + } + }, "RemoveTargetsRequest":{ "type":"structure", "required":[ @@ -681,7 +797,7 @@ "type":"structure", "members":{ }, - "documentation":"

The rule does not exist.

", + "documentation":"

An entity that you specified does not exist.

", "exception":true }, "RoleArn":{ @@ -809,6 +925,12 @@ "type":"string", "max":256 }, + "StatementId":{ + "type":"string", + "max":64, + "min":1, + "pattern":"[a-zA-Z0-9-_]+" + }, "String":{"type":"string"}, "Target":{ "type":"structure", diff --git a/services/gamelift/src/main/resources/codegen-resources/service-2.json b/services/gamelift/src/main/resources/codegen-resources/service-2.json index 483cd5f4c3cb..78dfd40b66f8 100644 --- a/services/gamelift/src/main/resources/codegen-resources/service-2.json +++ b/services/gamelift/src/main/resources/codegen-resources/service-2.json @@ -11,6 +11,22 @@ "uid":"gamelift-2015-10-01" }, "operations":{ + "AcceptMatch":{ + "name":"AcceptMatch", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"AcceptMatchInput"}, + "output":{"shape":"AcceptMatchOutput"}, + "errors":[ + {"shape":"InvalidRequestException"}, + {"shape":"NotFoundException"}, + {"shape":"InternalServiceException"}, + {"shape":"UnsupportedRegionException"} + ], + "documentation":"

Registers a player's acceptance or rejection of a proposed FlexMatch match. A matchmaking configuration may require player acceptance; if so, then matches built with that configuration cannot be completed unless all players accept the proposed match within a specified time limit.

When FlexMatch builds a match, all the matchmaking tickets involved in the proposed match are placed into status REQUIRES_ACCEPTANCE. This is a trigger for your game to get acceptance from all players in the ticket. Acceptances are only valid for tickets when they are in this status; all other acceptances result in an error.

To register acceptance, specify the ticket ID, a response, and one or more players. Once all players have registered acceptance, the matchmaking tickets advance to status PLACING, where a new game session is created for the match.

If any player rejects the match, or if acceptances are not received before a specified timeout, the proposed match is dropped. The matchmaking tickets are then handled in one of two ways: For tickets where all players accepted the match, the ticket status is returned to SEARCHING to find a new match. For tickets where one or more players failed to accept the match, the ticket status is set to FAILED, and processing is terminated. A new matchmaking request for these players can be submitted as needed.
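
A minimal sketch of registering acceptance for a ticket in REQUIRES_ACCEPTANCE status, assuming the generated GameLiftClient and builder naming; the ticket and player IDs are placeholders.

    import software.amazon.awssdk.services.gamelift.GameLiftClient;
    import software.amazon.awssdk.services.gamelift.model.AcceptMatchRequest;
    import software.amazon.awssdk.services.gamelift.model.AcceptanceType;

    public class AcceptMatchExample {
        public static void main(String[] args) {
            try (GameLiftClient gameLift = GameLiftClient.create()) {
                // Record that both players on this ticket accepted the proposed match.
                gameLift.acceptMatch(AcceptMatchRequest.builder()
                        .ticketId("ticket-1234")            // placeholder matchmaking ticket
                        .playerIds("player-1", "player-2")  // players responding on the ticket
                        .acceptanceType(AcceptanceType.ACCEPT)
                        .build());
            }
        }
    }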

Matchmaking-related operations include:

" + }, "CreateAlias":{ "name":"CreateAlias", "http":{ @@ -26,7 +42,7 @@ {"shape":"InternalServiceException"}, {"shape":"LimitExceededException"} ], - "documentation":"

Creates an alias and sets a target fleet. A fleet alias can be used in place of a fleet ID, such as when calling CreateGameSession from a game client or game service or adding destinations to a game session queue. By changing an alias's target fleet, you can switch your players to the new fleet without changing any other component. In production, this feature is particularly useful to redirect your player base seamlessly to the latest game server update.

Amazon GameLift supports two types of routing strategies for aliases: simple and terminal. Use a simple alias to point to an active fleet. Use a terminal alias to display a message to incoming traffic instead of routing players to an active fleet. This option is useful when a game server is no longer supported but you want to provide better messaging than a standard 404 error.

To create a fleet alias, specify an alias name, routing strategy, and optional description. If successful, a new alias record is returned, including an alias ID, which you can reference when creating a game session. To reassign the alias to another fleet ID, call UpdateAlias.

" + "documentation":"

Creates an alias for a fleet. In most situations, you can use an alias ID in place of a fleet ID. By using a fleet alias instead of a specific fleet ID, you can switch gameplay and players to a new fleet without changing your game client or other game components. For example, for games in production, using an alias allows you to seamlessly redirect your player base to a new game server update.

Amazon GameLift supports two types of routing strategies for aliases: simple and terminal. A simple alias points to an active fleet. A terminal alias is used to display messaging or link to a URL instead of routing players to an active fleet. For example, you might use a terminal alias when a game version is no longer supported and you want to direct players to an upgrade site.

To create a fleet alias, specify an alias name, routing strategy, and optional description. Each simple alias can point to only one fleet, but a fleet can have multiple aliases. If successful, a new alias record is returned, including an alias ID, which you can reference when creating a game session. You can reassign an alias to another fleet by calling UpdateAlias.

Alias-related operations include:

" }, "CreateBuild":{ "name":"CreateBuild", @@ -42,7 +58,7 @@ {"shape":"ConflictException"}, {"shape":"InternalServiceException"} ], - "documentation":"

Creates a new Amazon GameLift build from a set of game server binary files stored in an Amazon Simple Storage Service (Amazon S3) location. When using this API call, you must create a .zip file containing all of the build files and store it in an Amazon S3 bucket under your AWS account. For help on packaging your build files and creating a build, see Uploading Your Game to Amazon GameLift.

Use this API action ONLY if you are storing your game build files in an Amazon S3 bucket in your AWS account. To create a build using files stored in a directory, use the CLI command upload-build , which uploads the build files from a file location you specify and creates a build.

To create a new build using CreateBuild, identify the storage location and operating system of your game build. You also have the option of specifying a build name and version. If successful, this action creates a new build record with an unique build ID and in INITIALIZED status. Use the API call DescribeBuild to check the status of your build. A build must be in READY status before it can be used to create fleets to host your game.

" + "documentation":"

Creates a new Amazon GameLift build from a set of game server binary files stored in an Amazon Simple Storage Service (Amazon S3) location. To use this API call, create a .zip file containing all of the files for the build and store it in an Amazon S3 bucket under your AWS account. For help on packaging your build files and creating a build, see Uploading Your Game to Amazon GameLift.

Use this API action ONLY if you are storing your game build files in an Amazon S3 bucket. To create a build using files stored locally, use the CLI command upload-build, which uploads the build files from a file location you specify.

To create a new build using CreateBuild, identify the storage location and operating system of your game build. You also have the option of specifying a build name and version. If successful, this action creates a new build record with an unique build ID and in INITIALIZED status. Use the API call DescribeBuild to check the status of your build. A build must be in READY status before it can be used to create fleets to host your game.

Build-related operations include:

" }, "CreateFleet":{ "name":"CreateFleet", @@ -60,7 +76,7 @@ {"shape":"InvalidRequestException"}, {"shape":"UnauthorizedException"} ], - "documentation":"

Creates a new fleet to run your game servers. A fleet is a set of Amazon Elastic Compute Cloud (Amazon EC2) instances, each of which can run multiple server processes to host game sessions. You configure a fleet to create instances with certain hardware specifications (see Amazon EC2 Instance Types for more information), and deploy a specified game build to each instance. A newly created fleet passes through several statuses; once it reaches the ACTIVE status, it can begin hosting game sessions.

To create a new fleet, you must specify the following: (1) fleet name, (2) build ID of an uploaded game build, (3) an EC2 instance type, and (4) a runtime configuration that describes which server processes to run on each instance in the fleet. (Although the runtime configuration is not a required parameter, the fleet cannot be successfully created without it.) You can also configure the new fleet with the following settings: fleet description, access permissions for inbound traffic, fleet-wide game session protection, and resource creation limit. If you use Amazon CloudWatch for metrics, you can add the new fleet to a metric group, which allows you to view aggregated metrics for a set of fleets. Once you specify a metric group, the new fleet's metrics are included in the metric group's data.

If the CreateFleet call is successful, Amazon GameLift performs the following tasks:

After a fleet is created, use the following actions to change fleet properties and configuration:

" + "documentation":"

Creates a new fleet to run your game servers. A fleet is a set of Amazon Elastic Compute Cloud (Amazon EC2) instances, each of which can run multiple server processes to host game sessions. You configure a fleet to create instances with certain hardware specifications (see Amazon EC2 Instance Types for more information), and deploy a specified game build to each instance. A newly created fleet passes through several statuses; once it reaches the ACTIVE status, it can begin hosting game sessions.

To create a new fleet, you must specify the following: (1) fleet name, (2) build ID of an uploaded game build, (3) an EC2 instance type, and (4) a run-time configuration that describes which server processes to run on each instance in the fleet. (Although the run-time configuration is not a required parameter, the fleet cannot be successfully activated without it.)

You can also configure the new fleet with the following settings:

If you use Amazon CloudWatch for metrics, you can add the new fleet to a metric group. This allows you to view aggregated metrics for a set of fleets. Once you specify a metric group, the new fleet's metrics are included in the metric group's data.

You have the option of creating a VPC peering connection with the new fleet. For more information, see VPC Peering with Amazon GameLift Fleets.

If the CreateFleet call is successful, Amazon GameLift performs the following tasks:

Fleet-related operations include:

" }, "CreateGameSession":{ "name":"CreateGameSession", @@ -82,7 +98,7 @@ {"shape":"LimitExceededException"}, {"shape":"IdempotentParameterMismatchException"} ], - "documentation":"

Creates a multiplayer game session for players. This action creates a game session record and assigns an available server process in the specified fleet to host the game session. A fleet must have an ACTIVE status before a game session can be created in it.

To create a game session, specify either fleet ID or alias ID and indicate a maximum number of players to allow in the game session. You can also provide a name and game-specific properties for this game session. If successful, a GameSession object is returned containing game session properties, including a game session ID with the custom string you provided.

Idempotency tokens. You can add a token that uniquely identifies game session requests. This is useful for ensuring that game session requests are idempotent. Multiple requests with the same idempotency token are processed only once; subsequent requests return the original result. All response values are the same with the exception of game session status, which may change.

Resource creation limits. If you are creating a game session on a fleet with a resource creation limit policy in force, then you must specify a creator ID. Without this ID, Amazon GameLift has no way to evaluate the policy for this new game session request.

By default, newly created game sessions allow new players to join. Use UpdateGameSession to change the game session's player session creation policy.

Available in Amazon GameLift Local.

" + "documentation":"

Creates a multiplayer game session for players. This action creates a game session record and assigns an available server process in the specified fleet to host the game session. A fleet must have an ACTIVE status before a game session can be created in it.

To create a game session, specify either fleet ID or alias ID and indicate a maximum number of players to allow in the game session. You can also provide a name and game-specific properties for this game session. If successful, a GameSession object is returned containing the game session properties and other settings you specified.

Idempotency tokens. You can add a token that uniquely identifies game session requests. This is useful for ensuring that game session requests are idempotent. Multiple requests with the same idempotency token are processed only once; subsequent requests return the original result. All response values are the same with the exception of game session status, which may change.

Resource creation limits. If you are creating a game session on a fleet with a resource creation limit policy in force, then you must specify a creator ID. Without this ID, Amazon GameLift has no way to evaluate the policy for this new game session request.

Player acceptance policy. By default, newly created game sessions are open to new players. You can restrict new player access by using UpdateGameSession to change the game session's player session creation policy.

Game session logs. Logs are retained for all active game sessions for 14 days. To access the logs, call GetGameSessionLogUrl to download the log files.

Available in Amazon GameLift Local.
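
A sketch showing the idempotency token and creator ID described above, assuming the generated GameLiftClient; the fleet ID and player ID are placeholders.

    import software.amazon.awssdk.services.gamelift.GameLiftClient;
    import software.amazon.awssdk.services.gamelift.model.CreateGameSessionRequest;
    import software.amazon.awssdk.services.gamelift.model.GameSession;

    public class CreateGameSessionExample {
        public static void main(String[] args) {
            try (GameLiftClient gameLift = GameLiftClient.create()) {
                GameSession session = gameLift.createGameSession(CreateGameSessionRequest.builder()
                        .fleetId("fleet-1234")                 // placeholder ACTIVE fleet
                        .maximumPlayerSessionCount(16)
                        .name("deathmatch-01")
                        .creatorId("player-1")                 // required if a resource creation limit policy applies
                        .idempotencyToken("deathmatch-01-try") // retries with this token return the original session
                        .build())
                        .gameSession();
                System.out.println("Created " + session.gameSessionId());
            }
        }
    }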

Game-session-related operations include:

" }, "CreateGameSessionQueue":{ "name":"CreateGameSessionQueue", @@ -98,7 +114,39 @@ {"shape":"UnauthorizedException"}, {"shape":"LimitExceededException"} ], - "documentation":"

Establishes a new queue for processing requests to place new game sessions. A queue identifies where new game sessions can be hosted -- by specifying a list of destinations (fleets or aliases) -- and how long requests can wait in the queue before timing out. You can set up a queue to try to place game sessions on fleets in multiple regions. To add placement requests to a queue, call StartGameSessionPlacement and reference the queue name.

Destination order. When processing a request for a game session, Amazon GameLift tries each destination in order until it finds one with available resources to host the new game session. A queue's default order is determined by how destinations are listed. The default order is overridden when a game session placement request provides player latency information. Player latency information enables Amazon GameLift to prioritize destinations where players report the lowest average latency, as a result placing the new game session where the majority of players will have the best possible gameplay experience.

Player latency policies. For placement requests containing player latency information, use player latency policies to protect individual players from very high latencies. With a latency cap, even when a destination can deliver a low latency for most players, the game is not placed where any individual player is reporting latency higher than a policy's maximum. A queue can have multiple latency policies, which are enforced consecutively starting with the policy with the lowest latency cap. Use multiple policies to gradually relax latency controls; for example, you might set a policy with a low latency cap for the first 60 seconds, a second policy with a higher cap for the next 60 seconds, etc.

To create a new queue, provide a name, timeout value, a list of destinations and, if desired, a set of latency policies. If successful, a new queue object is returned.

" + "documentation":"

Establishes a new queue for processing requests to place new game sessions. A queue identifies where new game sessions can be hosted -- by specifying a list of destinations (fleets or aliases) -- and how long requests can wait in the queue before timing out. You can set up a queue to try to place game sessions on fleets in multiple regions. To add placement requests to a queue, call StartGameSessionPlacement and reference the queue name.

Destination order. When processing a request for a game session, Amazon GameLift tries each destination in order until it finds one with available resources to host the new game session. A queue's default order is determined by how destinations are listed. The default order is overridden when a game session placement request provides player latency information. Player latency information enables Amazon GameLift to prioritize destinations where players report the lowest average latency, as a result placing the new game session where the majority of players will have the best possible gameplay experience.

Player latency policies. For placement requests containing player latency information, use player latency policies to protect individual players from very high latencies. With a latency cap, even when a destination can deliver a low latency for most players, the game is not placed where any individual player is reporting latency higher than a policy's maximum. A queue can have multiple latency policies, which are enforced consecutively starting with the policy with the lowest latency cap. Use multiple policies to gradually relax latency controls; for example, you might set a policy with a low latency cap for the first 60 seconds, a second policy with a higher cap for the next 60 seconds, etc.

To create a new queue, provide a name, timeout value, a list of destinations and, if desired, a set of latency policies. If successful, a new queue object is returned.
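
To put the destination ordering and staged latency policies above into code form, here is a sketch assuming the generated GameLiftClient; the fleet ARNs, timeout, and latency caps are placeholders.

    import software.amazon.awssdk.services.gamelift.GameLiftClient;
    import software.amazon.awssdk.services.gamelift.model.CreateGameSessionQueueRequest;
    import software.amazon.awssdk.services.gamelift.model.GameSessionQueueDestination;
    import software.amazon.awssdk.services.gamelift.model.PlayerLatencyPolicy;

    public class CreateQueueExample {
        public static void main(String[] args) {
            try (GameLiftClient gameLift = GameLiftClient.create()) {
                gameLift.createGameSessionQueue(CreateGameSessionQueueRequest.builder()
                        .name("main-queue")
                        .timeoutInSeconds(600)
                        .destinations(                        // tried in order unless latency data overrides
                                GameSessionQueueDestination.builder()
                                        .destinationArn("arn:aws:gamelift:us-east-1::fleet/fleet-1234")
                                        .build(),
                                GameSessionQueueDestination.builder()
                                        .destinationArn("arn:aws:gamelift:us-west-2::fleet/fleet-5678")
                                        .build())
                        .playerLatencyPolicies(               // strict cap for 60 seconds, then relaxed
                                PlayerLatencyPolicy.builder()
                                        .maximumIndividualPlayerLatencyMilliseconds(100)
                                        .policyDurationSeconds(60)
                                        .build(),
                                PlayerLatencyPolicy.builder()
                                        .maximumIndividualPlayerLatencyMilliseconds(200)
                                        .build())
                        .build());
            }
        }
    }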

Queue-related operations include:

" + }, + "CreateMatchmakingConfiguration":{ + "name":"CreateMatchmakingConfiguration", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"CreateMatchmakingConfigurationInput"}, + "output":{"shape":"CreateMatchmakingConfigurationOutput"}, + "errors":[ + {"shape":"InvalidRequestException"}, + {"shape":"LimitExceededException"}, + {"shape":"NotFoundException"}, + {"shape":"InternalServiceException"}, + {"shape":"UnsupportedRegionException"} + ], + "documentation":"

Defines a new matchmaking configuration for use with FlexMatch. A matchmaking configuration sets out guidelines for matching players and getting the matches into games. You can set up multiple matchmaking configurations to handle the scenarios needed for your game. Each matchmaking request (StartMatchmaking) specifies a configuration for the match and provides player attributes to support the configuration being used.

To create a matchmaking configuration, at a minimum you must specify the following: configuration name; a rule set that governs how to evaluate players and find acceptable matches; a game session queue to use when placing a new game session for the match; and the maximum time allowed for a matchmaking attempt.

Player acceptance -- In each configuration, you have the option to require that all players accept participation in a proposed match. To enable this feature, set AcceptanceRequired to true and specify a time limit for player acceptance. Players have the option to accept or reject a proposed match, and a match does not move ahead to game session placement unless all matched players accept.

Matchmaking status notification -- There are two ways to track the progress of matchmaking tickets: (1) polling ticket status with DescribeMatchmaking; or (2) receiving notifications with Amazon Simple Notification Service (SNS). To use notifications, you first need to set up an SNS topic to receive the notifications, and provide the topic ARN in the matchmaking configuration (see Setting up Notifications for Matchmaking). Since notifications promise only \"best effort\" delivery, we recommend calling DescribeMatchmaking if no notifications are received within 30 seconds.
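
A sketch of a configuration with player acceptance and SNS notifications enabled, assuming the generated GameLiftClient; the rule set name, queue ARN, and topic ARN are placeholders that must already exist.

    import software.amazon.awssdk.services.gamelift.GameLiftClient;
    import software.amazon.awssdk.services.gamelift.model.CreateMatchmakingConfigurationRequest;

    public class CreateMatchmakingConfigExample {
        public static void main(String[] args) {
            try (GameLiftClient gameLift = GameLiftClient.create()) {
                gameLift.createMatchmakingConfiguration(CreateMatchmakingConfigurationRequest.builder()
                        .name("ranked-2v2")
                        .ruleSetName("ranked-2v2-rules")  // created earlier with CreateMatchmakingRuleSet
                        .gameSessionQueueArns("arn:aws:gamelift:us-east-1:123456789012:gamesessionqueue/main-queue")
                        .requestTimeoutSeconds(120)       // maximum time allowed for a matchmaking attempt
                        .acceptanceRequired(true)         // all matched players must accept
                        .acceptanceTimeoutSeconds(30)
                        .notificationTarget("arn:aws:sns:us-east-1:123456789012:matchmaking-events")
                        .build());
            }
        }
    }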

Operations related to match configurations and rule sets include:

" + }, + "CreateMatchmakingRuleSet":{ + "name":"CreateMatchmakingRuleSet", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"CreateMatchmakingRuleSetInput"}, + "output":{"shape":"CreateMatchmakingRuleSetOutput"}, + "errors":[ + {"shape":"InvalidRequestException"}, + {"shape":"InternalServiceException"}, + {"shape":"UnsupportedRegionException"} + ], + "documentation":"

Creates a new rule set for FlexMatch matchmaking. A rule set describes the type of match to create, such as the number and size of teams, and sets the parameters for acceptable player matches, such as minimum skill level or character type. Rule sets are used in matchmaking configurations, which define how matchmaking requests are handled. Each MatchmakingConfiguration uses one rule set; you can set up multiple rule sets to handle the scenarios that suit your game (such as for different game modes), and create a separate matchmaking configuration for each rule set. See additional information on rule set content in the MatchmakingRuleSet structure. For help creating rule sets, including useful examples, see the topic Adding FlexMatch to Your Game.

Once created, matchmaking rule sets cannot be changed or deleted, so we recommend checking the rule set syntax using ValidateMatchmakingRuleSet before creating the rule set.

To create a matchmaking rule set, provide the set of rules and a unique name. Rule sets must be defined in the same region as the matchmaking configuration they will be used with. Rule sets cannot be edited or deleted. If you need to change a rule set, create a new one with the necessary edits and then update matchmaking configurations to use the new rule set.

Operations related to match configurations and rule sets include:

" }, "CreatePlayerSession":{ "name":"CreatePlayerSession", @@ -117,7 +165,7 @@ {"shape":"InvalidRequestException"}, {"shape":"NotFoundException"} ], - "documentation":"

Adds a player to a game session and creates a player session record. Before a player can be added, a game session must have an ACTIVE status, have a creation policy of ALLOW_ALL, and have an open player slot. To add a group of players to a game session, use CreatePlayerSessions.

To create a player session, specify a game session ID, player ID, and optionally a string of player data. If successful, the player is added to the game session and a new PlayerSession object is returned. Player sessions cannot be updated.

Available in Amazon GameLift Local.

" + "documentation":"

Adds a player to a game session and creates a player session record. Before a player can be added, a game session must have an ACTIVE status, have a creation policy of ALLOW_ALL, and have an open player slot. To add a group of players to a game session, use CreatePlayerSessions.

To create a player session, specify a game session ID, player ID, and optionally a string of player data. If successful, the player is added to the game session and a new PlayerSession object is returned. Player sessions cannot be updated.

Available in Amazon GameLift Local.

Player-session-related operations include:

" }, "CreatePlayerSessions":{ "name":"CreatePlayerSessions", @@ -136,7 +184,39 @@ {"shape":"InvalidRequestException"}, {"shape":"NotFoundException"} ], - "documentation":"

Adds a group of players to a game session. This action is useful with a team matching feature. Before players can be added, a game session must have an ACTIVE status, have a creation policy of ALLOW_ALL, and have an open player slot. To add a single player to a game session, use CreatePlayerSession.

To create player sessions, specify a game session ID, a list of player IDs, and optionally a set of player data strings. If successful, the players are added to the game session and a set of new PlayerSession objects is returned. Player sessions cannot be updated.

Available in Amazon GameLift Local.

" + "documentation":"

Adds a group of players to a game session. This action is useful with a team matching feature. Before players can be added, a game session must have an ACTIVE status, have a creation policy of ALLOW_ALL, and have an open player slot. To add a single player to a game session, use CreatePlayerSession.

To create player sessions, specify a game session ID, a list of player IDs, and optionally a set of player data strings. If successful, the players are added to the game session and a set of new PlayerSession objects is returned. Player sessions cannot be updated.

Available in Amazon GameLift Local.

Player-session-related operations include:

" + }, + "CreateVpcPeeringAuthorization":{ + "name":"CreateVpcPeeringAuthorization", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"CreateVpcPeeringAuthorizationInput"}, + "output":{"shape":"CreateVpcPeeringAuthorizationOutput"}, + "errors":[ + {"shape":"UnauthorizedException"}, + {"shape":"InvalidRequestException"}, + {"shape":"NotFoundException"}, + {"shape":"InternalServiceException"} + ], + "documentation":"

Requests authorization to create or delete a peer connection between the VPC for your Amazon GameLift fleet and a virtual private cloud (VPC) in your AWS account. VPC peering enables the game servers on your fleet to communicate directly with other AWS resources. Once you've received authorization, call CreateVpcPeeringConnection to establish the peering connection. For more information, see VPC Peering with Amazon GameLift Fleets.

You can peer with VPCs that are owned by any AWS account you have access to, including the account that you use to manage your Amazon GameLift fleets. You cannot peer with VPCs that are in different regions.

To request authorization to create a connection, call this operation from the AWS account with the VPC that you want to peer to your Amazon GameLift fleet. For example, to enable your game servers to retrieve data from a DynamoDB table, use the account that manages that DynamoDB resource. Identify the following values: (1) The ID of the VPC that you want to peer with, and (2) the ID of the AWS account that you use to manage Amazon GameLift. If successful, VPC peering is authorized for the specified VPC.

To request authorization to delete a connection, call this operation from the AWS account with the VPC that is peered with your Amazon GameLift fleet. Identify the following values: (1) VPC ID that you want to delete the peering connection for, and (2) ID of the AWS account that you use to manage Amazon GameLift.

The authorization remains valid for 24 hours unless it is canceled by a call to DeleteVpcPeeringAuthorization. You must create or delete the peering connection while the authorization is valid.

VPC peering connection operations include:

" + }, + "CreateVpcPeeringConnection":{ + "name":"CreateVpcPeeringConnection", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"CreateVpcPeeringConnectionInput"}, + "output":{"shape":"CreateVpcPeeringConnectionOutput"}, + "errors":[ + {"shape":"UnauthorizedException"}, + {"shape":"InvalidRequestException"}, + {"shape":"NotFoundException"}, + {"shape":"InternalServiceException"} + ], + "documentation":"

Establishes a VPC peering connection between a virtual private cloud (VPC) in an AWS account with the VPC for your Amazon GameLift fleet. VPC peering enables the game servers on your fleet to communicate directly with other AWS resources. You can peer with VPCs in any AWS account that you have access to, including the account that you use to manage your Amazon GameLift fleets. You cannot peer with VPCs that are in different regions. For more information, see VPC Peering with Amazon GameLift Fleets.

Before calling this operation to establish the peering connection, you first need to call CreateVpcPeeringAuthorization and identify the VPC you want to peer with. Once the authorization for the specified VPC is issued, you have 24 hours to establish the connection. These two operations handle all tasks necessary to peer the two VPCs, including acceptance, updating routing tables, etc.

To establish the connection, call this operation from the AWS account that is used to manage the Amazon GameLift fleets. Identify the following values: (1) The ID of the fleet you want to enable a VPC peering connection for; (2) The AWS account with the VPC that you want to peer with; and (3) The ID of the VPC you want to peer with. This operation is asynchronous. If successful, a VpcPeeringConnection request is created. You can use continuous polling to track the request's status using DescribeVpcPeeringConnections, or by monitoring fleet events for success or failure using DescribeFleetEvents.
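
A two-step sketch of the flow described above, assuming the generated GameLiftClient; step 1 runs under credentials for the account that owns the VPC, step 2 under credentials for the account that manages the fleet, and all IDs are placeholders.

    import software.amazon.awssdk.services.gamelift.GameLiftClient;
    import software.amazon.awssdk.services.gamelift.model.CreateVpcPeeringAuthorizationRequest;
    import software.amazon.awssdk.services.gamelift.model.CreateVpcPeeringConnectionRequest;

    public class VpcPeeringExample {
        public static void main(String[] args) {
            // Step 1: from the AWS account that owns the VPC you want to peer with.
            try (GameLiftClient vpcOwner = GameLiftClient.create()) {
                vpcOwner.createVpcPeeringAuthorization(CreateVpcPeeringAuthorizationRequest.builder()
                        .gameLiftAwsAccountId("123456789012")   // account that manages the GameLift fleets
                        .peerVpcId("vpc-0abc1234")              // VPC to peer with the fleet
                        .build());
            }
            // Step 2: within 24 hours, from the account that manages the GameLift fleets.
            try (GameLiftClient fleetOwner = GameLiftClient.create()) {
                fleetOwner.createVpcPeeringConnection(CreateVpcPeeringConnectionRequest.builder()
                        .fleetId("fleet-1234")
                        .peerVpcAwsAccountId("111122223333")    // account that owns the VPC
                        .peerVpcId("vpc-0abc1234")
                        .build());
            }
        }
    }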

VPC peering connection operations include:

" }, "DeleteAlias":{ "name":"DeleteAlias", @@ -151,7 +231,7 @@ {"shape":"InvalidRequestException"}, {"shape":"InternalServiceException"} ], - "documentation":"

Deletes a fleet alias. This action removes all record of the alias. Game clients attempting to access a server process using the deleted alias receive an error. To delete an alias, specify the alias ID to be deleted.

" + "documentation":"

Deletes an alias. This action removes all record of the alias. Game clients attempting to access a server process using the deleted alias receive an error. To delete an alias, specify the alias ID to be deleted.

Alias-related operations include:

" }, "DeleteBuild":{ "name":"DeleteBuild", @@ -166,7 +246,7 @@ {"shape":"InternalServiceException"}, {"shape":"InvalidRequestException"} ], - "documentation":"

Deletes a build. This action permanently deletes the build record and any uploaded build files.

To delete a build, specify its ID. Deleting a build does not affect the status of any active fleets using the build, but you can no longer create new fleets with the deleted build.

" + "documentation":"

Deletes a build. This action permanently deletes the build record and any uploaded build files.

To delete a build, specify its ID. Deleting a build does not affect the status of any active fleets using the build, but you can no longer create new fleets with the deleted build.

Build-related operations include:

" }, "DeleteFleet":{ "name":"DeleteFleet", @@ -182,7 +262,7 @@ {"shape":"UnauthorizedException"}, {"shape":"InvalidRequestException"} ], - "documentation":"

Deletes everything related to a fleet. Before deleting a fleet, you must set the fleet's desired capacity to zero. See UpdateFleetCapacity.

This action removes the fleet's resources and the fleet record. Once a fleet is deleted, you can no longer use that fleet.

" + "documentation":"

Deletes everything related to a fleet. Before deleting a fleet, you must set the fleet's desired capacity to zero. See UpdateFleetCapacity.

This action removes the fleet's resources and the fleet record. Once a fleet is deleted, you can no longer use that fleet.

Fleet-related operations include:

" }, "DeleteGameSessionQueue":{ "name":"DeleteGameSessionQueue", @@ -198,7 +278,23 @@ {"shape":"NotFoundException"}, {"shape":"UnauthorizedException"} ], - "documentation":"

Deletes a game session queue. This action means that any StartGameSessionPlacement requests that reference this queue will fail. To delete a queue, specify the queue name.

" + "documentation":"

Deletes a game session queue. This action means that any StartGameSessionPlacement requests that reference this queue will fail. To delete a queue, specify the queue name.

Queue-related operations include:

" + }, + "DeleteMatchmakingConfiguration":{ + "name":"DeleteMatchmakingConfiguration", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"DeleteMatchmakingConfigurationInput"}, + "output":{"shape":"DeleteMatchmakingConfigurationOutput"}, + "errors":[ + {"shape":"InvalidRequestException"}, + {"shape":"NotFoundException"}, + {"shape":"InternalServiceException"}, + {"shape":"UnsupportedRegionException"} + ], + "documentation":"

Permanently removes a FlexMatch matchmaking configuration. To delete, specify the configuration name. A matchmaking configuration cannot be deleted if it is being used in any active matchmaking tickets.

Operations related to match configurations and rule sets include:

" }, "DeleteScalingPolicy":{ "name":"DeleteScalingPolicy", @@ -213,7 +309,39 @@ {"shape":"UnauthorizedException"}, {"shape":"NotFoundException"} ], - "documentation":"

Deletes a fleet scaling policy. This action means that the policy is no longer in force and removes all record of it. To delete a scaling policy, specify both the scaling policy name and the fleet ID it is associated with.

" + "documentation":"

Deletes a fleet scaling policy. This action means that the policy is no longer in force and removes all record of it. To delete a scaling policy, specify both the scaling policy name and the fleet ID it is associated with.

Fleet-related operations include:

" + }, + "DeleteVpcPeeringAuthorization":{ + "name":"DeleteVpcPeeringAuthorization", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"DeleteVpcPeeringAuthorizationInput"}, + "output":{"shape":"DeleteVpcPeeringAuthorizationOutput"}, + "errors":[ + {"shape":"UnauthorizedException"}, + {"shape":"InvalidRequestException"}, + {"shape":"NotFoundException"}, + {"shape":"InternalServiceException"} + ], + "documentation":"

Cancels a pending VPC peering authorization for the specified VPC. If the authorization has already been used to create a peering connection, call DeleteVpcPeeringConnection to remove the connection.

VPC peering connection operations include:

" + }, + "DeleteVpcPeeringConnection":{ + "name":"DeleteVpcPeeringConnection", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"DeleteVpcPeeringConnectionInput"}, + "output":{"shape":"DeleteVpcPeeringConnectionOutput"}, + "errors":[ + {"shape":"UnauthorizedException"}, + {"shape":"InvalidRequestException"}, + {"shape":"NotFoundException"}, + {"shape":"InternalServiceException"} + ], + "documentation":"

Removes a VPC peering connection. To delete the connection, you must have a valid authorization for the VPC peering connection that you want to delete. You can check for an authorization by calling DescribeVpcPeeringAuthorizations or request a new one using CreateVpcPeeringAuthorization.

Once a valid authorization exists, call this operation from the AWS account that is used to manage the Amazon GameLift fleets. Identify the connection to delete by the connection ID and fleet ID. If successful, the connection is removed.

VPC peering connection operations include:

" }, "DescribeAlias":{ "name":"DescribeAlias", @@ -229,7 +357,7 @@ {"shape":"NotFoundException"}, {"shape":"InternalServiceException"} ], - "documentation":"

Retrieves properties for a fleet alias. This operation returns all alias metadata and settings. To get just the fleet ID an alias is currently pointing to, use ResolveAlias.

To get alias properties, specify the alias ID. If successful, an Alias object is returned.

" + "documentation":"

Retrieves properties for an alias. This operation returns all alias metadata and settings. To get an alias's target fleet ID only, use ResolveAlias.

To get alias properties, specify the alias ID. If successful, the requested alias record is returned.

Alias-related operations include:

" }, "DescribeBuild":{ "name":"DescribeBuild", @@ -245,7 +373,7 @@ {"shape":"NotFoundException"}, {"shape":"InternalServiceException"} ], - "documentation":"

Retrieves properties for a build. To get a build record, specify a build ID. If successful, an object containing the build properties is returned.

" + "documentation":"

Retrieves properties for a build. To get a build record, specify a build ID. If successful, an object containing the build properties is returned.

Build-related operations include:

" }, "DescribeEC2InstanceLimits":{ "name":"DescribeEC2InstanceLimits", @@ -260,7 +388,7 @@ {"shape":"InternalServiceException"}, {"shape":"UnauthorizedException"} ], - "documentation":"

Retrieves the following information for the specified EC2 instance type:

Service limits vary depending on region. Available regions for Amazon GameLift can be found in the AWS Management Console for Amazon GameLift (see the drop-down list in the upper right corner).

" + "documentation":"

Retrieves the following information for the specified EC2 instance type:

Service limits vary depending on region. Available regions for Amazon GameLift can be found in the AWS Management Console for Amazon GameLift (see the drop-down list in the upper right corner).

Fleet-related operations include:

" }, "DescribeFleetAttributes":{ "name":"DescribeFleetAttributes", @@ -276,7 +404,7 @@ {"shape":"InvalidRequestException"}, {"shape":"UnauthorizedException"} ], - "documentation":"

Retrieves fleet properties, including metadata, status, and configuration, for one or more fleets. You can request attributes for all fleets, or specify a list of one or more fleet IDs. When requesting multiple fleets, use the pagination parameters to retrieve results as a set of sequential pages. If successful, a FleetAttributes object is returned for each requested fleet ID. When specifying a list of fleet IDs, attribute objects are returned only for fleets that currently exist.

Some API actions may limit the number of fleet IDs allowed in one request. If a request exceeds this limit, the request fails and the error message includes the maximum allowed.

" + "documentation":"

Retrieves fleet properties, including metadata, status, and configuration, for one or more fleets. You can request attributes for all fleets, or specify a list of one or more fleet IDs. When requesting multiple fleets, use the pagination parameters to retrieve results as a set of sequential pages. If successful, a FleetAttributes object is returned for each requested fleet ID. When specifying a list of fleet IDs, attribute objects are returned only for fleets that currently exist.

Some API actions may limit the number of fleet IDs allowed in one request. If a request exceeds this limit, the request fails and the error message includes the maximum allowed.

Fleet-related operations include:

" }, "DescribeFleetCapacity":{ "name":"DescribeFleetCapacity", @@ -292,7 +420,7 @@ {"shape":"InvalidRequestException"}, {"shape":"UnauthorizedException"} ], - "documentation":"

Retrieves the current status of fleet capacity for one or more fleets. This information includes the number of instances that have been requested for the fleet and the number currently active. You can request capacity for all fleets, or specify a list of one or more fleet IDs. When requesting multiple fleets, use the pagination parameters to retrieve results as a set of sequential pages. If successful, a FleetCapacity object is returned for each requested fleet ID. When specifying a list of fleet IDs, attribute objects are returned only for fleets that currently exist.

Some API actions may limit the number of fleet IDs allowed in one request. If a request exceeds this limit, the request fails and the error message includes the maximum allowed.

" + "documentation":"

Retrieves the current status of fleet capacity for one or more fleets. This information includes the number of instances that have been requested for the fleet and the number currently active. You can request capacity for all fleets, or specify a list of one or more fleet IDs. When requesting multiple fleets, use the pagination parameters to retrieve results as a set of sequential pages. If successful, a FleetCapacity object is returned for each requested fleet ID. When specifying a list of fleet IDs, attribute objects are returned only for fleets that currently exist.

Some API actions may limit the number of fleet IDs allowed in one request. If a request exceeds this limit, the request fails and the error message includes the maximum allowed.

Fleet-related operations include:

" }, "DescribeFleetEvents":{ "name":"DescribeFleetEvents", @@ -308,7 +436,7 @@ {"shape":"UnauthorizedException"}, {"shape":"InvalidRequestException"} ], - "documentation":"

Retrieves entries from the specified fleet's event log. You can specify a time range to limit the result set. Use the pagination parameters to retrieve results as a set of sequential pages. If successful, a collection of event log entries matching the request are returned.

" + "documentation":"

Retrieves entries from the specified fleet's event log. You can specify a time range to limit the result set. Use the pagination parameters to retrieve results as a set of sequential pages. If successful, a collection of event log entries matching the request are returned.

Fleet-related operations include:

" }, "DescribeFleetPortSettings":{ "name":"DescribeFleetPortSettings", @@ -324,7 +452,7 @@ {"shape":"InvalidRequestException"}, {"shape":"UnauthorizedException"} ], - "documentation":"

Retrieves the inbound connection permissions for a fleet. Connection permissions include a range of IP addresses and port settings that incoming traffic can use to access server processes in the fleet. To get a fleet's inbound connection permissions, specify a fleet ID. If successful, a collection of IpPermission objects is returned for the requested fleet ID. If the requested fleet has been deleted, the result set is empty.

" + "documentation":"

Retrieves the inbound connection permissions for a fleet. Connection permissions include a range of IP addresses and port settings that incoming traffic can use to access server processes in the fleet. To get a fleet's inbound connection permissions, specify a fleet ID. If successful, a collection of IpPermission objects is returned for the requested fleet ID. If the requested fleet has been deleted, the result set is empty.

Fleet-related operations include:

" }, "DescribeFleetUtilization":{ "name":"DescribeFleetUtilization", @@ -340,7 +468,7 @@ {"shape":"InvalidRequestException"}, {"shape":"UnauthorizedException"} ], - "documentation":"

Retrieves utilization statistics for one or more fleets. You can request utilization data for all fleets, or specify a list of one or more fleet IDs. When requesting multiple fleets, use the pagination parameters to retrieve results as a set of sequential pages. If successful, a FleetUtilization object is returned for each requested fleet ID. When specifying a list of fleet IDs, utilization objects are returned only for fleets that currently exist.

Some API actions may limit the number of fleet IDs allowed in one request. If a request exceeds this limit, the request fails and the error message includes the maximum allowed.

" + "documentation":"

Retrieves utilization statistics for one or more fleets. You can request utilization data for all fleets, or specify a list of one or more fleet IDs. When requesting multiple fleets, use the pagination parameters to retrieve results as a set of sequential pages. If successful, a FleetUtilization object is returned for each requested fleet ID. When specifying a list of fleet IDs, utilization objects are returned only for fleets that currently exist.

Some API actions may limit the number of fleet IDs allowed in one request. If a request exceeds this limit, the request fails and the error message includes the maximum allowed.

Fleet-related operations include:

" }, "DescribeGameSessionDetails":{ "name":"DescribeGameSessionDetails", @@ -357,7 +485,7 @@ {"shape":"UnauthorizedException"}, {"shape":"TerminalRoutingStrategyException"} ], - "documentation":"

Retrieves properties, including the protection policy in force, for one or more game sessions. This action can be used in several ways: (1) provide a GameSessionId or GameSessionArn to request details for a specific game session; (2) provide either a FleetId or an AliasId to request properties for all game sessions running on a fleet.

To get game session record(s), specify just one of the following: game session ID, fleet ID, or alias ID. You can filter this request by game session status. Use the pagination parameters to retrieve results as a set of sequential pages. If successful, a GameSessionDetail object is returned for each session matching the request.

" + "documentation":"

Retrieves properties, including the protection policy in force, for one or more game sessions. This action can be used in several ways: (1) provide a GameSessionId or GameSessionArn to request details for a specific game session; (2) provide either a FleetId or an AliasId to request properties for all game sessions running on a fleet.

To get game session record(s), specify just one of the following: game session ID, fleet ID, or alias ID. You can filter this request by game session status. Use the pagination parameters to retrieve results as a set of sequential pages. If successful, a GameSessionDetail object is returned for each session matching the request.

Game-session-related operations include:

" }, "DescribeGameSessionPlacement":{ "name":"DescribeGameSessionPlacement", @@ -373,7 +501,7 @@ {"shape":"NotFoundException"}, {"shape":"UnauthorizedException"} ], - "documentation":"

Retrieves properties and current status of a game session placement request. To get game session placement details, specify the placement ID. If successful, a GameSessionPlacement object is returned.

" + "documentation":"

Retrieves properties and current status of a game session placement request. To get game session placement details, specify the placement ID. If successful, a GameSessionPlacement object is returned.

Game-session-related operations include:

" }, "DescribeGameSessionQueues":{ "name":"DescribeGameSessionQueues", @@ -389,7 +517,7 @@ {"shape":"NotFoundException"}, {"shape":"UnauthorizedException"} ], - "documentation":"

Retrieves the properties for one or more game session queues. When requesting multiple queues, use the pagination parameters to retrieve results as a set of sequential pages. If successful, a GameSessionQueue object is returned for each requested queue. When specifying a list of queues, objects are returned only for queues that currently exist in the region.

" + "documentation":"

Retrieves the properties for one or more game session queues. When requesting multiple queues, use the pagination parameters to retrieve results as a set of sequential pages. If successful, a GameSessionQueue object is returned for each requested queue. When specifying a list of queues, objects are returned only for queues that currently exist in the region.

Queue-related operations include:

" }, "DescribeGameSessions":{ "name":"DescribeGameSessions", @@ -406,7 +534,7 @@ {"shape":"UnauthorizedException"}, {"shape":"TerminalRoutingStrategyException"} ], - "documentation":"

Retrieves a set of one or more game sessions. Request a specific game session or request all game sessions on a fleet. Alternatively, use SearchGameSessions to request a set of active game sessions that are filtered by certain criteria. To retrieve protection policy settings for game sessions, use DescribeGameSessionDetails.

To get game sessions, specify one of the following: game session ID, fleet ID, or alias ID. You can filter this request by game session status. Use the pagination parameters to retrieve results as a set of sequential pages. If successful, a GameSession object is returned for each game session matching the request.

Available in Amazon GameLift Local.

" + "documentation":"

Retrieves a set of one or more game sessions. Request a specific game session or request all game sessions on a fleet. Alternatively, use SearchGameSessions to request a set of active game sessions that are filtered by certain criteria. To retrieve protection policy settings for game sessions, use DescribeGameSessionDetails.

To get game sessions, specify one of the following: game session ID, fleet ID, or alias ID. You can filter this request by game session status. Use the pagination parameters to retrieve results as a set of sequential pages. If successful, a GameSession object is returned for each game session matching the request.

Available in Amazon GameLift Local.

Game-session-related operations include:

" }, "DescribeInstances":{ "name":"DescribeInstances", @@ -424,6 +552,52 @@ ], "documentation":"

Retrieves information about a fleet's instances, including instance IDs. Use this action to get details on all instances in the fleet or get details on one specific instance.

To get a specific instance, specify fleet ID and instance ID. To get all instances in a fleet, specify a fleet ID only. Use the pagination parameters to retrieve results as a set of sequential pages. If successful, an Instance object is returned for each result.

" }, + "DescribeMatchmaking":{ + "name":"DescribeMatchmaking", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"DescribeMatchmakingInput"}, + "output":{"shape":"DescribeMatchmakingOutput"}, + "errors":[ + {"shape":"InvalidRequestException"}, + {"shape":"InternalServiceException"}, + {"shape":"UnsupportedRegionException"} + ], + "documentation":"

Retrieves a set of one or more matchmaking tickets. Use this operation to retrieve ticket information, including status, and, once a successful match is made, to acquire connection information for the resulting new game session.

You can use this operation to track the progress of matchmaking requests (through polling) as an alternative to using event notifications. See more details on tracking matchmaking requests through polling or notifications in StartMatchmaking.

You can request data for one ticket ID or a list of ticket IDs. If the request is successful, a ticket object is returned for each requested ID. When specifying a list of ticket IDs, objects are returned only for tickets that currently exist.
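To make the polling option concrete, here is a minimal sketch against the generated AWS SDK for Java 2.x client (ticket IDs are placeholders; member names are assumed from the codegen conventions implied by this model):

    import software.amazon.awssdk.services.gamelift.GameLiftClient;
    import software.amazon.awssdk.services.gamelift.model.DescribeMatchmakingRequest;
    import software.amazon.awssdk.services.gamelift.model.MatchmakingTicket;

    public class DescribeMatchmakingExample {
        public static void main(String[] args) {
            GameLiftClient gameLift = GameLiftClient.create();
            // Poll one or more ticket IDs that were returned by StartMatchmaking.
            for (MatchmakingTicket ticket : gameLift.describeMatchmaking(
                    DescribeMatchmakingRequest.builder()
                            .ticketIds("ticket-1234", "ticket-5678") // placeholder ticket IDs
                            .build())
                    .ticketList()) {
                System.out.println(ticket.ticketId() + " is " + ticket.status());
                if (ticket.gameSessionConnectionInfo() != null) {
                    // Completed tickets carry connection details for the new game session.
                    System.out.println("  connect to " + ticket.gameSessionConnectionInfo().ipAddress()
                            + ":" + ticket.gameSessionConnectionInfo().port());
                }
            }
        }
    }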

Matchmaking-related operations include:

" + }, + "DescribeMatchmakingConfigurations":{ + "name":"DescribeMatchmakingConfigurations", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"DescribeMatchmakingConfigurationsInput"}, + "output":{"shape":"DescribeMatchmakingConfigurationsOutput"}, + "errors":[ + {"shape":"InvalidRequestException"}, + {"shape":"InternalServiceException"}, + {"shape":"UnsupportedRegionException"} + ], + "documentation":"

Retrieves the details of FlexMatch matchmaking configurations. With this operation, you have the following options: (1) retrieve all existing configurations, (2) provide the names of one or more configurations to retrieve, or (3) retrieve all configurations that use a specified rule set name. When requesting multiple items, use the pagination parameters to retrieve results as a set of sequential pages. If successful, a configuration is returned for each requested name. When specifying a list of names, only configurations that currently exist are returned.

Operations related to match configurations and rule sets include:

" + }, + "DescribeMatchmakingRuleSets":{ + "name":"DescribeMatchmakingRuleSets", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"DescribeMatchmakingRuleSetsInput"}, + "output":{"shape":"DescribeMatchmakingRuleSetsOutput"}, + "errors":[ + {"shape":"InvalidRequestException"}, + {"shape":"InternalServiceException"}, + {"shape":"NotFoundException"}, + {"shape":"UnsupportedRegionException"} + ], + "documentation":"

Retrieves the details for FlexMatch matchmaking rule sets. You can request all existing rule sets for the region, or provide a list of one or more rule set names. When requesting multiple items, use the pagination parameters to retrieve results as a set of sequential pages. If successful, a rule set is returned for each requested name.

Operations related to match configurations and rule sets include:

" + }, "DescribePlayerSessions":{ "name":"DescribePlayerSessions", "http":{ @@ -438,7 +612,7 @@ {"shape":"InvalidRequestException"}, {"shape":"UnauthorizedException"} ], - "documentation":"

Retrieves properties for one or more player sessions. This action can be used in several ways: (1) provide a PlayerSessionId to request properties for a specific player session; (2) provide a GameSessionId to request properties for all player sessions in the specified game session; (3) provide a PlayerId to request properties for all player sessions of a specified player.

To get game session record(s), specify only one of the following: a player session ID, a game session ID, or a player ID. You can filter this request by player session status. Use the pagination parameters to retrieve results as a set of sequential pages. If successful, a PlayerSession object is returned for each session matching the request.

Available in Amazon GameLift Local.

" + "documentation":"

Retrieves properties for one or more player sessions. This action can be used in several ways: (1) provide a PlayerSessionId to request properties for a specific player session; (2) provide a GameSessionId to request properties for all player sessions in the specified game session; (3) provide a PlayerId to request properties for all player sessions of a specified player.

To get player session records, specify only one of the following: a player session ID, a game session ID, or a player ID. You can filter this request by player session status. Use the pagination parameters to retrieve results as a set of sequential pages. If successful, a PlayerSession object is returned for each session matching the request.

Available in Amazon GameLift Local.

Player-session-related operations include:

" }, "DescribeRuntimeConfiguration":{ "name":"DescribeRuntimeConfiguration", @@ -454,7 +628,7 @@ {"shape":"InternalServiceException"}, {"shape":"InvalidRequestException"} ], - "documentation":"

Retrieves the current runtime configuration for the specified fleet. The runtime configuration tells Amazon GameLift how to launch server processes on instances in the fleet.

" + "documentation":"

Retrieves the current run-time configuration for the specified fleet. The run-time configuration tells Amazon GameLift how to launch server processes on instances in the fleet.

Fleet-related operations include:

" }, "DescribeScalingPolicies":{ "name":"DescribeScalingPolicies", @@ -470,7 +644,38 @@ {"shape":"UnauthorizedException"}, {"shape":"NotFoundException"} ], - "documentation":"

Retrieves all scaling policies applied to a fleet.

To get a fleet's scaling policies, specify the fleet ID. You can filter this request by policy status, such as to retrieve only active scaling policies. Use the pagination parameters to retrieve results as a set of sequential pages. If successful, set of ScalingPolicy objects is returned for the fleet.

" + "documentation":"

Retrieves all scaling policies applied to a fleet.

To get a fleet's scaling policies, specify the fleet ID. You can filter this request by policy status, such as to retrieve only active scaling policies. Use the pagination parameters to retrieve results as a set of sequential pages. If successful, a set of ScalingPolicy objects is returned for the fleet.

Fleet-related operations include:

" + }, + "DescribeVpcPeeringAuthorizations":{ + "name":"DescribeVpcPeeringAuthorizations", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"DescribeVpcPeeringAuthorizationsInput"}, + "output":{"shape":"DescribeVpcPeeringAuthorizationsOutput"}, + "errors":[ + {"shape":"UnauthorizedException"}, + {"shape":"InvalidRequestException"}, + {"shape":"InternalServiceException"} + ], + "documentation":"

Retrieves valid VPC peering authorizations that are pending for the AWS account. This operation returns all VPC peering authorizations and requests for peering. This includes those initiated and received by this account.

VPC peering connection operations include:

" + }, + "DescribeVpcPeeringConnections":{ + "name":"DescribeVpcPeeringConnections", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"DescribeVpcPeeringConnectionsInput"}, + "output":{"shape":"DescribeVpcPeeringConnectionsOutput"}, + "errors":[ + {"shape":"UnauthorizedException"}, + {"shape":"InvalidRequestException"}, + {"shape":"NotFoundException"}, + {"shape":"InternalServiceException"} + ], + "documentation":"

Retrieves information on VPC peering connections. Use this operation to get peering information for all fleets or for one specific fleet ID.

To retrieve connection information, call this operation from the AWS account that is used to manage the Amazon GameLift fleets. Specify a fleet ID or leave the parameter empty to retrieve all connection records. If successful, the retrieved information includes both active and pending connections. Active connections identify the IpV4 CIDR block that the VPC uses to connect.
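A brief sketch of that call from the generated AWS SDK for Java 2.x client (the fleet ID is a placeholder; omit it to list every fleet's peering records; member names are assumed from the codegen conventions):

    import software.amazon.awssdk.services.gamelift.GameLiftClient;
    import software.amazon.awssdk.services.gamelift.model.DescribeVpcPeeringConnectionsRequest;
    import software.amazon.awssdk.services.gamelift.model.VpcPeeringConnection;

    public class DescribeVpcPeeringConnectionsExample {
        public static void main(String[] args) {
            GameLiftClient gameLift = GameLiftClient.create();
            for (VpcPeeringConnection connection : gameLift.describeVpcPeeringConnections(
                    DescribeVpcPeeringConnectionsRequest.builder()
                            .fleetId("fleet-2222bbbb-33cc-44dd-55ee-6666ffff77aa") // placeholder fleet ID
                            .build())
                    .vpcPeeringConnections()) {
                // Prints both active and pending connection records.
                System.out.println(connection.vpcPeeringConnectionId() + " " + connection.status());
            }
        }
    }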

VPC peering connection operations include:

" }, "GetGameSessionLogUrl":{ "name":"GetGameSessionLogUrl", @@ -486,7 +691,7 @@ {"shape":"UnauthorizedException"}, {"shape":"InvalidRequestException"} ], - "documentation":"

Retrieves the location of stored game session logs for a specified game session. When a game session is terminated, Amazon GameLift automatically stores the logs in Amazon S3. Use this URL to download the logs.

See the AWS Service Limits page for maximum log file sizes. Log files that exceed this limit are not saved.

" + "documentation":"

Retrieves the location of stored game session logs for a specified game session. When a game session is terminated, Amazon GameLift automatically stores the logs in Amazon S3 and retains them for 14 days. Use this URL to download the logs.
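For illustration, retrieving and printing the log URL with the generated AWS SDK for Java 2.x client might look like the following sketch (the game session ARN is a placeholder):

    import software.amazon.awssdk.services.gamelift.GameLiftClient;
    import software.amazon.awssdk.services.gamelift.model.GetGameSessionLogUrlRequest;

    public class GetGameSessionLogUrlExample {
        public static void main(String[] args) {
            GameLiftClient gameLift = GameLiftClient.create();
            // The returned pre-signed S3 URL is short-lived, so download the logs promptly.
            String url = gameLift.getGameSessionLogUrl(GetGameSessionLogUrlRequest.builder()
                    .gameSessionId("arn:aws:gamelift:us-west-2::gamesession/fleet-1111aaaa/my-session") // placeholder
                    .build())
                    .preSignedUrl();
            System.out.println("Download logs from: " + url);
        }
    }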

See the AWS Service Limits page for maximum log file sizes. Log files that exceed this limit are not saved.

Game-session-related operations include:

" }, "GetInstanceAccess":{ "name":"GetInstanceAccess", @@ -517,7 +722,7 @@ {"shape":"InvalidRequestException"}, {"shape":"InternalServiceException"} ], - "documentation":"

Retrieves a collection of alias records for this AWS account. You can filter the result set by alias name and/or routing strategy type. Use the pagination parameters to retrieve results in sequential pages.

Aliases are not listed in any particular order.

" + "documentation":"

Retrieves all aliases for this AWS account. You can filter the result set by alias name and/or routing strategy type. Use the pagination parameters to retrieve results in sequential pages.

Returned aliases are not listed in any particular order.

Alias-related operations include:

" }, "ListBuilds":{ "name":"ListBuilds", @@ -532,7 +737,7 @@ {"shape":"InvalidRequestException"}, {"shape":"InternalServiceException"} ], - "documentation":"

Retrieves build records for all builds associated with the AWS account in use. You can limit results to builds that are in a specific status by using the Status parameter. Use the pagination parameters to retrieve results in a set of sequential pages.

Build records are not listed in any particular order.

" + "documentation":"

Retrieves build records for all builds associated with the AWS account in use. You can limit results to builds that are in a specific status by using the Status parameter. Use the pagination parameters to retrieve results in a set of sequential pages.

Build records are not listed in any particular order.

Build-related operations include:

" }, "ListFleets":{ "name":"ListFleets", @@ -548,7 +753,7 @@ {"shape":"InvalidRequestException"}, {"shape":"UnauthorizedException"} ], - "documentation":"

Retrieves a collection of fleet records for this AWS account. You can filter the result set by build ID. Use the pagination parameters to retrieve results in sequential pages.

Fleet records are not listed in any particular order.

" + "documentation":"

Retrieves a collection of fleet records for this AWS account. You can filter the result set by build ID. Use the pagination parameters to retrieve results in sequential pages.

Fleet records are not listed in any particular order.

Fleet-related operations include:

" }, "PutScalingPolicy":{ "name":"PutScalingPolicy", @@ -564,7 +769,7 @@ {"shape":"UnauthorizedException"}, {"shape":"NotFoundException"} ], - "documentation":"

Creates or updates a scaling policy for a fleet. An active scaling policy prompts Amazon GameLift to track a certain metric for a fleet and automatically change the fleet's capacity in specific circumstances. Each scaling policy contains one rule statement. Fleets can have multiple scaling policies in force simultaneously.

A scaling policy rule statement has the following structure:

If [MetricName] is [ComparisonOperator] [Threshold] for [EvaluationPeriods] minutes, then [ScalingAdjustmentType] to/by [ScalingAdjustment].

For example, this policy: \"If the number of idle instances exceeds 20 for more than 15 minutes, then reduce the fleet capacity by 10 instances\" could be implemented as the following rule statement:

If [IdleInstances] is [GreaterThanOrEqualToThreshold] [20] for [15] minutes, then [ChangeInCapacity] by [-10].

To create or update a scaling policy, specify a unique combination of name and fleet ID, and set the rule values. All parameters for this action are required. If successful, the policy name is returned. Scaling policies cannot be suspended or made inactive. To stop enforcing a scaling policy, call DeleteScalingPolicy.

" + "documentation":"

Creates or updates a scaling policy for a fleet. An active scaling policy prompts Amazon GameLift to track a certain metric for a fleet and automatically change the fleet's capacity in specific circumstances. Each scaling policy contains one rule statement. Fleets can have multiple scaling policies in force simultaneously.

A scaling policy rule statement has the following structure:

If [MetricName] is [ComparisonOperator] [Threshold] for [EvaluationPeriods] minutes, then [ScalingAdjustmentType] to/by [ScalingAdjustment].

For example, this policy: \"If the number of idle instances exceeds 20 for more than 15 minutes, then reduce the fleet capacity by 10 instances\" could be implemented as the following rule statement:

If [IdleInstances] is [GreaterThanOrEqualToThreshold] [20] for [15] minutes, then [ChangeInCapacity] by [-10].

To create or update a scaling policy, specify a unique combination of name and fleet ID, and set the rule values. All parameters for this action are required. If successful, the policy name is returned. Scaling policies cannot be suspended or made inactive. To stop enforcing a scaling policy, call DeleteScalingPolicy.
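A minimal sketch of the example rule statement above ("If [IdleInstances] is [GreaterThanOrEqualToThreshold] [20] for [15] minutes, then [ChangeInCapacity] by [-10]") expressed through the generated AWS SDK for Java 2.x client; the fleet ID and policy name are placeholders:

    import software.amazon.awssdk.services.gamelift.GameLiftClient;
    import software.amazon.awssdk.services.gamelift.model.PutScalingPolicyRequest;

    public class PutScalingPolicyExample {
        public static void main(String[] args) {
            GameLiftClient gameLift = GameLiftClient.create();
            String policyName = gameLift.putScalingPolicy(PutScalingPolicyRequest.builder()
                    .name("ScaleDownWhenIdle")                               // placeholder policy name
                    .fleetId("fleet-2222bbbb-33cc-44dd-55ee-6666ffff77aa")   // placeholder fleet ID
                    .metricName("IdleInstances")
                    .comparisonOperator("GreaterThanOrEqualToThreshold")
                    .threshold(20.0)
                    .evaluationPeriods(15)
                    .scalingAdjustmentType("ChangeInCapacity")
                    .scalingAdjustment(-10)
                    .build()).name();
            System.out.println("Policy in force: " + policyName);
        }
    }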

Fleet-related operations include:

" }, "RequestUploadCredentials":{ "name":"RequestUploadCredentials", @@ -597,7 +802,7 @@ {"shape":"TerminalRoutingStrategyException"}, {"shape":"InternalServiceException"} ], - "documentation":"

Retrieves the fleet ID that a specified alias is currently pointing to.

" + "documentation":"

Retrieves the fleet ID that a specified alias is currently pointing to.

Alias-related operations include:

" }, "SearchGameSessions":{ "name":"SearchGameSessions", @@ -614,7 +819,7 @@ {"shape":"UnauthorizedException"}, {"shape":"TerminalRoutingStrategyException"} ], - "documentation":"

Retrieves a set of game sessions that match a set of search criteria and sorts them in a specified order. Currently a game session search is limited to a single fleet. Search results include only game sessions that are in ACTIVE status. If you need to retrieve game sessions with a status other than active, use DescribeGameSessions. If you need to retrieve the protection policy for each game session, use DescribeGameSessionDetails.

You can search or sort by the following game session attributes:

To search or sort, specify either a fleet ID or an alias ID, and provide a search filter expression, a sort expression, or both. Use the pagination parameters to retrieve results as a set of sequential pages. If successful, a collection of GameSession objects matching the request is returned.

Returned values for playerSessionCount and hasAvailablePlayerSessions change quickly as players join sessions and others drop out. Results should be considered a snapshot in time. Be sure to refresh search results often, and handle sessions that fill up before a player can join.

Available in Amazon GameLift Local.

" + "documentation":"

Retrieves a set of game sessions that match a set of search criteria and sorts them in a specified order. A game session search is limited to a single fleet. Search results include only game sessions that are in ACTIVE status. If you need to retrieve game sessions with a status other than active, use DescribeGameSessions. If you need to retrieve the protection policy for each game session, use DescribeGameSessionDetails.

You can search or sort by the following game session attributes:

To search or sort, specify either a fleet ID or an alias ID, and provide a search filter expression, a sort expression, or both. Use the pagination parameters to retrieve results as a set of sequential pages. If successful, a collection of GameSession objects matching the request is returned.
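As a sketch of a combined filter and sort expression using the generated AWS SDK for Java 2.x client (the fleet ID is a placeholder; the expressions use the attribute names listed above):

    import software.amazon.awssdk.services.gamelift.GameLiftClient;
    import software.amazon.awssdk.services.gamelift.model.GameSession;
    import software.amazon.awssdk.services.gamelift.model.SearchGameSessionsRequest;

    public class SearchGameSessionsExample {
        public static void main(String[] args) {
            GameLiftClient gameLift = GameLiftClient.create();
            // Find ACTIVE sessions on one fleet that still have room, fullest sessions first.
            for (GameSession session : gameLift.searchGameSessions(SearchGameSessionsRequest.builder()
                    .fleetId("fleet-2222bbbb-33cc-44dd-55ee-6666ffff77aa")   // placeholder fleet ID
                    .filterExpression("hasAvailablePlayerSessions=true")
                    .sortExpression("playerSessionCount DESC")
                    .limit(25)
                    .build()).gameSessions()) {
                System.out.println(session.gameSessionId() + " players=" + session.currentPlayerSessionCount());
            }
        }
    }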

Returned values for playerSessionCount and hasAvailablePlayerSessions change quickly as players join sessions and others drop out. Results should be considered a snapshot in time. Be sure to refresh search results often, and handle sessions that fill up before a player can join.

Game-session-related operations include:

" }, "StartGameSessionPlacement":{ "name":"StartGameSessionPlacement", @@ -630,7 +835,23 @@ {"shape":"NotFoundException"}, {"shape":"UnauthorizedException"} ], - "documentation":"

Places a request for a new game session in a queue (see CreateGameSessionQueue). When processing a placement request, Amazon GameLift searches for available resources on the queue's destinations, scanning each until it finds resources or the placement request times out.

A game session placement request can also request player sessions. When a new game session is successfully created, Amazon GameLift creates a player session for each player included in the request.

When placing a game session, by default Amazon GameLift tries each fleet in the order they are listed in the queue configuration. Ideally, a queue's destinations are listed in preference order.

Alternatively, when requesting a game session with players, you can also provide latency data for each player in relevant regions. Latency data indicates the performance lag a player experiences when connected to a fleet in the region. Amazon GameLift uses latency data to reorder the list of destinations to place the game session in a region with minimal lag. If latency data is provided for multiple players, Amazon GameLift calculates each region's average lag for all players and reorders to get the best game play across all players.

To place a new game session request, specify the following:

If successful, a new game session placement is created.

To track the status of a placement request, call DescribeGameSessionPlacement and check the request's status. If the status is Fulfilled, a new game session has been created and a game session ARN and region are referenced. If the placement request times out, you can resubmit the request or retry it with a different queue.

" + "documentation":"

Places a request for a new game session in a queue (see CreateGameSessionQueue). When processing a placement request, Amazon GameLift searches for available resources on the queue's destinations, scanning each until it finds resources or the placement request times out.

A game session placement request can also request player sessions. When a new game session is successfully created, Amazon GameLift creates a player session for each player included in the request.

When placing a game session, by default Amazon GameLift tries each fleet in the order they are listed in the queue configuration. Ideally, a queue's destinations are listed in preference order.

Alternatively, when requesting a game session with players, you can also provide latency data for each player in relevant regions. Latency data indicates the performance lag a player experiences when connected to a fleet in the region. Amazon GameLift uses latency data to reorder the list of destinations to place the game session in a region with minimal lag. If latency data is provided for multiple players, Amazon GameLift calculates each region's average lag for all players and reorders to get the best game play across all players.

To place a new game session request, specify the following:

If successful, a new game session placement is created.

To track the status of a placement request, call DescribeGameSessionPlacement and check the request's status. If the status is FULFILLED, a new game session has been created and a game session ARN and region are referenced. If the placement request times out, you can resubmit the request or retry it with a different queue.
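A minimal sketch of the place-then-poll pattern described above, using the generated AWS SDK for Java 2.x client (queue name, player count, and polling interval are placeholders):

    import software.amazon.awssdk.services.gamelift.GameLiftClient;
    import software.amazon.awssdk.services.gamelift.model.DescribeGameSessionPlacementRequest;
    import software.amazon.awssdk.services.gamelift.model.GameSessionPlacement;
    import software.amazon.awssdk.services.gamelift.model.GameSessionPlacementState;
    import software.amazon.awssdk.services.gamelift.model.StartGameSessionPlacementRequest;
    import java.util.UUID;

    public class GameSessionPlacementExample {
        public static void main(String[] args) throws InterruptedException {
            GameLiftClient gameLift = GameLiftClient.create();
            String placementId = UUID.randomUUID().toString();
            gameLift.startGameSessionPlacement(StartGameSessionPlacementRequest.builder()
                    .placementId(placementId)
                    .gameSessionQueueName("my-queue")    // placeholder queue name
                    .maximumPlayerSessionCount(10)
                    .build());
            // Poll until the queue fulfills the placement, cancels it, or it times out.
            GameSessionPlacement placement;
            do {
                Thread.sleep(2000);
                placement = gameLift.describeGameSessionPlacement(
                        DescribeGameSessionPlacementRequest.builder().placementId(placementId).build())
                        .gameSessionPlacement();
            } while (placement.status() == GameSessionPlacementState.PENDING);
            System.out.println(placement.status() + " " + placement.gameSessionArn());
        }
    }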

Game-session-related operations include:

" + }, + "StartMatchmaking":{ + "name":"StartMatchmaking", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"StartMatchmakingInput"}, + "output":{"shape":"StartMatchmakingOutput"}, + "errors":[ + {"shape":"InvalidRequestException"}, + {"shape":"NotFoundException"}, + {"shape":"InternalServiceException"}, + {"shape":"UnsupportedRegionException"} + ], + "documentation":"

Uses FlexMatch to create a game match for a group of players based on custom matchmaking rules, and starts a new game for the matched players. Each matchmaking request specifies the type of match to build (team configuration, rules for an acceptable match, etc.). The request also specifies the players to find a match for and where to host the new game session for optimal performance. A matchmaking request might start with a single player or a group of players who want to play together. FlexMatch finds additional players as needed to fill the match. Match type, rules, and the queue used to place a new game session are defined in a MatchmakingConfiguration. For complete information on setting up and using FlexMatch, see the topic Adding FlexMatch to Your Game.

To start matchmaking, provide a unique ticket ID, specify a matchmaking configuration, and include the players to be matched. You must also include a set of player attributes relevant for the matchmaking configuration. If successful, a matchmaking ticket is returned with status set to QUEUED. Track the status of the ticket to respond as needed and acquire game session connection information for successfully completed matches.
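A minimal request sketch using the generated AWS SDK for Java 2.x client (configuration name, player ID, and the "skill" attribute are placeholders that would have to match your matchmaking configuration and rule set):

    import software.amazon.awssdk.services.gamelift.GameLiftClient;
    import software.amazon.awssdk.services.gamelift.model.AttributeValue;
    import software.amazon.awssdk.services.gamelift.model.MatchmakingTicket;
    import software.amazon.awssdk.services.gamelift.model.Player;
    import software.amazon.awssdk.services.gamelift.model.StartMatchmakingRequest;
    import java.util.Collections;
    import java.util.UUID;

    public class StartMatchmakingExample {
        public static void main(String[] args) {
            GameLiftClient gameLift = GameLiftClient.create();
            Player player = Player.builder()
                    .playerId("player-1")                                       // placeholder player ID
                    .playerAttributes(Collections.singletonMap(
                            "skill", AttributeValue.builder().n(42.0).build())) // attribute used by the rule set
                    .build();
            MatchmakingTicket ticket = gameLift.startMatchmaking(StartMatchmakingRequest.builder()
                    .ticketId(UUID.randomUUID().toString())
                    .configurationName("my-matchmaking-config")                 // placeholder configuration name
                    .players(player)
                    .build()).matchmakingTicket();
            System.out.println("Ticket " + ticket.ticketId() + " is " + ticket.status()); // starts as QUEUED
        }
    }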

Tracking ticket status -- A couple of options are available for tracking the status of matchmaking requests:

Processing a matchmaking request -- FlexMatch handles a matchmaking request as follows:

  1. Your client code submits a StartMatchmaking request for one or more players and tracks the status of the request ticket.

  2. FlexMatch uses this ticket and others in process to build an acceptable match. When a potential match is identified, all tickets in the proposed match are advanced to the next status.

  3. If the match requires player acceptance (set in the matchmaking configuration), the tickets move into status REQUIRES_ACCEPTANCE. This status triggers your client code to solicit acceptance from all players in every ticket involved in the match, and then call AcceptMatch for each player. If any player rejects or fails to accept the match before a specified timeout, the proposed match is dropped (see AcceptMatch for more details).

  4. Once a match is proposed and accepted, the matchmaking tickets move into status PLACING. FlexMatch locates resources for a new game session using the game session queue (set in the matchmaking configuration) and creates the game session based on the match data.

  5. When the match is successfully placed, the matchmaking tickets move into COMPLETED status. Connection information (including game session endpoint and player session) is added to the matchmaking tickets. Matched players can use the connection information to join the game.

Matchmaking-related operations include:

" }, "StopGameSessionPlacement":{ "name":"StopGameSessionPlacement", @@ -646,7 +867,23 @@ {"shape":"NotFoundException"}, {"shape":"UnauthorizedException"} ], - "documentation":"

Cancels a game session placement that is in Pending status. To stop a placement, provide the placement ID values. If successful, the placement is moved to Cancelled status.

" + "documentation":"

Cancels a game session placement that is in PENDING status. To stop a placement, provide the placement ID values. If successful, the placement is moved to CANCELLED status.

Game-session-related operations include:

" + }, + "StopMatchmaking":{ + "name":"StopMatchmaking", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"StopMatchmakingInput"}, + "output":{"shape":"StopMatchmakingOutput"}, + "errors":[ + {"shape":"InvalidRequestException"}, + {"shape":"NotFoundException"}, + {"shape":"InternalServiceException"}, + {"shape":"UnsupportedRegionException"} + ], + "documentation":"

Cancels a matchmaking ticket that is currently being processed. To stop the matchmaking operation, specify the ticket ID. If successful, work on the ticket is stopped, and the ticket status is changed to CANCELLED.

Matchmaking-related operations include:

" }, "UpdateAlias":{ "name":"UpdateAlias", @@ -662,7 +899,7 @@ {"shape":"NotFoundException"}, {"shape":"InternalServiceException"} ], - "documentation":"

Updates properties for a fleet alias. To update properties, specify the alias ID to be updated and provide the information to be changed. To reassign an alias to another fleet, provide an updated routing strategy. If successful, the updated alias record is returned.

" + "documentation":"

Updates properties for an alias. To update properties, specify the alias ID to be updated and provide the information to be changed. To reassign an alias to another fleet, provide an updated routing strategy. If successful, the updated alias record is returned.

Alias-related operations include:

" }, "UpdateBuild":{ "name":"UpdateBuild", @@ -678,7 +915,7 @@ {"shape":"NotFoundException"}, {"shape":"InternalServiceException"} ], - "documentation":"

Updates metadata in a build record, including the build name and version. To update the metadata, specify the build ID to update and provide the new values. If successful, a build object containing the updated metadata is returned.

" + "documentation":"

Updates metadata in a build record, including the build name and version. To update the metadata, specify the build ID to update and provide the new values. If successful, a build object containing the updated metadata is returned.

Build-related operations include:

" }, "UpdateFleetAttributes":{ "name":"UpdateFleetAttributes", @@ -697,7 +934,7 @@ {"shape":"InvalidRequestException"}, {"shape":"UnauthorizedException"} ], - "documentation":"

Updates fleet properties, including name and description, for a fleet. To update metadata, specify the fleet ID and the property values you want to change. If successful, the fleet ID for the updated fleet is returned.

" + "documentation":"

Updates fleet properties, including name and description, for a fleet. To update metadata, specify the fleet ID and the property values that you want to change. If successful, the fleet ID for the updated fleet is returned.

Fleet-related operations include:

" }, "UpdateFleetCapacity":{ "name":"UpdateFleetCapacity", @@ -716,7 +953,7 @@ {"shape":"InvalidRequestException"}, {"shape":"UnauthorizedException"} ], - "documentation":"

Updates capacity settings for a fleet. Use this action to specify the number of EC2 instances (hosts) that you want this fleet to contain. Before calling this action, you may want to call DescribeEC2InstanceLimits to get the maximum capacity based on the fleet's EC2 instance type.

If you're using autoscaling (see PutScalingPolicy), you may want to specify a minimum and/or maximum capacity. If you don't provide these, autoscaling can set capacity anywhere between zero and the service limits.

To update fleet capacity, specify the fleet ID and the number of instances you want the fleet to host. If successful, Amazon GameLift starts or terminates instances so that the fleet's active instance count matches the desired instance count. You can view a fleet's current capacity information by calling DescribeFleetCapacity. If the desired instance count is higher than the instance type's limit, the \"Limit Exceeded\" exception occurs.

" + "documentation":"

Updates capacity settings for a fleet. Use this action to specify the number of EC2 instances (hosts) that you want this fleet to contain. Before calling this action, you may want to call DescribeEC2InstanceLimits to get the maximum capacity based on the fleet's EC2 instance type.

If you're using autoscaling (see PutScalingPolicy), you may want to specify a minimum and/or maximum capacity. If you don't provide these, autoscaling can set capacity anywhere between zero and the service limits.

To update fleet capacity, specify the fleet ID and the number of instances you want the fleet to host. If successful, Amazon GameLift starts or terminates instances so that the fleet's active instance count matches the desired instance count. You can view a fleet's current capacity information by calling DescribeFleetCapacity. If the desired instance count is higher than the instance type's limit, the \"Limit Exceeded\" exception occurs.
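A short sketch of that call with the generated AWS SDK for Java 2.x client (fleet ID and counts are placeholders; MinSize and MaxSize bound any autoscaling policies):

    import software.amazon.awssdk.services.gamelift.GameLiftClient;
    import software.amazon.awssdk.services.gamelift.model.UpdateFleetCapacityRequest;

    public class UpdateFleetCapacityExample {
        public static void main(String[] args) {
            GameLiftClient gameLift = GameLiftClient.create();
            // Ask for 10 instances, and keep autoscaling between 5 and 20 instances.
            String fleetId = gameLift.updateFleetCapacity(UpdateFleetCapacityRequest.builder()
                    .fleetId("fleet-2222bbbb-33cc-44dd-55ee-6666ffff77aa")   // placeholder fleet ID
                    .desiredInstances(10)
                    .minSize(5)
                    .maxSize(20)
                    .build()).fleetId();
            System.out.println("Updated capacity for " + fleetId);
        }
    }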

Fleet-related operations include:

" }, "UpdateFleetPortSettings":{ "name":"UpdateFleetPortSettings", @@ -735,7 +972,7 @@ {"shape":"InvalidRequestException"}, {"shape":"UnauthorizedException"} ], - "documentation":"

Updates port settings for a fleet. To update settings, specify the fleet ID to be updated and list the permissions you want to update. List the permissions you want to add in InboundPermissionAuthorizations, and permissions you want to remove in InboundPermissionRevocations. Permissions to be removed must match existing fleet permissions. If successful, the fleet ID for the updated fleet is returned.

" + "documentation":"

Updates port settings for a fleet. To update settings, specify the fleet ID to be updated and list the permissions you want to update. List the permissions you want to add in InboundPermissionAuthorizations, and permissions you want to remove in InboundPermissionRevocations. Permissions to be removed must match existing fleet permissions. If successful, the fleet ID for the updated fleet is returned.
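For illustration, authorizing one IpPermission and revoking another with the generated AWS SDK for Java 2.x client might look like this sketch (fleet ID, ports, and CIDR range are placeholders):

    import software.amazon.awssdk.services.gamelift.GameLiftClient;
    import software.amazon.awssdk.services.gamelift.model.IpPermission;
    import software.amazon.awssdk.services.gamelift.model.UpdateFleetPortSettingsRequest;

    public class UpdateFleetPortSettingsExample {
        public static void main(String[] args) {
            GameLiftClient gameLift = GameLiftClient.create();
            // Open UDP 33435-33535 to one CIDR block and close a previously authorized TCP port.
            gameLift.updateFleetPortSettings(UpdateFleetPortSettingsRequest.builder()
                    .fleetId("fleet-2222bbbb-33cc-44dd-55ee-6666ffff77aa")   // placeholder fleet ID
                    .inboundPermissionAuthorizations(IpPermission.builder()
                            .fromPort(33435).toPort(33535).ipRange("10.24.34.0/24").protocol("UDP").build())
                    .inboundPermissionRevocations(IpPermission.builder()
                            .fromPort(1935).toPort(1935).ipRange("10.24.34.0/24").protocol("TCP").build())
                    .build());
        }
    }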

Fleet-related operations include:

" }, "UpdateGameSession":{ "name":"UpdateGameSession", @@ -753,7 +990,7 @@ {"shape":"InvalidGameSessionStatusException"}, {"shape":"InvalidRequestException"} ], - "documentation":"

Updates game session properties. This includes the session name, maximum player count, protection policy, which controls whether or not an active game session can be terminated during a scale-down event, and the player session creation policy, which controls whether or not new players can join the session. To update a game session, specify the game session ID and the values you want to change. If successful, an updated GameSession object is returned.

" + "documentation":"

Updates game session properties. This includes the session name, the maximum player count, the protection policy (which controls whether an active game session can be terminated during a scale-down event), and the player session creation policy (which controls whether new players can join the session). To update a game session, specify the game session ID and the values you want to change. If successful, an updated GameSession object is returned.

Game-session-related operations include:

" }, "UpdateGameSessionQueue":{ "name":"UpdateGameSessionQueue", @@ -769,7 +1006,23 @@ {"shape":"NotFoundException"}, {"shape":"UnauthorizedException"} ], - "documentation":"

Updates settings for a game session queue, which determines how new game session requests in the queue are processed. To update settings, specify the queue name to be updated and provide the new settings. When updating destinations, provide a complete list of destinations.

" + "documentation":"

Updates settings for a game session queue, which determines how new game session requests in the queue are processed. To update settings, specify the queue name to be updated and provide the new settings. When updating destinations, provide a complete list of destinations.

Queue-related operations include:

" + }, + "UpdateMatchmakingConfiguration":{ + "name":"UpdateMatchmakingConfiguration", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"UpdateMatchmakingConfigurationInput"}, + "output":{"shape":"UpdateMatchmakingConfigurationOutput"}, + "errors":[ + {"shape":"InvalidRequestException"}, + {"shape":"NotFoundException"}, + {"shape":"InternalServiceException"}, + {"shape":"UnsupportedRegionException"} + ], + "documentation":"

Updates settings for a FlexMatch matchmaking configuration. To update settings, specify the configuration name to be updated and provide the new settings.

Operations related to match configurations and rule sets include:

" }, "UpdateRuntimeConfiguration":{ "name":"UpdateRuntimeConfiguration", @@ -786,10 +1039,60 @@ {"shape":"InvalidRequestException"}, {"shape":"InvalidFleetStatusException"} ], - "documentation":"

Updates the current runtime configuration for the specified fleet, which tells Amazon GameLift how to launch server processes on instances in the fleet. You can update a fleet's runtime configuration at any time after the fleet is created; it does not need to be in an ACTIVE status.

To update runtime configuration, specify the fleet ID and provide a RuntimeConfiguration object with the updated collection of server process configurations.

Each instance in a Amazon GameLift fleet checks regularly for an updated runtime configuration and changes how it launches server processes to comply with the latest version. Existing server processes are not affected by the update; they continue to run until they end, while Amazon GameLift simply adds new server processes to fit the current runtime configuration. As a result, the runtime configuration changes are applied gradually as existing processes shut down and new processes are launched in Amazon GameLift's normal process recycling activity.

" + "documentation":"

Updates the current run-time configuration for the specified fleet, which tells Amazon GameLift how to launch server processes on instances in the fleet. You can update a fleet's run-time configuration at any time after the fleet is created; it does not need to be in an ACTIVE status.

To update run-time configuration, specify the fleet ID and provide a RuntimeConfiguration object with the updated collection of server process configurations.
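A sketch of a RuntimeConfiguration update through the generated AWS SDK for Java 2.x client (fleet ID, launch paths, and parameters are placeholders; each ServerProcess entry describes one executable and how many copies to keep running per instance):

    import software.amazon.awssdk.services.gamelift.GameLiftClient;
    import software.amazon.awssdk.services.gamelift.model.RuntimeConfiguration;
    import software.amazon.awssdk.services.gamelift.model.ServerProcess;
    import software.amazon.awssdk.services.gamelift.model.UpdateRuntimeConfigurationRequest;

    public class UpdateRuntimeConfigurationExample {
        public static void main(String[] args) {
            GameLiftClient gameLift = GameLiftClient.create();
            // Run two copies of the game server and one copy of a debug build on every instance.
            gameLift.updateRuntimeConfiguration(UpdateRuntimeConfigurationRequest.builder()
                    .fleetId("fleet-2222bbbb-33cc-44dd-55ee-6666ffff77aa")       // placeholder fleet ID
                    .runtimeConfiguration(RuntimeConfiguration.builder()
                            .serverProcesses(
                                    ServerProcess.builder()
                                            .launchPath("/local/game/MyServer")  // placeholder build path
                                            .parameters("-port 33435")
                                            .concurrentExecutions(2)
                                            .build(),
                                    ServerProcess.builder()
                                            .launchPath("/local/game/MyServerDebug")
                                            .concurrentExecutions(1)
                                            .build())
                            .build())
                    .build());
        }
    }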

Each instance in an Amazon GameLift fleet checks regularly for an updated run-time configuration and changes how it launches server processes to comply with the latest version. Existing server processes are not affected by the update; they continue to run until they end, while Amazon GameLift simply adds new server processes to fit the current run-time configuration. As a result, the run-time configuration changes are applied gradually as existing processes shut down and new processes are launched in Amazon GameLift's normal process recycling activity.

Fleet-related operations include:

" + }, + "ValidateMatchmakingRuleSet":{ + "name":"ValidateMatchmakingRuleSet", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"ValidateMatchmakingRuleSetInput"}, + "output":{"shape":"ValidateMatchmakingRuleSetOutput"}, + "errors":[ + {"shape":"InternalServiceException"}, + {"shape":"UnsupportedRegionException"}, + {"shape":"InvalidRequestException"} + ], + "documentation":"

Validates the syntax of a matchmaking rule or rule set. This operation checks that the rule set uses syntactically correct JSON and that it conforms to allowed property expressions. To validate syntax, provide a rule set string.
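A minimal validation sketch using the generated AWS SDK for Java 2.x client; the rule set body here is only a placeholder outline, not a complete production rule set:

    import software.amazon.awssdk.services.gamelift.GameLiftClient;
    import software.amazon.awssdk.services.gamelift.model.ValidateMatchmakingRuleSetRequest;

    public class ValidateRuleSetExample {
        public static void main(String[] args) {
            GameLiftClient gameLift = GameLiftClient.create();
            // Placeholder body; a real rule set also defines player attributes, rules, and expansions.
            String ruleSetBody = "{\"name\": \"simple\", \"ruleLanguageVersion\": \"1.0\", "
                    + "\"teams\": [{\"name\": \"players\", \"minPlayers\": 2, \"maxPlayers\": 8}]}";
            Boolean valid = gameLift.validateMatchmakingRuleSet(
                    ValidateMatchmakingRuleSetRequest.builder().ruleSetBody(ruleSetBody).build())
                    .valid();
            System.out.println("Rule set valid: " + valid);
        }
    }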

Operations related to match configurations and rule sets include:

" } }, "shapes":{ + "AcceptMatchInput":{ + "type":"structure", + "required":[ + "TicketId", + "PlayerIds", + "AcceptanceType" + ], + "members":{ + "TicketId":{ + "shape":"MatchmakingIdStringModel", + "documentation":"

Unique identifier for a matchmaking ticket. The ticket must be in status REQUIRES_ACCEPTANCE; otherwise this request will fail.

" + }, + "PlayerIds":{ + "shape":"MatchmakingPlayerIdList", + "documentation":"

Unique identifier for a player delivering the response. This parameter can include one or multiple player IDs.

" + }, + "AcceptanceType":{ + "shape":"AcceptanceType", + "documentation":"

Player response to the proposed match.

" + } + }, + "documentation":"

Represents the input for a request action.
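As an illustration of how these three members fit together in an AcceptMatch call from the generated AWS SDK for Java 2.x client (ticket and player IDs are placeholders):

    import software.amazon.awssdk.services.gamelift.GameLiftClient;
    import software.amazon.awssdk.services.gamelift.model.AcceptMatchRequest;

    public class AcceptMatchExample {
        public static void main(String[] args) {
            GameLiftClient gameLift = GameLiftClient.create();
            // Report that two players in a REQUIRES_ACCEPTANCE ticket accepted the proposed match.
            gameLift.acceptMatch(AcceptMatchRequest.builder()
                    .ticketId("ticket-1234")              // placeholder ticket ID
                    .playerIds("player-1", "player-2")    // placeholder player IDs
                    .acceptanceType("ACCEPT")
                    .build());
        }
    }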

" + }, + "AcceptMatchOutput":{ + "type":"structure", + "members":{ + } + }, + "AcceptanceType":{ + "type":"string", + "enum":[ + "ACCEPT", + "REJECT" + ] + }, "Alias":{ "type":"structure", "members":{ @@ -822,7 +1125,7 @@ "documentation":"

Time stamp indicating when this data object was last modified. Format is a number expressed in Unix time as milliseconds (for example \"1469498468.057\").

" } }, - "documentation":"

Properties describing a fleet alias.

Alias-related operations include:

" + "documentation":"

Properties describing a fleet alias.

Alias-related operations include:

" }, "AliasId":{ "type":"string", @@ -838,25 +1141,48 @@ "min":1, "pattern":"[a-zA-Z0-9:/-]+" }, + "AttributeValue":{ + "type":"structure", + "members":{ + "S":{ + "shape":"NonZeroAndMaxString", + "documentation":"

For single string values. Maximum string length is 100 characters.

" + }, + "N":{ + "shape":"DoubleObject", + "documentation":"

For number values, expressed as double.

" + }, + "SL":{ + "shape":"StringList", + "documentation":"

For a list of up to 10 strings. Maximum length for each string is 100 characters. Duplicate values are not recognized; all occurrences after the first of a repeated value are ignored.

" + }, + "SDM":{ + "shape":"StringDoubleMap", + "documentation":"

For a map of up to 10 type:value pairs. Maximum length for each string value is 100 characters.

" + } + }, + "documentation":"

Values for use in Player attribute type:value pairs. This object lets you specify an attribute value using any of the valid data types: string, number, string array or data map. Each AttributeValue object can use only one of the available properties.
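A short sketch showing one value built for each of the four data types, using the generated AWS SDK for Java 2.x model class (attribute names and values are placeholders):

    import software.amazon.awssdk.services.gamelift.model.AttributeValue;
    import java.util.Arrays;
    import java.util.HashMap;
    import java.util.Map;

    public class AttributeValueExamples {
        public static void main(String[] args) {
            // S  - single string value
            AttributeValue role = AttributeValue.builder().s("healer").build();
            // N  - single number, expressed as a double
            AttributeValue skill = AttributeValue.builder().n(23.5).build();
            // SL - list of up to 10 strings
            AttributeValue maps = AttributeValue.builder().sl(Arrays.asList("forest", "desert")).build();
            // SDM - map of up to 10 string:double pairs
            Map<String, Double> modeSkills = new HashMap<>();
            modeSkills.put("deathmatch", 40.0);
            modeSkills.put("capture_the_flag", 18.0);
            AttributeValue perMode = AttributeValue.builder().sdm(modeSkills).build();
            System.out.println(role + " " + skill + " " + maps + " " + perMode);
        }
    }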

" + }, "AwsCredentials":{ "type":"structure", "members":{ "AccessKeyId":{ "shape":"NonEmptyString", - "documentation":"

Access key for an AWS account.

" + "documentation":"

Temporary key allowing access to the Amazon GameLift S3 account.

" }, "SecretAccessKey":{ "shape":"NonEmptyString", - "documentation":"

Secret key for an AWS account.

" + "documentation":"

Temporary secret key allowing access to the Amazon GameLift S3 account.

" }, "SessionToken":{ "shape":"NonEmptyString", - "documentation":"

Token specific to a build ID.

" + "documentation":"

Token used to associate a specific build ID with the files uploaded using these credentials.

" } }, - "documentation":"

AWS access credentials sometimes used for uploading game build files to Amazon GameLift. They are valid for a limited time. If they expire before you upload your game build, get a new set by calling RequestUploadCredentials.

", + "documentation":"

Temporary access credentials used for uploading game build files to Amazon GameLift. They are valid for a limited time. If they expire before you upload your game build, get a new set by calling RequestUploadCredentials.

", "sensitive":true }, + "Boolean":{"type":"boolean"}, "Build":{ "type":"structure", "members":{ @@ -874,7 +1200,7 @@ }, "Status":{ "shape":"BuildStatus", - "documentation":"

Current status of the build.

Possible build statuses include the following:

" + "documentation":"

Current status of the build.

Possible build statuses include the following:

" }, "SizeOnDisk":{ "shape":"PositiveLong", @@ -1018,11 +1344,11 @@ }, "ServerLaunchPath":{ "shape":"NonZeroAndMaxString", - "documentation":"

This parameter is no longer used. Instead, specify a server launch path using the RuntimeConfiguration parameter. (Requests that specify a server launch path and launch parameters instead of a runtime configuration will continue to work.)

" + "documentation":"

This parameter is no longer used. Instead, specify a server launch path using the RuntimeConfiguration parameter. (Requests that specify a server launch path and launch parameters instead of a run-time configuration will continue to work.)

" }, "ServerLaunchParameters":{ "shape":"NonZeroAndMaxString", - "documentation":"

This parameter is no longer used. Instead, specify server launch parameters in the RuntimeConfiguration parameter. (Requests that specify a server launch path and launch parameters instead of a runtime configuration will continue to work.)

" + "documentation":"

This parameter is no longer used. Instead, specify server launch parameters in the RuntimeConfiguration parameter. (Requests that specify a server launch path and launch parameters instead of a run-time configuration will continue to work.)

" }, "LogPaths":{ "shape":"StringList", @@ -1038,11 +1364,11 @@ }, "NewGameSessionProtectionPolicy":{ "shape":"ProtectionPolicy", - "documentation":"

Game session protection policy to apply to all instances in this fleet. If this parameter is not set, instances in this fleet default to no protection. You can change a fleet's protection policy using UpdateFleetAttributes, but this change will only affect sessions created after the policy change. You can also set protection for individual instances using UpdateGameSession.

" + "documentation":"

Game session protection policy to apply to all instances in this fleet. If this parameter is not set, instances in this fleet default to no protection. You can change a fleet's protection policy using UpdateFleetAttributes, but this change will only affect sessions created after the policy change. You can also set protection for individual instances using UpdateGameSession.

" }, "RuntimeConfiguration":{ "shape":"RuntimeConfiguration", - "documentation":"

Instructions for launching server processes on each instance in the fleet. The runtime configuration for a fleet has a collection of server process configurations, one for each type of server process to run on an instance. A server process configuration specifies the location of the server executable, launch parameters, and the number of concurrent processes with that configuration to maintain on each instance. A CreateFleet request must include a runtime configuration with at least one server process configuration; otherwise the request will fail with an invalid request exception. (This parameter replaces the parameters ServerLaunchPath and ServerLaunchParameters; requests that contain values for these parameters instead of a runtime configuration will continue to work.)

" + "documentation":"

Instructions for launching server processes on each instance in the fleet. The run-time configuration for a fleet has a collection of server process configurations, one for each type of server process to run on an instance. A server process configuration specifies the location of the server executable, launch parameters, and the number of concurrent processes with that configuration to maintain on each instance. A CreateFleet request must include a run-time configuration with at least one server process configuration; otherwise the request fails with an invalid request exception. (This parameter replaces the parameters ServerLaunchPath and ServerLaunchParameters; requests that contain values for these parameters instead of a run-time configuration will continue to work.)

" }, "ResourceCreationLimitPolicy":{ "shape":"ResourceCreationLimitPolicy", @@ -1050,7 +1376,15 @@ }, "MetricGroups":{ "shape":"MetricGroupList", - "documentation":"

Names of metric groups to add this fleet to. Use an existing metric group name to add this fleet to the group, or use a new name to create a new metric group. Currently, a fleet can only be included in one metric group at a time.

" + "documentation":"

Names of metric groups to add this fleet to. Use an existing metric group name to add this fleet to the group, or use a new name to create a new metric group. A fleet can only be included in one metric group at a time.

" + }, + "PeerVpcAwsAccountId":{ + "shape":"NonZeroAndMaxString", + "documentation":"

Unique identifier for the AWS account with the VPC that you want to peer your Amazon GameLift fleet with. You can find your Account ID in the AWS Management Console under account settings.

" + }, + "PeerVpcId":{ + "shape":"NonZeroAndMaxString", + "documentation":"

Unique identifier for a VPC with resources to be accessed by your Amazon GameLift fleet. The VPC must be in the same region where your fleet is deployed. To get VPC information, including IDs, use the Virtual Private Cloud service tools, including the VPC Dashboard in the AWS Management Console.

" } }, "documentation":"

Represents the input for a request action.

" @@ -1087,7 +1421,7 @@ }, "GameProperties":{ "shape":"GamePropertyList", - "documentation":"

Set of developer-defined properties for a game session. These properties are passed to the server process hosting the game session.

" + "documentation":"

Set of developer-defined properties for a game session, formatted as a set of type:value pairs. These properties are included in the GameSession object, which is passed to the game server with a request to start a new game session (see Start a Game Session).

" }, "CreatorId":{ "shape":"NonZeroAndMaxString", @@ -1095,11 +1429,15 @@ }, "GameSessionId":{ "shape":"IdStringModel", - "documentation":"

This parameter is no longer preferred. Please use IdempotencyToken instead. Custom string that uniquely identifies a request for a new game session. Maximum token length is 48 characters. If provided, this string is included in the new game session's ID. (A game session ID has the following format: arn:aws:gamelift:<region>::gamesession/<fleet ID>/<custom ID string or idempotency token>.)

" + "documentation":"

This parameter is no longer preferred. Please use IdempotencyToken instead. Custom string that uniquely identifies a request for a new game session. Maximum token length is 48 characters. If provided, this string is included in the new game session's ID. (A game session ARN has the following format: arn:aws:gamelift:<region>::gamesession/<fleet ID>/<custom ID string or idempotency token>.)

" }, "IdempotencyToken":{ "shape":"IdStringModel", - "documentation":"

Custom string that uniquely identifies a request for a new game session. Maximum token length is 48 characters. If provided, this string is included in the new game session's ID. (A game session ID has the following format: arn:aws:gamelift:<region>::gamesession/<fleet ID>/<custom ID string or idempotency token>.)

" + "documentation":"

Custom string that uniquely identifies a request for a new game session. Maximum token length is 48 characters. If provided, this string is included in the new game session's ID. (A game session ARN has the following format: arn:aws:gamelift:<region>::gamesession/<fleet ID>/<custom ID string or idempotency token>.) Idempotency tokens remain in use for 30 days after a game session has ended; game session objects are retained for this time period and then deleted.

" + }, + "GameSessionData":{ + "shape":"GameSessionData", + "documentation":"

Set of developer-defined game session properties, formatted as a single string value. This data is included in the GameSession object, which is passed to the game server with a request to start a new game session (see Start a Game Session).

" } }, "documentation":"

Represents the input for a request action.
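As a rough illustration of how the members above are used together, the following boto3 sketch (Python) creates a game session with game properties, game session data, and an idempotency token. The fleet ID, property values, and token are placeholders, and MaximumPlayerSessionCount comes from the same input structure.

import boto3

gamelift = boto3.client("gamelift")  # assumes credentials and region are configured

# Hypothetical fleet ID and idempotency token; GameProperties and GameSessionData
# match the members documented above.
response = gamelift.create_game_session(
    FleetId="fleet-a1234567-b8c9-0d1e-2fa3-b45c6d7e8912",
    MaximumPlayerSessionCount=10,
    Name="my-game-session",
    GameProperties=[{"Key": "gameMode", "Value": "brawl"}],
    GameSessionData='{"map": "winter"}',
    IdempotencyToken="request-0001",
)
print(response["GameSession"]["GameSessionId"])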

" @@ -1120,11 +1458,11 @@ "members":{ "Name":{ "shape":"GameSessionQueueName", - "documentation":"

Descriptive label that is associated with queue. Queue names must be unique within each region.

" + "documentation":"

Descriptive label that is associated with a game session queue. Queue names must be unique within each region.

" }, "TimeoutInSeconds":{ "shape":"WholeNumber", - "documentation":"

Maximum time, in seconds, that a new game session placement request remains in the queue. When a request exceeds this time, the game session placement changes to a TIMED_OUT status.

" + "documentation":"

Maximum time, in seconds, that a new game session placement request remains in the queue. When a request exceeds this time, the game session placement changes to a TIMED_OUT status.

" }, "PlayerLatencyPolicies":{ "shape":"PlayerLatencyPolicyList", @@ -1147,6 +1485,106 @@ }, "documentation":"

Represents the returned data in response to a request action.
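For context, a queue with the Name and TimeoutInSeconds members described above could be created with a boto3 call like the following sketch; the queue name and fleet ARN are placeholders.

import boto3

gamelift = boto3.client("gamelift")

# Placements that wait longer than TimeoutInSeconds change to TIMED_OUT status.
queue = gamelift.create_game_session_queue(
    Name="my-queue",  # must be unique within the region
    TimeoutInSeconds=600,
    Destinations=[
        {"DestinationArn": "arn:aws:gamelift:us-west-2::fleet/fleet-a1234567-b8c9-0d1e-2fa3-b45c6d7e8912"}
    ],
)
print(queue["GameSessionQueue"]["GameSessionQueueArn"])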

" }, + "CreateMatchmakingConfigurationInput":{ + "type":"structure", + "required":[ + "Name", + "GameSessionQueueArns", + "RequestTimeoutSeconds", + "AcceptanceRequired", + "RuleSetName" + ], + "members":{ + "Name":{ + "shape":"MatchmakingIdStringModel", + "documentation":"

Unique identifier for a matchmaking configuration. This name is used to identify the configuration associated with a matchmaking request or ticket.

" + }, + "Description":{ + "shape":"NonZeroAndMaxString", + "documentation":"

Meaningful description of the matchmaking configuration.

" + }, + "GameSessionQueueArns":{ + "shape":"QueueArnsList", + "documentation":"

Amazon Resource Name (ARN) that is assigned to a game session queue and uniquely identifies it. Format is arn:aws:gamelift:<region>::gamesessionqueue/<queue name>. These queues are used when placing game sessions for matches that are created with this matchmaking configuration. Queues can be located in any region.

" + }, + "RequestTimeoutSeconds":{ + "shape":"MatchmakingRequestTimeoutInteger", + "documentation":"

Maximum duration, in seconds, that a matchmaking ticket can remain in process before timing out. Requests that time out can be resubmitted as needed.

" + }, + "AcceptanceTimeoutSeconds":{ + "shape":"MatchmakingAcceptanceTimeoutInteger", + "documentation":"

Length of time (in seconds) to wait for players to accept a proposed match. If any player rejects the match or fails to accept before the timeout, the ticket continues to look for an acceptable match.

" + }, + "AcceptanceRequired":{ + "shape":"Boolean", + "documentation":"

Flag that determines whether or not a match that was created with this configuration must be accepted by the matched players. To require acceptance, set to TRUE.

" + }, + "RuleSetName":{ + "shape":"MatchmakingIdStringModel", + "documentation":"

Unique identifier for a matchmaking rule set to use with this configuration. A matchmaking configuration can only use rule sets that are defined in the same region.

" + }, + "NotificationTarget":{ + "shape":"SnsArnStringModel", + "documentation":"

SNS topic ARN that is set up to receive matchmaking notifications.

" + }, + "AdditionalPlayerCount":{ + "shape":"WholeNumber", + "documentation":"

Number of player slots in a match to keep open for future players. For example, if the configuration's rule set specifies a match for a single 12-person team, and the additional player count is set to 2, only 10 players are selected for the match.

" + }, + "CustomEventData":{ + "shape":"CustomEventData", + "documentation":"

Information to attach to all events related to the matchmaking configuration.

" + }, + "GameProperties":{ + "shape":"GamePropertyList", + "documentation":"

Set of developer-defined properties for a game session, formatted as a set of type:value pairs. These properties are included in the GameSession object, which is passed to the game server with a request to start a new game session (see Start a Game Session). This information is added to the new GameSession object that is created for a successful match.

" + }, + "GameSessionData":{ + "shape":"GameSessionData", + "documentation":"

Set of developer-defined game session properties, formatted as a single string value. This data is included in the GameSession object, which is passed to the game server with a request to start a new game session (see Start a Game Session). This information is added to the new GameSession object that is created for a successful match.

" + } + }, + "documentation":"

Represents the input for a request action.
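A minimal boto3 sketch of this input; the configuration name, queue ARN, and rule set name are placeholders, and AcceptanceTimeoutSeconds is only meaningful when AcceptanceRequired is true.

import boto3

gamelift = boto3.client("gamelift")

config = gamelift.create_matchmaking_configuration(
    Name="my-matchmaker",
    GameSessionQueueArns=[
        "arn:aws:gamelift:us-west-2::gamesessionqueue/my-queue"  # hypothetical queue ARN
    ],
    RequestTimeoutSeconds=120,
    AcceptanceRequired=True,
    AcceptanceTimeoutSeconds=30,
    RuleSetName="my-rule-set",
    AdditionalPlayerCount=2,
)
print(config["Configuration"]["Name"])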

" + }, + "CreateMatchmakingConfigurationOutput":{ + "type":"structure", + "members":{ + "Configuration":{ + "shape":"MatchmakingConfiguration", + "documentation":"

Object that describes the newly created matchmaking configuration.

" + } + }, + "documentation":"

Represents the returned data in response to a request action.

" + }, + "CreateMatchmakingRuleSetInput":{ + "type":"structure", + "required":[ + "Name", + "RuleSetBody" + ], + "members":{ + "Name":{ + "shape":"MatchmakingIdStringModel", + "documentation":"

Unique identifier for a matchmaking rule set. This name is used to identify the rule set associated with a matchmaking configuration.

" + }, + "RuleSetBody":{ + "shape":"RuleSetBody", + "documentation":"

Collection of matchmaking rules, formatted as a JSON string. (Note that comments are not allowed in JSON, but most elements support a description field.)

" + } + }, + "documentation":"

Represents the input for a request action.
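A boto3 sketch of creating a rule set from a JSON string. The body shown follows the FlexMatch rule language (fields such as name, ruleLanguageVersion, and teams); that schema is an assumption here and is not defined by this service model.

import json
import boto3

gamelift = boto3.client("gamelift")

# Illustrative rule set body: a single four-player team with no extra rules.
rule_set_body = json.dumps({
    "name": "simple-4-player",
    "ruleLanguageVersion": "1.0",
    "teams": [{"name": "players", "minPlayers": 4, "maxPlayers": 4}],
})

rule_set = gamelift.create_matchmaking_rule_set(
    Name="my-rule-set",
    RuleSetBody=rule_set_body,
)
print(rule_set["RuleSet"]["RuleSetName"])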

" + }, + "CreateMatchmakingRuleSetOutput":{ + "type":"structure", + "required":["RuleSet"], + "members":{ + "RuleSet":{ + "shape":"MatchmakingRuleSet", + "documentation":"

Object that describes the newly created matchmaking rule set.

" + } + }, + "documentation":"

Represents the returned data in response to a request action.

" + }, "CreatePlayerSessionInput":{ "type":"structure", "required":[ @@ -1211,6 +1649,67 @@ }, "documentation":"

Represents the returned data in response to a request action.

" }, + "CreateVpcPeeringAuthorizationInput":{ + "type":"structure", + "required":[ + "GameLiftAwsAccountId", + "PeerVpcId" + ], + "members":{ + "GameLiftAwsAccountId":{ + "shape":"NonZeroAndMaxString", + "documentation":"

Unique identifier for the AWS account that you use to manage your Amazon GameLift fleet. You can find your Account ID in the AWS Management Console under account settings.

" + }, + "PeerVpcId":{ + "shape":"NonZeroAndMaxString", + "documentation":"

Unique identifier for a VPC with resources to be accessed by your Amazon GameLift fleet. The VPC must be in the same region where your fleet is deployed. To get VPC information, including IDs, use the Virtual Private Cloud service tools, including the VPC Dashboard in the AWS Management Console.

" + } + }, + "documentation":"

Represents the input for a request action.

" + }, + "CreateVpcPeeringAuthorizationOutput":{ + "type":"structure", + "members":{ + "VpcPeeringAuthorization":{ + "shape":"VpcPeeringAuthorization", + "documentation":"

Details on the requested VPC peering authorization, including expiration.

" + } + }, + "documentation":"

Represents the returned data in response to a request action.

" + }, + "CreateVpcPeeringConnectionInput":{ + "type":"structure", + "required":[ + "FleetId", + "PeerVpcAwsAccountId", + "PeerVpcId" + ], + "members":{ + "FleetId":{ + "shape":"FleetId", + "documentation":"

Unique identifier for a fleet. This tells Amazon GameLift which GameLift VPC to peer with.

" + }, + "PeerVpcAwsAccountId":{ + "shape":"NonZeroAndMaxString", + "documentation":"

Unique identifier for the AWS account with the VPC that you want to peer your Amazon GameLift fleet with. You can find your Account ID in the AWS Management Console under account settings.

" + }, + "PeerVpcId":{ + "shape":"NonZeroAndMaxString", + "documentation":"

Unique identifier for a VPC with resources to be accessed by your Amazon GameLift fleet. The VPC must be in the same region where your fleet is deployed. To get VPC information, including IDs, use the Virtual Private Cloud service tools, including the VPC Dashboard in the AWS Management Console.

" + } + }, + "documentation":"

Represents the input for a request action.
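These peering inputs are used as a two-step flow: an authorization is created first, then the connection request references the fleet and the peer VPC. A boto3 sketch, with hypothetical account IDs, VPC ID, and fleet ID:

import boto3

gamelift = boto3.client("gamelift")

# Step 1: authorize peering, naming the account that manages the fleet and the
# VPC that the fleet should reach (see CreateVpcPeeringAuthorizationInput above).
gamelift.create_vpc_peering_authorization(
    GameLiftAwsAccountId="111122223333",
    PeerVpcId="vpc-0123456789abcdef0",
)

# Step 2: request the peering connection for a specific fleet.
gamelift.create_vpc_peering_connection(
    FleetId="fleet-a1234567-b8c9-0d1e-2fa3-b45c6d7e8912",
    PeerVpcAwsAccountId="444455556666",
    PeerVpcId="vpc-0123456789abcdef0",
)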

" + }, + "CreateVpcPeeringConnectionOutput":{ + "type":"structure", + "members":{ + } + }, + "CustomEventData":{ + "type":"string", + "max":256, + "min":0 + }, "DeleteAliasInput":{ "type":"structure", "required":["AliasId"], @@ -1250,7 +1749,7 @@ "members":{ "Name":{ "shape":"GameSessionQueueName", - "documentation":"

Descriptive label that is associated with queue. Queue names must be unique within each region.

" + "documentation":"

Descriptive label that is associated with a game session queue. Queue names must be unique within each region.

" } }, "documentation":"

Represents the input for a request action.

" @@ -1260,6 +1759,22 @@ "members":{ } }, + "DeleteMatchmakingConfigurationInput":{ + "type":"structure", + "required":["Name"], + "members":{ + "Name":{ + "shape":"MatchmakingIdStringModel", + "documentation":"

Unique identifier for a matchmaking configuration.

" + } + }, + "documentation":"

Represents the input for a request action.

" + }, + "DeleteMatchmakingConfigurationOutput":{ + "type":"structure", + "members":{ + } + }, "DeleteScalingPolicyInput":{ "type":"structure", "required":[ @@ -1278,6 +1793,52 @@ }, "documentation":"

Represents the input for a request action.

" }, + "DeleteVpcPeeringAuthorizationInput":{ + "type":"structure", + "required":[ + "GameLiftAwsAccountId", + "PeerVpcId" + ], + "members":{ + "GameLiftAwsAccountId":{ + "shape":"NonZeroAndMaxString", + "documentation":"

Unique identifier for the AWS account that you use to manage your Amazon GameLift fleet. You can find your Account ID in the AWS Management Console under account settings.

" + }, + "PeerVpcId":{ + "shape":"NonZeroAndMaxString", + "documentation":"

Unique identifier for a VPC with resources to be accessed by your Amazon GameLift fleet. The VPC must be in the same region where your fleet is deployed. To get VPC information, including IDs, use the Virtual Private Cloud service tools, including the VPC Dashboard in the AWS Management Console.

" + } + }, + "documentation":"

Represents the input for a request action.

" + }, + "DeleteVpcPeeringAuthorizationOutput":{ + "type":"structure", + "members":{ + } + }, + "DeleteVpcPeeringConnectionInput":{ + "type":"structure", + "required":[ + "FleetId", + "VpcPeeringConnectionId" + ], + "members":{ + "FleetId":{ + "shape":"FleetId", + "documentation":"

Unique identifier for a fleet. This value must match the fleet ID referenced in the VPC peering connection record.

" + }, + "VpcPeeringConnectionId":{ + "shape":"NonZeroAndMaxString", + "documentation":"

Unique identifier for a VPC peering connection. This value is included in the VpcPeeringConnection object, which can be retrieved by calling DescribeVpcPeeringConnections.

" + } + }, + "documentation":"

Represents the input for a request action.

" + }, + "DeleteVpcPeeringConnectionOutput":{ + "type":"structure", + "members":{ + } + }, "DescribeAliasInput":{ "type":"structure", "required":["AliasId"], @@ -1353,7 +1914,7 @@ }, "NextToken":{ "shape":"NonZeroAndMaxString", - "documentation":"

Token that indicates the start of the next sequential page of results. Use the token that is returned with a previous call to this action. To specify the start of the result set, do not specify a value. This parameter is ignored when the request specifies one or a list of fleet IDs.

" + "documentation":"

Token that indicates the start of the next sequential page of results. Use the token that is returned with a previous call to this action. To start at the beginning of the result set, do not specify a value. This parameter is ignored when the request specifies one or a list of fleet IDs.

" } }, "documentation":"

Represents the input for a request action.

" @@ -1385,7 +1946,7 @@ }, "NextToken":{ "shape":"NonZeroAndMaxString", - "documentation":"

Token that indicates the start of the next sequential page of results. Use the token that is returned with a previous call to this action. To specify the start of the result set, do not specify a value. This parameter is ignored when the request specifies one or a list of fleet IDs.

" + "documentation":"

Token that indicates the start of the next sequential page of results. Use the token that is returned with a previous call to this action. To start at the beginning of the result set, do not specify a value. This parameter is ignored when the request specifies one or a list of fleet IDs.

" } }, "documentation":"

Represents the input for a request action.

" @@ -1426,7 +1987,7 @@ }, "NextToken":{ "shape":"NonZeroAndMaxString", - "documentation":"

Token that indicates the start of the next sequential page of results. Use the token that is returned with a previous call to this action. To specify the start of the result set, do not specify a value.

" + "documentation":"

Token that indicates the start of the next sequential page of results. Use the token that is returned with a previous call to this action. To start at the beginning of the result set, do not specify a value.

" } }, "documentation":"

Represents the input for a request action.

" @@ -1479,7 +2040,7 @@ }, "NextToken":{ "shape":"NonZeroAndMaxString", - "documentation":"

Token that indicates the start of the next sequential page of results. Use the token that is returned with a previous call to this action. To specify the start of the result set, do not specify a value. This parameter is ignored when the request specifies one or a list of fleet IDs.

" + "documentation":"

Token that indicates the start of the next sequential page of results. Use the token that is returned with a previous call to this action. To start at the beginning of the result set, do not specify a value. This parameter is ignored when the request specifies one or a list of fleet IDs.

" } }, "documentation":"

Represents the input for a request action.

" @@ -1515,7 +2076,7 @@ }, "StatusFilter":{ "shape":"NonZeroAndMaxString", - "documentation":"

Game session status to filter results on. Possible game session statuses include ACTIVE, TERMINATED, ACTIVATING and TERMINATING (the last two are transitory).

" + "documentation":"

Game session status to filter results on. Possible game session statuses include ACTIVE, TERMINATED, ACTIVATING and TERMINATING (the last two are transitory).

" }, "Limit":{ "shape":"PositiveInteger", @@ -1523,7 +2084,7 @@ }, "NextToken":{ "shape":"NonZeroAndMaxString", - "documentation":"

Token that indicates the start of the next sequential page of results. Use the token that is returned with a previous call to this action. To specify the start of the result set, do not specify a value.

" + "documentation":"

Token that indicates the start of the next sequential page of results. Use the token that is returned with a previous call to this action. To start at the beginning of the result set, do not specify a value.

" } }, "documentation":"

Represents the input for a request action.

" @@ -1576,7 +2137,7 @@ }, "NextToken":{ "shape":"NonZeroAndMaxString", - "documentation":"

Token that indicates the start of the next sequential page of results. Use the token that is returned with a previous call to this action. To specify the start of the result set, do not specify a value.

" + "documentation":"

Token that indicates the start of the next sequential page of results. Use the token that is returned with a previous call to this action. To start at the beginning of the result set, do not specify a value.

" } }, "documentation":"

Represents the input for a request action.

" @@ -1620,7 +2181,7 @@ }, "NextToken":{ "shape":"NonZeroAndMaxString", - "documentation":"

Token that indicates the start of the next sequential page of results. Use the token that is returned with a previous call to this action. To specify the start of the result set, do not specify a value.

" + "documentation":"

Token that indicates the start of the next sequential page of results. Use the token that is returned with a previous call to this action. To start at the beginning of the result set, do not specify a value.

" } }, "documentation":"

Represents the input for a request action.

" @@ -1657,7 +2218,7 @@ }, "NextToken":{ "shape":"NonZeroAndMaxString", - "documentation":"

Token that indicates the start of the next sequential page of results. Use the token that is returned with a previous call to this action. To specify the start of the result set, do not specify a value.

" + "documentation":"

Token that indicates the start of the next sequential page of results. Use the token that is returned with a previous call to this action. To start at the beginning of the result set, do not specify a value.

" } }, "documentation":"

Represents the input for a request action.

" @@ -1676,6 +2237,96 @@ }, "documentation":"

Represents the returned data in response to a request action.

" }, + "DescribeMatchmakingConfigurationsInput":{ + "type":"structure", + "members":{ + "Names":{ + "shape":"MatchmakingIdList", + "documentation":"

Unique identifiers for one or more matchmaking configurations to retrieve. To request all existing configurations, leave this parameter empty.

" + }, + "RuleSetName":{ + "shape":"MatchmakingIdStringModel", + "documentation":"

Unique identifier for a matchmaking rule set. Use this parameter to retrieve all matchmaking configurations that use this rule set.

" + }, + "Limit":{ + "shape":"PositiveInteger", + "documentation":"

Maximum number of results to return. Use this parameter with NextToken to get results as a set of sequential pages. This parameter is limited to 10.

" + }, + "NextToken":{ + "shape":"NonZeroAndMaxString", + "documentation":"

Token that indicates the start of the next sequential page of results. Use the token that is returned with a previous call to this action. To start at the beginning of the result set, do not specify a value.

" + } + }, + "documentation":"

Represents the input for a request action.
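Limit and NextToken follow the usual pagination pattern; a boto3 sketch that walks all pages of matchmaking configurations:

import boto3

gamelift = boto3.client("gamelift")

configurations = []
next_token = None
while True:
    kwargs = {"Limit": 10}  # this action caps Limit at 10
    if next_token:
        kwargs["NextToken"] = next_token
    page = gamelift.describe_matchmaking_configurations(**kwargs)
    configurations.extend(page["Configurations"])
    next_token = page.get("NextToken")
    if not next_token:  # no token returned means the end of the result set
        break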

" + }, + "DescribeMatchmakingConfigurationsOutput":{ + "type":"structure", + "members":{ + "Configurations":{ + "shape":"MatchmakingConfigurationList", + "documentation":"

Collection of requested matchmaking configuration objects.

" + }, + "NextToken":{ + "shape":"NonZeroAndMaxString", + "documentation":"

Token that indicates where to resume retrieving results on the next call to this action. If no token is returned, these results represent the end of the list.

" + } + }, + "documentation":"

Represents the returned data in response to a request action.

" + }, + "DescribeMatchmakingInput":{ + "type":"structure", + "required":["TicketIds"], + "members":{ + "TicketIds":{ + "shape":"MatchmakingIdList", + "documentation":"

Unique identifiers for one or more matchmaking tickets to retrieve.

" + } + }, + "documentation":"

Represents the input for a request action.

" + }, + "DescribeMatchmakingOutput":{ + "type":"structure", + "members":{ + "TicketList":{ + "shape":"MatchmakingTicketList", + "documentation":"

Collection of existing matchmaking ticket objects matching the request.

" + } + }, + "documentation":"

Represents the returned data in response to a request action.
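As a sketch of how these tickets are typically consumed: start a matchmaking request, then poll the ticket until it reaches a terminal status and read the connection information. The ticket ID, configuration name, and player ID are placeholders; production code would usually rely on the configuration's SNS notification target rather than polling.

import time
import boto3

gamelift = boto3.client("gamelift")

ticket_id = "ticket-0001"  # hypothetical ticket ID supplied by the requester
gamelift.start_matchmaking(
    TicketId=ticket_id,
    ConfigurationName="my-matchmaker",
    Players=[{"PlayerId": "player-1"}],
)

while True:
    ticket = gamelift.describe_matchmaking(TicketIds=[ticket_id])["TicketList"][0]
    if ticket["Status"] in ("COMPLETED", "FAILED", "CANCELLED", "TIMED_OUT"):
        break
    time.sleep(10)

if ticket["Status"] == "COMPLETED":
    info = ticket["GameSessionConnectionInfo"]
    print(info["IpAddress"], info["Port"], len(info["MatchedPlayerSessions"]))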

" + }, + "DescribeMatchmakingRuleSetsInput":{ + "type":"structure", + "members":{ + "Names":{ + "shape":"MatchmakingRuleSetNameList", + "documentation":"

Unique identifiers for one or more matchmaking rule sets to retrieve.

" + }, + "Limit":{ + "shape":"RuleSetLimit", + "documentation":"

Maximum number of results to return. Use this parameter with NextToken to get results as a set of sequential pages.

" + }, + "NextToken":{ + "shape":"NonZeroAndMaxString", + "documentation":"

Token that indicates the start of the next sequential page of results. Use the token that is returned with a previous call to this action. To start at the beginning of the result set, do not specify a value.

" + } + }, + "documentation":"

Represents the input for a request action.

" + }, + "DescribeMatchmakingRuleSetsOutput":{ + "type":"structure", + "required":["RuleSets"], + "members":{ + "RuleSets":{ + "shape":"MatchmakingRuleSetList", + "documentation":"

Collection of requested matchmaking rule set objects.

" + }, + "NextToken":{ + "shape":"NonZeroAndMaxString", + "documentation":"

Token that indicates where to resume retrieving results on the next call to this action. If no token is returned, these results represent the end of the list.

" + } + }, + "documentation":"

Represents the returned data in response to a request action.

" + }, "DescribePlayerSessionsInput":{ "type":"structure", "members":{ @@ -1693,7 +2344,7 @@ }, "PlayerSessionStatusFilter":{ "shape":"NonZeroAndMaxString", - "documentation":"

Player session status to filter results on.

Possible player session statuses include the following:

" + "documentation":"

Player session status to filter results on.

Possible player session statuses include the following:

" }, "Limit":{ "shape":"PositiveInteger", @@ -1701,7 +2352,7 @@ }, "NextToken":{ "shape":"NonZeroAndMaxString", - "documentation":"

Token that indicates the start of the next sequential page of results. Use the token that is returned with a previous call to this action. To specify the start of the result set, do not specify a value. If a player session ID is specified, this parameter is ignored.

" + "documentation":"

Token that indicates the start of the next sequential page of results. Use the token that is returned with a previous call to this action. To start at the beginning of the result set, do not specify a value. If a player session ID is specified, this parameter is ignored.

" } }, "documentation":"

Represents the input for a request action.

" @@ -1726,7 +2377,7 @@ "members":{ "FleetId":{ "shape":"FleetId", - "documentation":"

Unique identifier for a fleet to get the runtime configuration for.

" + "documentation":"

Unique identifier for a fleet to get the run-time configuration for.

" } }, "documentation":"

Represents the input for a request action.

" @@ -1751,7 +2402,7 @@ }, "StatusFilter":{ "shape":"ScalingStatusType", - "documentation":"

Scaling policy status to filter results on. A scaling policy is only in force when in an ACTIVE status.

" + "documentation":"

Scaling policy status to filter results on. A scaling policy is only in force when in an ACTIVE status.

" }, "Limit":{ "shape":"PositiveInteger", @@ -1759,7 +2410,7 @@ }, "NextToken":{ "shape":"NonZeroAndMaxString", - "documentation":"

Token that indicates the start of the next sequential page of results. Use the token that is returned with a previous call to this action. To specify the start of the result set, do not specify a value.

" + "documentation":"

Token that indicates the start of the next sequential page of results. Use the token that is returned with a previous call to this action. To start at the beginning of the result set, do not specify a value.

" } }, "documentation":"

Represents the input for a request action.

" @@ -1778,6 +2429,40 @@ }, "documentation":"

Represents the returned data in response to a request action.

" }, + "DescribeVpcPeeringAuthorizationsInput":{ + "type":"structure", + "members":{ + } + }, + "DescribeVpcPeeringAuthorizationsOutput":{ + "type":"structure", + "members":{ + "VpcPeeringAuthorizations":{ + "shape":"VpcPeeringAuthorizationList", + "documentation":"

Collection of objects that describe all valid VPC peering authorizations for the current AWS account.

" + } + } + }, + "DescribeVpcPeeringConnectionsInput":{ + "type":"structure", + "members":{ + "FleetId":{ + "shape":"FleetId", + "documentation":"

Unique identifier for a fleet.

" + } + }, + "documentation":"

Represents the input for a request action.

" + }, + "DescribeVpcPeeringConnectionsOutput":{ + "type":"structure", + "members":{ + "VpcPeeringConnections":{ + "shape":"VpcPeeringConnectionList", + "documentation":"

Collection of VPC peering connection records that match the request.

" + } + }, + "documentation":"

Represents the returned data in response to a request action.

" + }, "DesiredPlayerSession":{ "type":"structure", "members":{ @@ -1797,6 +2482,7 @@ "member":{"shape":"DesiredPlayerSession"} }, "Double":{"type":"double"}, + "DoubleObject":{"type":"double"}, "EC2InstanceCounts":{ "type":"structure", "members":{ @@ -1829,7 +2515,7 @@ "documentation":"

Number of instances in the fleet that are no longer active but haven't yet been terminated.

" } }, - "documentation":"

Current status of fleet capacity. The number of active instances should match or be in the process of matching the number of desired instances. Pending and terminating counts are non-zero only if fleet capacity is adjusting to an UpdateFleetCapacity request, or if access to resources is temporarily affected.

" + "documentation":"

Current status of fleet capacity. The number of active instances should match or be in the process of matching the number of desired instances. Pending and terminating counts are non-zero only if fleet capacity is adjusting to an UpdateFleetCapacity request, or if access to resources is temporarily affected.

Fleet-related operations include:

" }, "EC2InstanceLimit":{ "type":"structure", @@ -1875,6 +2561,12 @@ "r3.2xlarge", "r3.4xlarge", "r3.8xlarge", + "r4.large", + "r4.xlarge", + "r4.2xlarge", + "r4.4xlarge", + "r4.8xlarge", + "r4.16xlarge", "m3.medium", "m3.large", "m3.xlarge", @@ -1899,7 +2591,7 @@ }, "EventCode":{ "shape":"EventCode", - "documentation":"

Type of event being logged.

" + "documentation":"

Type of event being logged. The following events are currently in use:

General events:

Fleet creation events:

VPC peering events:

Other fleet events:

" }, "Message":{ "shape":"NonEmptyString", @@ -1908,9 +2600,13 @@ "EventTime":{ "shape":"Timestamp", "documentation":"

Time stamp indicating when this event occurred. Format is a number expressed in Unix time as milliseconds (for example \"1469498468.057\").

" + }, + "PreSignedLogUrl":{ + "shape":"NonZeroAndMaxString", + "documentation":"

Location of stored logs with additional detail that is related to the event. This is useful for debugging issues. The URL is valid for 15 minutes. You can also access fleet creation logs through the Amazon GameLift console.

" } }, - "documentation":"

Log entry describing an event involving Amazon GameLift resources (such as a fleet). In addition to tracking activity, event codes and messages can provide additional information for troubleshooting and debugging problems.

" + "documentation":"

Log entry describing an event that involves Amazon GameLift resources (such as a fleet). In addition to tracking activity, event codes and messages can provide additional information for troubleshooting and debugging problems.

" }, "EventCode":{ "type":"string", @@ -1940,7 +2636,13 @@ "SERVER_PROCESS_TERMINATED_UNHEALTHY", "SERVER_PROCESS_FORCE_TERMINATED", "SERVER_PROCESS_PROCESS_EXIT_TIMEOUT", - "GAME_SESSION_ACTIVATION_TIMEOUT" + "GAME_SESSION_ACTIVATION_TIMEOUT", + "FLEET_CREATION_EXTRACTING_BUILD", + "FLEET_CREATION_RUNNING_INSTALLER", + "FLEET_CREATION_VALIDATING_RUNTIME_CONFIG", + "FLEET_VPC_PEERING_SUCCEEDED", + "FLEET_VPC_PEERING_FAILED", + "FLEET_VPC_PEERING_DELETED" ] }, "EventList":{ @@ -1976,7 +2678,7 @@ }, "Status":{ "shape":"FleetStatus", - "documentation":"

Current status of the fleet.

Possible fleet statuses include the following:

" + "documentation":"

Current status of the fleet.

Possible fleet statuses include the following:

" }, "BuildId":{ "shape":"BuildId", @@ -1984,19 +2686,19 @@ }, "ServerLaunchPath":{ "shape":"NonZeroAndMaxString", - "documentation":"

Path to a game server executable in the fleet's build, specified for fleets created prior to 2016-08-04 (or AWS SDK v. 0.12.16). Server launch paths for fleets created after this date are specified in the fleet's RuntimeConfiguration.

" + "documentation":"

Path to a game server executable in the fleet's build, specified for fleets created before 2016-08-04 (or AWS SDK v. 0.12.16). Server launch paths for fleets created after this date are specified in the fleet's RuntimeConfiguration.

" }, "ServerLaunchParameters":{ "shape":"NonZeroAndMaxString", - "documentation":"

Game server launch parameters specified for fleets created prior to 2016-08-04 (or AWS SDK v. 0.12.16). Server launch parameters for fleets created after this date are specified in the fleet's RuntimeConfiguration.

" + "documentation":"

Game server launch parameters specified for fleets created before 2016-08-04 (or AWS SDK v. 0.12.16). Server launch parameters for fleets created after this date are specified in the fleet's RuntimeConfiguration.

" }, "LogPaths":{ "shape":"StringList", - "documentation":"

Location of default log files. When a server process is shut down, Amazon GameLift captures and stores any log files in this location. These logs are in addition to game session logs; see more on game session logs in the Amazon GameLift Developer Guide. If no default log path for a fleet is specified, Amazon GameLift will automatically upload logs that are stored on each instance at C:\\game\\logs (for Windows) or /local/game/logs (for Linux). Use the Amazon GameLift console to access stored logs.

" + "documentation":"

Location of default log files. When a server process is shut down, Amazon GameLift captures and stores any log files in this location. These logs are in addition to game session logs; see more on game session logs in the Amazon GameLift Developer Guide. If no default log path for a fleet is specified, Amazon GameLift automatically uploads logs that are stored on each instance at C:\\game\\logs (for Windows) or /local/game/logs (for Linux). Use the Amazon GameLift console to access stored logs.

" }, "NewGameSessionProtectionPolicy":{ "shape":"ProtectionPolicy", - "documentation":"

Type of game session protection to set for all new instances started in the fleet.

" + "documentation":"

Type of game session protection to set for all new instances started in the fleet.

" }, "OperatingSystem":{ "shape":"OperatingSystem", @@ -2008,10 +2710,10 @@ }, "MetricGroups":{ "shape":"MetricGroupList", - "documentation":"

Names of metric groups that this fleet is included in. In Amazon CloudWatch, you can view metrics for an individual fleet or aggregated metrics for a fleets that are in a fleet metric group. Currently, a fleet can be included in only one metric group at a time.

" + "documentation":"

Names of metric groups that this fleet is included in. In Amazon CloudWatch, you can view metrics for an individual fleet or aggregated metrics for fleets that are in a fleet metric group. A fleet can be included in only one metric group at a time.

" } }, - "documentation":"

General properties describing a fleet.

" + "documentation":"

General properties describing a fleet.

Fleet-related operations include:

" }, "FleetAttributesList":{ "type":"list", @@ -2033,7 +2735,7 @@ "documentation":"

Current status of fleet capacity.

" } }, - "documentation":"

Information about the fleet's capacity. Fleet capacity is measured in EC2 instances. By default, new fleets have a capacity of one instance, but can be updated as needed. The maximum number of instances for a fleet is determined by the fleet's instance type.

" + "documentation":"

Information about the fleet's capacity. Fleet capacity is measured in EC2 instances. By default, new fleets have a capacity of one instance, but can be updated as needed. The maximum number of instances for a fleet is determined by the fleet's instance type.

Fleet-related operations include:

" }, "FleetCapacityExceededException":{ "type":"structure", @@ -2094,7 +2796,7 @@ "documentation":"

Maximum players allowed across all game sessions currently being hosted on all instances in the fleet.

" } }, - "documentation":"

Current status of fleet utilization, including the number of game and player sessions being hosted.

" + "documentation":"

Current status of fleet utilization, including the number of game and player sessions being hosted.

Fleet-related operations include:

" }, "FleetUtilizationList":{ "type":"list", @@ -2111,14 +2813,14 @@ "members":{ "Key":{ "shape":"GamePropertyKey", - "documentation":"

TBD

" + "documentation":"

Game property identifier.

" }, "Value":{ "shape":"GamePropertyValue", - "documentation":"

TBD

" + "documentation":"

Game property value.

" } }, - "documentation":"

Set of key-value pairs containing information a server process requires to set up a game session. This object allows you to pass in any set of data needed for your game. For more information, see the Amazon GameLift Developer Guide.

" + "documentation":"

Set of key-value pairs that contain information about a game session. When included in a game session request, these properties communicate details to be used when setting up the new game session, such as to specify a game mode, level, or map. Game properties are passed to the game server process when initiating a new game session; the server process uses the properties as appropriate. For more information, see the Amazon GameLift Developer Guide.
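For example, the game mode and map cases mentioned above would each be one key-value entry, shown here as the Python structure that a game session or matchmaking request would carry (keys and values are placeholders):

# Hypothetical GameProperty entries for a game session request.
game_properties = [
    {"Key": "gameMode", "Value": "brawl"},
    {"Key": "map", "Value": "winter"},
]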

" }, "GamePropertyKey":{ "type":"string", @@ -2138,7 +2840,7 @@ "members":{ "GameSessionId":{ "shape":"NonZeroAndMaxString", - "documentation":"

Unique identifier for the game session. A game session ID has the following format: arn:aws:gamelift:<region>::gamesession/<fleet ID>/<custom ID string or idempotency token>.

" + "documentation":"

Unique identifier for the game session. A game session ARN has the following format: arn:aws:gamelift:<region>::gamesession/<fleet ID>/<custom ID string or idempotency token>.

" }, "Name":{ "shape":"NonZeroAndMaxString", @@ -2146,7 +2848,7 @@ }, "FleetId":{ "shape":"FleetId", - "documentation":"

Unique identifier for a fleet the game session is running on.

" + "documentation":"

Unique identifier for a fleet that the game session is running on.

" }, "CreationTime":{ "shape":"Timestamp", @@ -2170,7 +2872,7 @@ }, "GameProperties":{ "shape":"GamePropertyList", - "documentation":"

Set of developer-defined properties for a game session. These properties are passed to the server process hosting the game session.

" + "documentation":"

Set of developer-defined properties for a game session, formatted as a set of type:value pairs. These properties are included in the GameSession object, which is passed to the game server with a request to start a new game session (see Start a Game Session).

" }, "IpAddress":{ "shape":"IpAddress", @@ -2187,15 +2889,46 @@ "CreatorId":{ "shape":"NonZeroAndMaxString", "documentation":"

Unique identifier for a player. This ID is used to enforce a resource protection policy (if one exists), that limits the number of game sessions a player can create.

" + }, + "GameSessionData":{ + "shape":"GameSessionData", + "documentation":"

Set of developer-defined game session properties, formatted as a single string value. This data is included in the GameSession object, which is passed to the game server with a request to start a new game session (see Start a Game Session).

" } }, - "documentation":"

Properties describing a game session.

" + "documentation":"

Properties describing a game session.

A game session in ACTIVE status can host players. When a game session ends, its status is set to TERMINATED.

Once the session ends, the game session object is retained for 30 days. This means you can reuse idempotency token values after this time. Game session logs are retained for 14 days.

Game-session-related operations include:

" }, "GameSessionActivationTimeoutSeconds":{ "type":"integer", "max":600, "min":1 }, + "GameSessionConnectionInfo":{ + "type":"structure", + "members":{ + "GameSessionArn":{ + "shape":"ArnStringModel", + "documentation":"

Amazon Resource Name (ARN) that is assigned to a game session and uniquely identifies it.

" + }, + "IpAddress":{ + "shape":"StringModel", + "documentation":"

IP address of the game session. To connect to an Amazon GameLift game server, an app needs both the IP address and port number.

" + }, + "Port":{ + "shape":"PositiveInteger", + "documentation":"

Port number for the game session. To connect to an Amazon GameLift game server, an app needs both the IP address and port number.

" + }, + "MatchedPlayerSessions":{ + "shape":"MatchedPlayerSessionList", + "documentation":"

Collection of player session IDs, one for each player ID that was included in the original matchmaking request.

" + } + }, + "documentation":"

Connection information for the new game session that is created for a successful match (initiated with StartMatchmaking). Once a match is made, the FlexMatch engine places it and creates a new game session for it. This information, including the game session endpoint and player sessions for each player in the original matchmaking request, is added to the MatchmakingTicket, which can be retrieved by calling DescribeMatchmaking.

" + }, + "GameSessionData":{ + "type":"string", + "max":4096, + "min":1 + }, "GameSessionDetail":{ "type":"structure", "members":{ @@ -2205,7 +2938,7 @@ }, "ProtectionPolicy":{ "shape":"ProtectionPolicy", - "documentation":"

Current status of protection for the game session.

" + "documentation":"

Current status of protection for the game session.

" } }, "documentation":"

A game session's properties plus the protection policy currently in force.

" @@ -2235,15 +2968,15 @@ }, "GameSessionQueueName":{ "shape":"GameSessionQueueName", - "documentation":"

Descriptive label that is associated with queue. Queue names must be unique within each region.

" + "documentation":"

Descriptive label that is associated with a game session queue. Queue names must be unique within each region.

" }, "Status":{ "shape":"GameSessionPlacementState", - "documentation":"

Current status of the game session placement request.

" + "documentation":"

Current status of the game session placement request.

" }, "GameProperties":{ "shape":"GamePropertyList", - "documentation":"

Set of developer-defined properties for a game session. These properties are passed to the server process hosting the game session.

" + "documentation":"

Set of developer-defined properties for a game session, formatted as a set of type:value pairs. These properties are included in the GameSession object, which is passed to the game server with a request to start a new game session (see Start a Game Session).

" }, "MaximumPlayerSessionCount":{ "shape":"WholeNumber", @@ -2255,19 +2988,19 @@ }, "GameSessionId":{ "shape":"NonZeroAndMaxString", - "documentation":"

Unique identifier for the game session. This value is set once the new game session is placed (placement status is Fulfilled).

" + "documentation":"

Unique identifier for the game session. This value is set once the new game session is placed (placement status is FULFILLED).

" }, "GameSessionArn":{ "shape":"NonZeroAndMaxString", - "documentation":"

Identifier for the game session created by this placement request. This value is set once the new game session is placed (placement status is Fulfilled). This identifier is unique across all regions. You can use this value as a GameSessionId value as needed.

" + "documentation":"

Identifier for the game session created by this placement request. This value is set once the new game session is placed (placement status is FULFILLED). This identifier is unique across all regions. You can use this value as a GameSessionId value as needed.

" }, "GameSessionRegion":{ "shape":"NonZeroAndMaxString", - "documentation":"

Name of the region where the game session created by this placement request is running. This value is set once the new game session is placed (placement status is Fulfilled).

" + "documentation":"

Name of the region where the game session created by this placement request is running. This value is set once the new game session is placed (placement status is FULFILLED).

" }, "PlayerLatencies":{ "shape":"PlayerLatencyList", - "documentation":"

Set of values, expressed in milliseconds, indicating the amount of latency that players are experiencing when connected to AWS regions.

" + "documentation":"

Set of values, expressed in milliseconds, indicating the amount of latency that a player experiences when connected to AWS regions.

" }, "StartTime":{ "shape":"Timestamp", @@ -2279,15 +3012,19 @@ }, "IpAddress":{ "shape":"IpAddress", - "documentation":"

IP address of the game session. To connect to a Amazon GameLift game server, an app needs both the IP address and port number. This value is set once the new game session is placed (placement status is Fulfilled).

" + "documentation":"

IP address of the game session. To connect to an Amazon GameLift game server, an app needs both the IP address and port number. This value is set once the new game session is placed (placement status is FULFILLED).

" }, "Port":{ "shape":"PortNumber", - "documentation":"

Port number for the game session. To connect to a Amazon GameLift game server, an app needs both the IP address and port number. This value is set once the new game session is placed (placement status is Fulfilled).

" + "documentation":"

Port number for the game session. To connect to an Amazon GameLift game server, an app needs both the IP address and port number. This value is set once the new game session is placed (placement status is FULFILLED).

" }, "PlacedPlayerSessions":{ "shape":"PlacedPlayerSessionList", - "documentation":"

Collection of information on player sessions created in response to the game session placement request. These player sessions are created only once a new game session is successfully placed (placement status is Fulfilled). This information includes the player ID (as provided in the placement request) and the corresponding player session ID. Retrieve full player sessions by calling DescribePlayerSessions with the player session ID.

" + "documentation":"

Collection of information on player sessions created in response to the game session placement request. These player sessions are created only once a new game session is successfully placed (placement status is FULFILLED). This information includes the player ID (as provided in the placement request) and the corresponding player session ID. Retrieve full player sessions by calling DescribePlayerSessions with the player session ID.

" + }, + "GameSessionData":{ + "shape":"GameSessionData", + "documentation":"

Set of developer-defined game session properties, formatted as a single string value. This data is included in the GameSession object, which is passed to the game server with a request to start a new game session (see Start a Game Session).

" } }, "documentation":"

Object that describes a StartGameSessionPlacement request. This object includes the full details of the original request plus the current status and start/end time stamps.

Game session placement-related operations include:

" @@ -2306,7 +3043,7 @@ "members":{ "Name":{ "shape":"GameSessionQueueName", - "documentation":"

Descriptive label that is associated with queue. Queue names must be unique within each region.

" + "documentation":"

Descriptive label that is associated with a game session queue. Queue names must be unique within each region.

" }, "GameSessionQueueArn":{ "shape":"ArnStringModel", @@ -2314,7 +3051,7 @@ }, "TimeoutInSeconds":{ "shape":"WholeNumber", - "documentation":"

Maximum time, in seconds, that a new game session placement request remains in the queue. When a request exceeds this time, the game session placement changes to a TIMED_OUT status.

" + "documentation":"

Maximum time, in seconds, that a new game session placement request remains in the queue. When a request exceeds this time, the game session placement changes to a TIMED_OUT status.

" }, "PlayerLatencyPolicies":{ "shape":"PlayerLatencyPolicyList", @@ -2325,7 +3062,7 @@ "documentation":"

List of fleets that can be used to fulfill game session placement requests in the queue. Fleets are identified by either a fleet ARN or a fleet alias ARN. Destinations are listed in default preference order.

" } }, - "documentation":"

Configuration of a queue that is used to process game session placement requests. The queue configuration identifies several game features:

Queue-related operations include the following:

" + "documentation":"

Configuration of a queue that is used to process game session placement requests. The queue configuration identifies several game features:

Queue-related operations include:

" }, "GameSessionQueueDestination":{ "type":"structure", @@ -2335,7 +3072,7 @@ "documentation":"

Amazon Resource Name (ARN) assigned to fleet or fleet alias. ARNs, which include a fleet ID or alias ID and a region name, provide a unique identifier across all regions.

" } }, - "documentation":"

Fleet designated in a game session queue. Requests for new game sessions in the queue are fulfilled by starting a new game session on any destination configured for a queue.

" + "documentation":"

Fleet designated in a game session queue. Requests for new game sessions in the queue are fulfilled by starting a new game session on any destination configured for a queue.

Queue-related operations include:

" }, "GameSessionQueueDestinationList":{ "type":"list", @@ -2453,14 +3190,14 @@ }, "Status":{ "shape":"InstanceStatus", - "documentation":"

Current status of the instance. Possible statuses include the following:

" + "documentation":"

Current status of the instance. Possible statuses include the following:

" }, "CreationTime":{ "shape":"Timestamp", "documentation":"

Time stamp indicating when this data object was created. Format is a number expressed in Unix time as milliseconds (for example \"1469498468.057\").

" } }, - "documentation":"

Properties that describe an instance of a virtual computing resource that hosts one or more game servers. A fleet contains zero or more instances.

" + "documentation":"

Properties that describe an instance of a virtual computing resource that hosts one or more game servers. A fleet may contain zero or more instances.

" }, "InstanceAccess":{ "type":"structure", @@ -2594,6 +3331,11 @@ "UDP" ] }, + "LatencyMap":{ + "type":"map", + "key":{"shape":"NonEmptyString"}, + "value":{"shape":"PositiveInteger"} + }, "LimitExceededException":{ "type":"structure", "members":{ @@ -2607,7 +3349,7 @@ "members":{ "RoutingStrategyType":{ "shape":"RoutingStrategyType", - "documentation":"

Type of routing to filter results on. Use this parameter to retrieve only aliases of a certain type. To retrieve all aliases, leave this parameter empty.

Possible routing types include the following:

" + "documentation":"

Type of routing to filter results on. Use this parameter to retrieve only aliases of a certain type. To retrieve all aliases, leave this parameter empty.

Possible routing types include the following:

" }, "Name":{ "shape":"NonEmptyString", @@ -2619,7 +3361,7 @@ }, "NextToken":{ "shape":"NonEmptyString", - "documentation":"

Token that indicates the start of the next sequential page of results. Use the token that is returned with a previous call to this action. To specify the start of the result set, do not specify a value.

" + "documentation":"

Token that indicates the start of the next sequential page of results. Use the token that is returned with a previous call to this action. To start at the beginning of the result set, do not specify a value.

" } }, "documentation":"

Represents the input for a request action.

" @@ -2643,7 +3385,7 @@ "members":{ "Status":{ "shape":"BuildStatus", - "documentation":"

Build status to filter results by. To retrieve all builds, leave this parameter empty.

Possible build statuses include the following:

" + "documentation":"

Build status to filter results by. To retrieve all builds, leave this parameter empty.

Possible build statuses include the following:

" }, "Limit":{ "shape":"PositiveInteger", @@ -2651,7 +3393,7 @@ }, "NextToken":{ "shape":"NonEmptyString", - "documentation":"

Token that indicates the start of the next sequential page of results. Use the token that is returned with a previous call to this action. To specify the start of the result set, do not specify a value.

" + "documentation":"

Token that indicates the start of the next sequential page of results. Use the token that is returned with a previous call to this action. To start at the beginning of the result set, do not specify a value.

" } }, "documentation":"

Represents the input for a request action.

" @@ -2683,7 +3425,7 @@ }, "NextToken":{ "shape":"NonZeroAndMaxString", - "documentation":"

Token that indicates the start of the next sequential page of results. Use the token that is returned with a previous call to this action. To specify the start of the result set, do not specify a value.

" + "documentation":"

Token that indicates the start of the next sequential page of results. Use the token that is returned with a previous call to this action. To start at the beginning of the result set, do not specify a value.

" } }, "documentation":"

Represents the input for a request action.

" @@ -2702,6 +3444,202 @@ }, "documentation":"

Represents the returned data in response to a request action.

" }, + "MatchedPlayerSession":{ + "type":"structure", + "members":{ + "PlayerId":{ + "shape":"NonZeroAndMaxString", + "documentation":"

Unique identifier for a player.

" + }, + "PlayerSessionId":{ + "shape":"PlayerSessionId", + "documentation":"

Unique identifier for a player session.

" + } + }, + "documentation":"

Represents a new player session that is created as a result of a successful FlexMatch match. A successful match automatically creates new player sessions for every player ID in the original matchmaking request.

When players connect to the match's game session, they must include both player ID and player session ID in order to claim their assigned player slot.

" + }, + "MatchedPlayerSessionList":{ + "type":"list", + "member":{"shape":"MatchedPlayerSession"} + }, + "MatchmakingAcceptanceTimeoutInteger":{ + "type":"integer", + "max":600, + "min":1 + }, + "MatchmakingConfiguration":{ + "type":"structure", + "members":{ + "Name":{ + "shape":"MatchmakingIdStringModel", + "documentation":"

Unique identifier for a matchmaking configuration. This name is used to identify the configuration associated with a matchmaking request or ticket.

" + }, + "Description":{ + "shape":"NonZeroAndMaxString", + "documentation":"

Descriptive label that is associated with a matchmaking configuration.

" + }, + "GameSessionQueueArns":{ + "shape":"QueueArnsList", + "documentation":"

Amazon Resource Name (ARN) that is assigned to a game session queue and uniquely identifies it. Format is arn:aws:gamelift:<region>::gamesessionqueue/<queue name>. These queues are used when placing game sessions for matches that are created with this matchmaking configuration. Queues can be located in any region.

" + }, + "RequestTimeoutSeconds":{ + "shape":"MatchmakingRequestTimeoutInteger", + "documentation":"

Maximum duration, in seconds, that a matchmaking ticket can remain in process before timing out. Requests that time out can be resubmitted as needed.

" + }, + "AcceptanceTimeoutSeconds":{ + "shape":"MatchmakingAcceptanceTimeoutInteger", + "documentation":"

Length of time (in seconds) to wait for players to accept a proposed match. If any player rejects the match or fails to accept before the timeout, the ticket continues to look for an acceptable match.

" + }, + "AcceptanceRequired":{ + "shape":"Boolean", + "documentation":"

Flag that determines whether or not a match that was created with this configuration must be accepted by the matched players. To require acceptance, set to TRUE.

" + }, + "RuleSetName":{ + "shape":"MatchmakingIdStringModel", + "documentation":"

Unique identifier for a matchmaking rule set to use with this configuration. A matchmaking configuration can only use rule sets that are defined in the same region.

" + }, + "NotificationTarget":{ + "shape":"SnsArnStringModel", + "documentation":"

SNS topic ARN that is set up to receive matchmaking notifications.

" + }, + "AdditionalPlayerCount":{ + "shape":"WholeNumber", + "documentation":"

Number of player slots in a match to keep open for future players. For example, if the configuration's rule set specifies a match for a single 12-person team, and the additional player count is set to 2, only 10 players are selected for the match.

" + }, + "CustomEventData":{ + "shape":"CustomEventData", + "documentation":"

Information to attach to all events related to the matchmaking configuration.

" + }, + "CreationTime":{ + "shape":"Timestamp", + "documentation":"

Time stamp indicating when this data object was created. Format is a number expressed in Unix time as milliseconds (for example \"1469498468.057\").

" + }, + "GameProperties":{ + "shape":"GamePropertyList", + "documentation":"

Set of developer-defined properties for a game session, formatted as a set of type:value pairs. These properties are included in the GameSession object, which is passed to the game server with a request to start a new game session (see Start a Game Session). This information is added to the new GameSession object that is created for a successful match.

" + }, + "GameSessionData":{ + "shape":"GameSessionData", + "documentation":"

Set of developer-defined game session properties, formatted as a single string value. This data is included in the GameSession object, which is passed to the game server with a request to start a new game session (see Start a Game Session). This information is added to the new GameSession object that is created for a successful match.

" + } + }, + "documentation":"

Guidelines for use with FlexMatch to match players into games. All matchmaking requests must specify a matchmaking configuration.

" + }, + "MatchmakingConfigurationList":{ + "type":"list", + "member":{"shape":"MatchmakingConfiguration"} + }, + "MatchmakingConfigurationStatus":{ + "type":"string", + "enum":[ + "CANCELLED", + "COMPLETED", + "FAILED", + "PLACING", + "QUEUED", + "REQUIRES_ACCEPTANCE", + "SEARCHING", + "TIMED_OUT" + ] + }, + "MatchmakingIdList":{ + "type":"list", + "member":{"shape":"MatchmakingIdStringModel"} + }, + "MatchmakingIdStringModel":{ + "type":"string", + "max":128, + "min":1, + "pattern":"[a-zA-Z0-9-\\.]+" + }, + "MatchmakingPlayerIdList":{ + "type":"list", + "member":{"shape":"PlayerIdStringModel"} + }, + "MatchmakingRequestTimeoutInteger":{ + "type":"integer", + "max":43200, + "min":1 + }, + "MatchmakingRuleSet":{ + "type":"structure", + "required":["RuleSetBody"], + "members":{ + "RuleSetName":{ + "shape":"MatchmakingIdStringModel", + "documentation":"

Unique identifier for a matchmaking rule set.

" + }, + "RuleSetBody":{ + "shape":"RuleSetBody", + "documentation":"

Collection of matchmaking rules, formatted as a JSON string. (Note that comments are not allowed in JSON, but most elements support a description field.)

" + }, + "CreationTime":{ + "shape":"Timestamp", + "documentation":"

Time stamp indicating when this data object was created. Format is a number expressed in Unix time as milliseconds (for example \"1469498468.057\").

" + } + }, + "documentation":"

Set of rule statements, used with FlexMatch, that determine how to build a certain kind of player match. Each rule set describes a type of group to be created and defines the parameters for acceptable player matches. Rule sets are used in MatchmakingConfiguration objects.

A rule set may define the following elements for a match. For detailed information and examples showing how to construct a rule set, see Create Matchmaking Rules for Your Game.

" + }, + "MatchmakingRuleSetList":{ + "type":"list", + "member":{"shape":"MatchmakingRuleSet"} + }, + "MatchmakingRuleSetNameList":{ + "type":"list", + "member":{"shape":"MatchmakingIdStringModel"}, + "max":10, + "min":1 + }, + "MatchmakingTicket":{ + "type":"structure", + "members":{ + "TicketId":{ + "shape":"MatchmakingIdStringModel", + "documentation":"

Unique identifier for a matchmaking ticket.

" + }, + "ConfigurationName":{ + "shape":"MatchmakingIdStringModel", + "documentation":"

Name of the MatchmakingConfiguration that is used with this ticket. Matchmaking configurations determine how players are grouped into a match and how a new game session is created for the match.

" + }, + "Status":{ + "shape":"MatchmakingConfigurationStatus", + "documentation":"

Current status of the matchmaking request.

" + }, + "StatusReason":{ + "shape":"StringModel", + "documentation":"

Code to explain the current status. For example, a status reason may indicate when a ticket has returned to SEARCHING status after a proposed match fails to receive player acceptances.

" + }, + "StatusMessage":{ + "shape":"StringModel", + "documentation":"

Additional information about the current status.

" + }, + "StartTime":{ + "shape":"Timestamp", + "documentation":"

Time stamp indicating when this matchmaking request was received. Format is a number expressed in Unix time as milliseconds (for example \"1469498468.057\").

" + }, + "EndTime":{ + "shape":"Timestamp", + "documentation":"

Time stamp indicating when the matchmaking request stopped being processed due to successful completion, timeout, or cancellation. Format is a number expressed in Unix time as milliseconds (for example \"1469498468.057\").

" + }, + "Players":{ + "shape":"PlayerList", + "documentation":"

A set of Player objects, each representing a player to find matches for. Players are identified by a unique player ID and may include latency data for use during matchmaking. If the ticket is in status COMPLETED, the Player objects include the team the players were assigned to in the resulting match.

" + }, + "GameSessionConnectionInfo":{ + "shape":"GameSessionConnectionInfo", + "documentation":"

Identifier and connection information of the game session created for the match. This information is added to the ticket only after the matchmaking request has been successfully completed.

" + }, + "EstimatedWaitTime":{ + "shape":"WholeNumber", + "documentation":"

Average amount of time (in seconds) that players are currently waiting for a match. If there is not enough recent data, this property may be empty.

" + } + }, + "documentation":"

Ticket generated to track the progress of a matchmaking request. Each ticket is uniquely identified by a ticket ID, supplied by the requester, when creating a matchmaking request with StartMatchmaking. Tickets can be retrieved by calling DescribeMatchmaking with the ticket ID.

" + }, + "MatchmakingTicketList":{ + "type":"list", + "member":{"shape":"MatchmakingTicket"} + }, "MaxConcurrentGameSessionActivations":{ "type":"integer", "max":2147483647, @@ -2779,12 +3717,39 @@ "documentation":"

Unique identifier for a player session.

" } }, - "documentation":"

Information about a player session that was created as part of a StartGameSessionPlacement request. This object contains only the player ID and player session ID. To retrieve full details on a player session, call DescribePlayerSessions with the player session ID.

" + "documentation":"

Information about a player session that was created as part of a StartGameSessionPlacement request. This object contains only the player ID and player session ID. To retrieve full details on a player session, call DescribePlayerSessions with the player session ID.

Player-session-related operations include:

" }, "PlacedPlayerSessionList":{ "type":"list", "member":{"shape":"PlacedPlayerSession"} }, + "Player":{ + "type":"structure", + "members":{ + "PlayerId":{ + "shape":"PlayerIdStringModel", + "documentation":"

Unique identifier for a player.

" + }, + "PlayerAttributes":{ + "shape":"PlayerAttributeMap", + "documentation":"

Collection of name:value pairs containing player information for use in matchmaking. Player attribute names need to match playerAttributes names in the rule set being used. Example: \"PlayerAttributes\": {\"skill\": {\"N\": \"23\"}, \"gameMode\": {\"S\": \"deathmatch\"}}.

" + }, + "Team":{ + "shape":"NonZeroAndMaxString", + "documentation":"

Name of the team that the player is assigned to in a match. Team names are defined in a matchmaking rule set.

" + }, + "LatencyInMs":{ + "shape":"LatencyMap", + "documentation":"

Set of values, expressed in milliseconds, indicating the amount of latency that a player experiences when connected to AWS regions. If this property is present, FlexMatch considers placing the match only in regions for which latency is reported.

If a matchmaker has a rule that evaluates player latency, players must report latency in order to be matched. If no latency is reported in this scenario, FlexMatch assumes that no regions are available to the player and the ticket is not matchable.
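
For illustration, latency data is expressed as a map of region name to milliseconds (the regions and values here are hypothetical):

\"LatencyInMs\": {\"us-east-1\": 40, \"us-west-2\": 120}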

" + } + }, + "documentation":"

Represents a player in matchmaking. When starting a matchmaking request, a player has a player ID and attributes, and may also have latency data. Team information is added after a match has been successfully completed.

" + }, + "PlayerAttributeMap":{ + "type":"map", + "key":{"shape":"NonZeroAndMaxString"}, + "value":{"shape":"AttributeValue"} + }, "PlayerData":{ "type":"string", "max":2048, @@ -2801,6 +3766,12 @@ "max":25, "min":1 }, + "PlayerIdStringModel":{ + "type":"string", + "max":128, + "min":1, + "pattern":"[a-zA-Z0-9-\\.]+" + }, "PlayerLatency":{ "type":"structure", "members":{ @@ -2835,12 +3806,16 @@ "documentation":"

The length of time, in seconds, that the policy is enforced while placing a new game session. A null value for this property means that the policy is enforced until the queue times out.

" } }, - "documentation":"

Queue setting that determines the highest latency allowed for individual players when placing a game session. When a latency policy is in force, a game session cannot be placed at any destination in a region where a player is reporting latency higher than the cap. Latency policies are only enforced when the placement request contains player latency information.

Latency policy-related operations include:

" + "documentation":"

Queue setting that determines the highest latency allowed for individual players when placing a game session. When a latency policy is in force, a game session cannot be placed at any destination in a region where a player is reporting latency higher than the cap. Latency policies are only enforced when the placement request contains player latency information.
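
A sketch of a typical policy list, assuming the member names MaximumIndividualPlayerLatencyMilliseconds and PolicyDurationSeconds as used by this object (the caps and duration shown are hypothetical); this example enforces a 100 ms cap for the first 60 seconds and a 200 ms cap thereafter:

\"PlayerLatencyPolicies\": [{\"MaximumIndividualPlayerLatencyMilliseconds\": 100, \"PolicyDurationSeconds\": 60}, {\"MaximumIndividualPlayerLatencyMilliseconds\": 200}]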

Queue-related operations include:

" }, "PlayerLatencyPolicyList":{ "type":"list", "member":{"shape":"PlayerLatencyPolicy"} }, + "PlayerList":{ + "type":"list", + "member":{"shape":"Player"} + }, "PlayerSession":{ "type":"structure", "members":{ @@ -2870,7 +3845,7 @@ }, "Status":{ "shape":"PlayerSessionStatus", - "documentation":"

Current status of the player session.

Possible player session statuses include the following:

" + "documentation":"

Current status of the player session.

Possible player session statuses include the following:

" }, "IpAddress":{ "shape":"IpAddress", @@ -2885,7 +3860,7 @@ "documentation":"

Developer-defined information related to a player. Amazon GameLift does not use this data, so it can be formatted as needed for use in the game.

" } }, - "documentation":"

Properties describing a player session. A player session represents either a player reservation for a game session or actual player activity in a game session. A player session object (including player data) is automatically passed to a game session when the player connects to the game session and is validated.

Player session-related operations include:

" + "documentation":"

Properties describing a player session. Player session objects are created either by creating a player session for a specific game session, or as part of a game session placement. A player session represents either a player reservation for a game session (status RESERVED) or actual player activity in a game session (status ACTIVE). A player session object (including player data) is automatically passed to a game session when the player connects to the game session and is validated.

When a player disconnects, the player session status changes to COMPLETED. Once the session ends, the player session object is retained for 30 days and then removed.

Player-session-related operations include:

" }, "PlayerSessionCreationPolicy":{ "type":"string", @@ -2958,7 +3933,7 @@ }, "ScalingAdjustmentType":{ "shape":"ScalingAdjustmentType", - "documentation":"

Type of adjustment to make to a fleet's instance count (see FleetCapacity):

" + "documentation":"

Type of adjustment to make to a fleet's instance count (see FleetCapacity):

" }, "Threshold":{ "shape":"Double", @@ -2974,7 +3949,7 @@ }, "MetricName":{ "shape":"MetricName", - "documentation":"

Name of the Amazon GameLift-defined metric that is used to trigger an adjustment.

" + "documentation":"

Name of the Amazon GameLift-defined metric that is used to trigger an adjustment.

" } }, "documentation":"

Represents the input for a request action.

" @@ -2989,6 +3964,10 @@ }, "documentation":"

Represents the returned data in response to a request action.

" }, + "QueueArnsList":{ + "type":"list", + "member":{"shape":"ArnStringModel"} + }, "RequestUploadCredentialsInput":{ "type":"structure", "required":["BuildId"], @@ -3054,7 +4033,7 @@ "members":{ "Type":{ "shape":"RoutingStrategyType", - "documentation":"

Type of routing strategy.

Possible routing types include the following:

" + "documentation":"

Type of routing strategy.

Possible routing types include the following:

" }, "FleetId":{ "shape":"FleetId", @@ -3065,7 +4044,7 @@ "documentation":"

Message text to be used with a terminal routing strategy.

" } }, - "documentation":"

Routing configuration for a fleet alias.

" + "documentation":"

Routing configuration for a fleet alias.

Fleet-related operations include:

" }, "RoutingStrategyType":{ "type":"string", @@ -3074,6 +4053,16 @@ "TERMINAL" ] }, + "RuleSetBody":{ + "type":"string", + "max":65535, + "min":1 + }, + "RuleSetLimit":{ + "type":"integer", + "max":10, + "min":1 + }, "RuntimeConfiguration":{ "type":"structure", "members":{ @@ -3083,14 +4072,14 @@ }, "MaxConcurrentGameSessionActivations":{ "shape":"MaxConcurrentGameSessionActivations", - "documentation":"

Maximum number of game sessions with status ACTIVATING to allow on an instance simultaneously. This setting limits the amount of instance resources that can be used for new game activations at any one time.

" + "documentation":"

Maximum number of game sessions with status ACTIVATING to allow on an instance simultaneously. This setting limits the amount of instance resources that can be used for new game activations at any one time.

" }, "GameSessionActivationTimeoutSeconds":{ "shape":"GameSessionActivationTimeoutSeconds", - "documentation":"

Maximum amount of time (in seconds) that a game session can remain in status ACTIVATING. If the game session is not active before the timeout, activation is terminated and the game session status is changed to TERMINATED.

" + "documentation":"

Maximum amount of time (in seconds) that a game session can remain in status ACTIVATING. If the game session is not active before the timeout, activation is terminated and the game session status is changed to TERMINATED.

" } }, - "documentation":"

Collection of server process configurations that describe what processes should be run on each instance in a fleet. An instance can launch and maintain multiple server processes based on the runtime configuration; it regularly checks for an updated runtime configuration and starts new server processes to match the latest version.

The key purpose of a runtime configuration with multiple server process configurations is to be able to run more than one kind of game server in a single fleet. You can include configurations for more than one server executable in order to run two or more different programs to run on the same instance. This option might be useful, for example, to run more than one version of your game server on the same fleet. Another option is to specify configurations for the same server executable but with different launch parameters.

A Amazon GameLift instance is limited to 50 processes running simultaneously. To calculate the total number of processes specified in a runtime configuration, add the values of the ConcurrentExecutions parameter for each ServerProcess object in the runtime configuration.

" + "documentation":"

A collection of server process configurations that describe what processes to run on each instance in a fleet. All fleets must have a run-time configuration. Each instance in the fleet launches the server processes specified in the run-time configuration and launches new ones as existing processes end. Each instance regularly checks for an updated run-time configuration and follows the new instructions.

The run-time configuration enables the instances in a fleet to run multiple processes simultaneously. Potential scenarios are as follows: (1) Run multiple processes of a single game server executable to maximize usage of your hosting resources. (2) Run one or more processes of different build executables, such as your game server executable and a related program, or two or more different versions of a game server. (3) Run multiple processes of a single game server but with different launch parameters, for example to run one process on each instance in debug mode.

An Amazon GameLift instance is limited to 50 processes running simultaneously. A run-time configuration must specify fewer processes than this limit. To calculate the total number of processes specified in a run-time configuration, add the values of the ConcurrentExecutions parameter for each ServerProcess object in the run-time configuration.
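
As a sketch of that calculation (launch paths and counts here are hypothetical), the following server process list specifies 10 + 5 = 15 concurrent processes per instance, well under the limit:

\"ServerProcesses\": [{\"LaunchPath\": \"/local/game/MyServer\", \"ConcurrentExecutions\": 10}, {\"LaunchPath\": \"/local/game/MyServer\", \"Parameters\": \"-debug\", \"ConcurrentExecutions\": 5}]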

Fleet-related operations include:

" }, "S3Location":{ "type":"structure", @@ -3131,7 +4120,7 @@ }, "Status":{ "shape":"ScalingStatusType", - "documentation":"

Current status of the scaling policy. The scaling policy is only in force when in an ACTIVE status.

" + "documentation":"

Current status of the scaling policy. The scaling policy is only in force when in an ACTIVE status.

" }, "ScalingAdjustment":{ "shape":"Integer", @@ -3139,7 +4128,7 @@ }, "ScalingAdjustmentType":{ "shape":"ScalingAdjustmentType", - "documentation":"

Type of adjustment to make to a fleet's instance count (see FleetCapacity):

" + "documentation":"

Type of adjustment to make to a fleet's instance count (see FleetCapacity):

" }, "ComparisonOperator":{ "shape":"ComparisonOperatorType", @@ -3155,10 +4144,10 @@ }, "MetricName":{ "shape":"MetricName", - "documentation":"

Name of the Amazon GameLift-defined metric that is used to trigger an adjustment.

" + "documentation":"

Name of the Amazon GameLift-defined metric that is used to trigger an adjustment.

" } }, - "documentation":"

Rule that controls how a fleet is scaled. Scaling policies are uniquely identified by the combination of name and fleet ID.

" + "documentation":"

Rule that controls how a fleet is scaled. Scaling policies are uniquely identified by the combination of name and fleet ID.

Fleet-related operations include:

" }, "ScalingPolicyList":{ "type":"list", @@ -3201,7 +4190,7 @@ }, "NextToken":{ "shape":"NonZeroAndMaxString", - "documentation":"

Token that indicates the start of the next sequential page of results. Use the token that is returned with a previous call to this action. To specify the start of the result set, do not specify a value.

" + "documentation":"

Token that indicates the start of the next sequential page of results. Use the token that is returned with a previous call to this action. To start at the beginning of the result set, do not specify a value.

" } }, "documentation":"

Represents the input for a request action.

" @@ -3248,6 +4237,12 @@ "max":50, "min":1 }, + "SnsArnStringModel":{ + "type":"string", + "max":300, + "min":0, + "pattern":"[a-zA-Z0-9:_/-]*" + }, "StartGameSessionPlacementInput":{ "type":"structure", "required":[ @@ -3266,7 +4261,7 @@ }, "GameProperties":{ "shape":"GamePropertyList", - "documentation":"

Set of developer-defined properties for a game session. These properties are passed to the server process hosting the game session.

" + "documentation":"

Set of developer-defined properties for a game session, formatted as a set of type:value pairs. These properties are included in the GameSession object, which is passed to the game server with a request to start a new game session (see Start a Game Session).

" }, "MaximumPlayerSessionCount":{ "shape":"WholeNumber", @@ -3278,11 +4273,15 @@ }, "PlayerLatencies":{ "shape":"PlayerLatencyList", - "documentation":"

Set of values, expressed in milliseconds, indicating the amount of latency that players are experiencing when connected to AWS regions. This information is used to try to place the new game session where it can offer the best possible gameplay experience for the players.

" + "documentation":"

Set of values, expressed in milliseconds, indicating the amount of latency that a player experiences when connected to AWS regions. This information is used to try to place the new game session where it can offer the best possible gameplay experience for the players.

" }, "DesiredPlayerSessions":{ "shape":"DesiredPlayerSessionList", "documentation":"

Set of information on each player to create a player session for.

" + }, + "GameSessionData":{ + "shape":"GameSessionData", + "documentation":"

Set of developer-defined game session properties, formatted as a single string value. This data is included in the GameSession object, which is passed to the game server with a request to start a new game session (see Start a Game Session).

" } }, "documentation":"

Represents the input for a request action.

" @@ -3297,6 +4296,38 @@ }, "documentation":"

Represents the returned data in response to a request action.

" }, + "StartMatchmakingInput":{ + "type":"structure", + "required":[ + "ConfigurationName", + "Players" + ], + "members":{ + "TicketId":{ + "shape":"MatchmakingIdStringModel", + "documentation":"

Unique identifier for a matchmaking ticket. Use this identifier to track the matchmaking ticket status and retrieve match results.

" + }, + "ConfigurationName":{ + "shape":"MatchmakingIdStringModel", + "documentation":"

Name of the matchmaking configuration to use for this request. Matchmaking configurations must exist in the same region as this request.

" + }, + "Players":{ + "shape":"PlayerList", + "documentation":"

Information on each player to be matched. This information must include a player ID, and may contain player attributes and latency data to be used in the matchmaking process. After a successful match, Player objects contain the name of the team the player is assigned to.
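
A sketch of a Players value (attribute names must match the rule set in use; the ID, attribute, and latency values here are hypothetical):

\"Players\": [{\"PlayerId\": \"player-1\", \"PlayerAttributes\": {\"skill\": {\"N\": \"23\"}}, \"LatencyInMs\": {\"us-east-1\": 40}}]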

" + } + }, + "documentation":"

Represents the input for a request action.

" + }, + "StartMatchmakingOutput":{ + "type":"structure", + "members":{ + "MatchmakingTicket":{ + "shape":"MatchmakingTicket", + "documentation":"

Ticket representing the matchmaking request. This object includes the information in the request, the ticket status, and the match results as generated during the matchmaking process.

" + } + }, + "documentation":"

Represents the returned data in response to a request action.

" + }, "StopGameSessionPlacementInput":{ "type":"structure", "required":["PlacementId"], @@ -3313,15 +4344,37 @@ "members":{ "GameSessionPlacement":{ "shape":"GameSessionPlacement", - "documentation":"

Object that describes the canceled game session placement, with Cancelled status and an end time stamp.

" + "documentation":"

Object that describes the canceled game session placement, with CANCELLED status and an end time stamp.

" } }, "documentation":"

Represents the returned data in response to a request action.

" }, + "StopMatchmakingInput":{ + "type":"structure", + "required":["TicketId"], + "members":{ + "TicketId":{ + "shape":"MatchmakingIdStringModel", + "documentation":"

Unique identifier for a matchmaking ticket.

" + } + }, + "documentation":"

Represents the input for a request action.

" + }, + "StopMatchmakingOutput":{ + "type":"structure", + "members":{ + } + }, + "StringDoubleMap":{ + "type":"map", + "key":{"shape":"NonZeroAndMaxString"}, + "value":{"shape":"DoubleObject"} + }, "StringList":{ "type":"list", "member":{"shape":"NonZeroAndMaxString"} }, + "StringModel":{"type":"string"}, "TerminalRoutingStrategyException":{ "type":"structure", "members":{ @@ -3339,6 +4392,14 @@ "documentation":"

The client failed authentication. Clients should not retry such requests.

", "exception":true }, + "UnsupportedRegionException":{ + "type":"structure", + "members":{ + "Message":{"shape":"NonEmptyString"} + }, + "documentation":"

The requested operation is not supported in the region specified.

", + "exception":true + }, "UpdateAliasInput":{ "type":"structure", "required":["AliasId"], @@ -3419,7 +4480,7 @@ }, "NewGameSessionProtectionPolicy":{ "shape":"ProtectionPolicy", - "documentation":"

Game session protection policy to apply to all new instances created in this fleet. Instances that already exist are not affected. You can set protection for individual instances using UpdateGameSession.

" + "documentation":"

Game session protection policy to apply to all new instances created in this fleet. Instances that already exist are not affected. You can set protection for individual instances using UpdateGameSession.

" }, "ResourceCreationLimitPolicy":{ "shape":"ResourceCreationLimitPolicy", @@ -3427,7 +4488,7 @@ }, "MetricGroups":{ "shape":"MetricGroupList", - "documentation":"

Names of metric groups to include this fleet with. A fleet metric group is used in Amazon CloudWatch to aggregate metrics from multiple fleets. Use an existing metric group name to add this fleet to the group, or use a new name to create a new metric group. Currently, a fleet can only be included in one metric group at a time.

" + "documentation":"

Names of metric groups to include this fleet in. Amazon CloudWatch uses a fleet metric group to aggregate metrics from multiple fleets. Use an existing metric group name to add this fleet to the group, or use a new name to create a new metric group. A fleet can only be included in one metric group at a time.

" } }, "documentation":"

Represents the input for a request action.

" @@ -3526,7 +4587,7 @@ }, "ProtectionPolicy":{ "shape":"ProtectionPolicy", - "documentation":"

Game session protection policy to apply to this game session only.

" + "documentation":"

Game session protection policy to apply to this game session only.

" } }, "documentation":"

Represents the input for a request action.

" @@ -3547,11 +4608,11 @@ "members":{ "Name":{ "shape":"GameSessionQueueName", - "documentation":"

Descriptive label that is associated with queue. Queue names must be unique within each region.

" + "documentation":"

Descriptive label that is associated with a game session queue. Queue names must be unique within each region.

" }, "TimeoutInSeconds":{ "shape":"WholeNumber", - "documentation":"

Maximum time, in seconds, that a new game session placement request remains in the queue. When a request exceeds this time, the game session placement changes to a TIMED_OUT status.

" + "documentation":"

Maximum time, in seconds, that a new game session placement request remains in the queue. When a request exceeds this time, the game session placement changes to a TIMED_OUT status.

" }, "PlayerLatencyPolicies":{ "shape":"PlayerLatencyPolicyList", @@ -3574,6 +4635,71 @@ }, "documentation":"

Represents the returned data in response to a request action.

" }, + "UpdateMatchmakingConfigurationInput":{ + "type":"structure", + "required":["Name"], + "members":{ + "Name":{ + "shape":"MatchmakingIdStringModel", + "documentation":"

Unique identifier for a matchmaking configuration to update.

" + }, + "Description":{ + "shape":"NonZeroAndMaxString", + "documentation":"

Descriptive label that is associated with a matchmaking configuration.

" + }, + "GameSessionQueueArns":{ + "shape":"QueueArnsList", + "documentation":"

Amazon Resource Name (ARN) that is assigned to a game session queue and uniquely identifies it. Format is arn:aws:gamelift:<region>::gamesessionqueue/<queue name>. These queues are used when placing game sessions for matches that are created with this matchmaking configuration. Queues can be located in any region.

" + }, + "RequestTimeoutSeconds":{ + "shape":"MatchmakingRequestTimeoutInteger", + "documentation":"

Maximum duration, in seconds, that a matchmaking ticket can remain in process before timing out. Requests that time out can be resubmitted as needed.

" + }, + "AcceptanceTimeoutSeconds":{ + "shape":"MatchmakingAcceptanceTimeoutInteger", + "documentation":"

Length of time (in seconds) to wait for players to accept a proposed match. If any player rejects the match or fails to accept before the timeout, the ticket continues to look for an acceptable match.

" + }, + "AcceptanceRequired":{ + "shape":"Boolean", + "documentation":"

Flag that determines whether or not a match that was created with this configuration must be accepted by the matched players. To require acceptance, set to TRUE.

" + }, + "RuleSetName":{ + "shape":"MatchmakingIdStringModel", + "documentation":"

Unique identifier for a matchmaking rule set to use with this configuration. A matchmaking configuration can only use rule sets that are defined in the same region.

" + }, + "NotificationTarget":{ + "shape":"SnsArnStringModel", + "documentation":"

SNS topic ARN that is set up to receive matchmaking notifications. See Setting up Notifications for Matchmaking for more information.

" + }, + "AdditionalPlayerCount":{ + "shape":"WholeNumber", + "documentation":"

Number of player slots in a match to keep open for future players. For example, if the configuration's rule set specifies a match for a single 12-person team, and the additional player count is set to 2, only 10 players are selected for the match.

" + }, + "CustomEventData":{ + "shape":"CustomEventData", + "documentation":"

Information to attach to all events related to the matchmaking configuration.

" + }, + "GameProperties":{ + "shape":"GamePropertyList", + "documentation":"

Set of developer-defined properties for a game session, formatted as a set of type:value pairs. These properties are included in the GameSession object, which is passed to the game server with a request to start a new game session (see Start a Game Session). This information is added to the new GameSession object that is created for a successful match.

" + }, + "GameSessionData":{ + "shape":"GameSessionData", + "documentation":"

Set of developer-defined game session properties, formatted as a single string value. This data is included in the GameSession object, which is passed to the game server with a request to start a new game session (see Start a Game Session). This information is added to the new GameSession object that is created for a successful match.

" + } + }, + "documentation":"

Represents the input for a request action.
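
A sketch of an update request that turns on player acceptance for an existing configuration (the configuration and rule set names are hypothetical):

{\"Name\": \"my-configuration\", \"RuleSetName\": \"my-rule-set\", \"RequestTimeoutSeconds\": 120, \"AcceptanceRequired\": true, \"AcceptanceTimeoutSeconds\": 30}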

" + }, + "UpdateMatchmakingConfigurationOutput":{ + "type":"structure", + "members":{ + "Configuration":{ + "shape":"MatchmakingConfiguration", + "documentation":"

Object that describes the updated matchmaking configuration.

" + } + }, + "documentation":"

Represents the returned data in response to a request action.

" + }, "UpdateRuntimeConfigurationInput":{ "type":"structure", "required":[ @@ -3583,11 +4709,11 @@ "members":{ "FleetId":{ "shape":"FleetId", - "documentation":"

Unique identifier for a fleet to update runtime configuration for.

" + "documentation":"

Unique identifier for a fleet to update run-time configuration for.

" }, "RuntimeConfiguration":{ "shape":"RuntimeConfiguration", - "documentation":"

Instructions for launching server processes on each instance in the fleet. The runtime configuration for a fleet has a collection of server process configurations, one for each type of server process to run on an instance. A server process configuration specifies the location of the server executable, launch parameters, and the number of concurrent processes with that configuration to maintain on each instance.

" + "documentation":"

Instructions for launching server processes on each instance in the fleet. The run-time configuration for a fleet has a collection of server process configurations, one for each type of server process to run on an instance. A server process configuration specifies the location of the server executable, launch parameters, and the number of concurrent processes with that configuration to maintain on each instance.

" } }, "documentation":"

Represents the input for a request action.

" @@ -3597,15 +4723,114 @@ "members":{ "RuntimeConfiguration":{ "shape":"RuntimeConfiguration", - "documentation":"

The runtime configuration currently in force. If the update was successful, this object matches the one in the request.

" + "documentation":"

The run-time configuration currently in force. If the update was successful, this object matches the one in the request.

" + } + }, + "documentation":"

Represents the returned data in response to a request action.

" + }, + "ValidateMatchmakingRuleSetInput":{ + "type":"structure", + "required":["RuleSetBody"], + "members":{ + "RuleSetBody":{ + "shape":"RuleSetBody", + "documentation":"

Collection of matchmaking rules to validate, formatted as a JSON string.

" + } + }, + "documentation":"

Represents the input for a request action.

" + }, + "ValidateMatchmakingRuleSetOutput":{ + "type":"structure", + "members":{ + "Valid":{ + "shape":"Boolean", + "documentation":"

Response indicating whether or not the rule set is valid.

" } }, "documentation":"

Represents the returned data in response to a request action.

" }, + "VpcPeeringAuthorization":{ + "type":"structure", + "members":{ + "GameLiftAwsAccountId":{ + "shape":"NonZeroAndMaxString", + "documentation":"

Unique identifier for the AWS account that you use to manage your Amazon GameLift fleet. You can find your Account ID in the AWS Management Console under account settings.

" + }, + "PeerVpcAwsAccountId":{ + "shape":"NonZeroAndMaxString", + "documentation":"

" + }, + "PeerVpcId":{ + "shape":"NonZeroAndMaxString", + "documentation":"

Unique identifier for a VPC with resources to be accessed by your Amazon GameLift fleet. The VPC must be in the same region where your fleet is deployed. To get VPC information, including IDs, use the Virtual Private Cloud service tools, including the VPC Dashboard in the AWS Management Console.

" + }, + "CreationTime":{ + "shape":"Timestamp", + "documentation":"

Time stamp indicating when this authorization was issued. Format is a number expressed in Unix time as milliseconds (for example \"1469498468.057\").

" + }, + "ExpirationTime":{ + "shape":"Timestamp", + "documentation":"

Time stamp indicating when this authorization expires (24 hours after issuance). Format is a number expressed in Unix time as milliseconds (for example \"1469498468.057\").

" + } + }, + "documentation":"

Represents an authorization for a VPC peering connection between the VPC for an Amazon GameLift fleet and another VPC on an account you have access to. This authorization must exist and be valid for the peering connection to be established. Authorizations are valid for 24 hours after they are issued.

VPC peering connection operations include:

" + }, + "VpcPeeringAuthorizationList":{ + "type":"list", + "member":{"shape":"VpcPeeringAuthorization"} + }, + "VpcPeeringConnection":{ + "type":"structure", + "members":{ + "FleetId":{ + "shape":"FleetId", + "documentation":"

Unique identifier for a fleet. This ID determines the ID of the Amazon GameLift VPC for your fleet.

" + }, + "IpV4CidrBlock":{ + "shape":"NonZeroAndMaxString", + "documentation":"

CIDR block of IPv4 addresses assigned to the VPC peering connection for the GameLift VPC. The peered VPC also has an IPv4 CIDR block associated with it; these blocks cannot overlap or the peering connection cannot be created.

" + }, + "VpcPeeringConnectionId":{ + "shape":"NonZeroAndMaxString", + "documentation":"

Unique identifier that is automatically assigned to the connection record. This ID is referenced in VPC peering connection events, and is used when deleting a connection with DeleteVpcPeeringConnection.

" + }, + "Status":{ + "shape":"VpcPeeringConnectionStatus", + "documentation":"

Object that contains status information about the connection. Status indicates if a connection is pending, successful, or failed.

" + }, + "PeerVpcId":{ + "shape":"NonZeroAndMaxString", + "documentation":"

Unique identifier for a VPC with resources to be accessed by your Amazon GameLift fleet. The VPC must be in the same region where your fleet is deployed. To get VPC information, including IDs, use the Virtual Private Cloud service tools, including the VPC Dashboard in the AWS Management Console.

" + }, + "GameLiftVpcId":{ + "shape":"NonZeroAndMaxString", + "documentation":"

Unique identifier for the VPC that contains the Amazon GameLift fleet for this connection. This VPC is managed by Amazon GameLift and does not appear in your AWS account.

" + } + }, + "documentation":"

Represents a peering connection between a VPC on one of your AWS accounts and the VPC for your Amazon GameLift fleets. This record may be for an active peering connection or a pending connection that has not yet been established.

VPC peering connection operations include:

" + }, + "VpcPeeringConnectionList":{ + "type":"list", + "member":{"shape":"VpcPeeringConnection"} + }, + "VpcPeeringConnectionStatus":{ + "type":"structure", + "members":{ + "Code":{ + "shape":"NonZeroAndMaxString", + "documentation":"

Code indicating the status of a VPC peering connection.

" + }, + "Message":{ + "shape":"NonZeroAndMaxString", + "documentation":"

Additional messaging associated with the connection status.

" + } + }, + "documentation":"

Represents status information for a VPC peering connection. Status is associated with a VpcPeeringConnection object. Status codes and messages are provided by EC2. Connection status information is also communicated as a fleet Event.

" + }, "WholeNumber":{ "type":"integer", "min":0 } }, - "documentation":"Amazon GameLift Service

Amazon GameLift is a managed service for developers who need a scalable, dedicated server solution for their multiplayer games. Amazon GameLift provides tools to acquire computing resources and deploy game servers, scale game server capacity to meet player demand, and track in-depth metrics on player usage and server performance.

The Amazon GameLift service API includes important features:

This reference guide describes the low-level service API for Amazon GameLift. We recommend using either the Amazon Web Services software development kit (AWS SDK), available in multiple languages, or the AWS command-line interface (CLI) tool. Both of these align with the low-level service API. In addition, you can use the AWS Management Console for Amazon GameLift for many administrative actions.

You can use some API actions with Amazon GameLift Local, a testing tool that lets you test your game integration locally before deploying on Amazon GameLift. You can call these APIs from the AWS CLI or programmatically; API calls to Amazon GameLift Local servers perform exactly as they do when calling Amazon GameLift web servers. For more information on using Amazon GameLift Local, see Testing an Integration.

MORE RESOURCES

API SUMMARY

This list offers a functional overview of the Amazon GameLift service API.

Finding Games and Joining Players

You can enable players to connect to game servers on Amazon GameLift from a game client or through a game service (such as a matchmaking service). You can use these operations to discover actively running game or start new games. You can also match players to games, either singly or as a group.

Setting Up and Managing Game Servers

When setting up Amazon GameLift, first create a game build and upload the files to Amazon GameLift. Then use these operations to set up a fleet of resources to run your game servers. Manage games to scale capacity, adjust configuration settings, access raw utilization data, and more.

" + "documentation":"Amazon GameLift Service

Amazon GameLift is a managed service for developers who need a scalable, dedicated server solution for their multiplayer games. Amazon GameLift provides tools for the following tasks: (1) acquire computing resources and deploy game servers, (2) scale game server capacity to meet player demand, (3) host game sessions and manage player access, and (4) track in-depth metrics on player usage and server performance.

The Amazon GameLift service API includes two important function sets:

This reference guide describes the low-level service API for Amazon GameLift. You can use the API functionality with these tools:

MORE RESOURCES

API SUMMARY

This list offers a functional overview of the Amazon GameLift service API.

Managing Games and Players

Use these actions to start new game sessions, find existing game sessions, track game session status and other information, and enable player access to game sessions.

Setting Up and Managing Game Servers

When setting up Amazon GameLift resources for your game, you first create a game build and upload it to Amazon GameLift. You can then use these actions to configure and manage a fleet of resources to run your game servers, scale capacity to meet player demand, access performance and utilization metrics, and more.

" } diff --git a/services/greengrass/src/main/resources/codegen-resources/service-2.json b/services/greengrass/src/main/resources/codegen-resources/service-2.json index 2a2ed8cceaad..00ef8d1da986 100644 --- a/services/greengrass/src/main/resources/codegen-resources/service-2.json +++ b/services/greengrass/src/main/resources/codegen-resources/service-2.json @@ -1114,6 +1114,26 @@ "errors" : [ ], "documentation" : "Retrieves a list of subscription definitions." }, + "ResetDeployments" : { + "name" : "ResetDeployments", + "http" : { + "method" : "POST", + "requestUri" : "/greengrass/groups/{GroupId}/deployments/$reset", + "responseCode" : 200 + }, + "input" : { + "shape" : "ResetDeploymentsRequest" + }, + "output" : { + "shape" : "ResetDeploymentsResponse", + "documentation" : "Success. The group's deployments were reset." + }, + "errors" : [ { + "shape" : "BadRequestException", + "documentation" : "invalid request" + } ], + "documentation" : "Resets a group's deployments." + }, "UpdateConnectivityInfo" : { "name" : "UpdateConnectivityInfo", "http" : { @@ -1334,7 +1354,7 @@ }, "Message" : { "shape" : "__string", - "documentation" : "Message" + "documentation" : "Message containing information about the error" } }, "documentation" : "General Error", @@ -1508,7 +1528,7 @@ }, "DeploymentType" : { "shape" : "DeploymentType", - "documentation" : "Type of deployment" + "documentation" : "Type of deployment. When used in CreateDeployment, only NewDeployment and Redeployment are valid. " }, "GroupId" : { "shape" : "__string", @@ -1528,11 +1548,11 @@ "members" : { "DeploymentArn" : { "shape" : "__string", - "documentation" : "Arn of the deployment." + "documentation" : "The arn of the deployment." }, "DeploymentId" : { "shape" : "__string", - "documentation" : "Id of the deployment." + "documentation" : "The id of the deployment." } } }, @@ -2200,6 +2220,10 @@ "shape" : "__string", "documentation" : "Id of the deployment." }, + "DeploymentType" : { + "shape" : "DeploymentType", + "documentation" : "The type of deployment." + }, "GroupArn" : { "shape" : "__string", "documentation" : "Arn of the group for this deployment." @@ -2209,7 +2233,7 @@ }, "DeploymentType" : { "type" : "string", - "enum" : [ "NewDeployment", "Redeployment" ] + "enum" : [ "NewDeployment", "Redeployment", "ResetDeployment", "ForceResetDeployment" ] }, "Deployments" : { "type" : "list", @@ -2386,7 +2410,7 @@ }, "Message" : { "shape" : "__string", - "documentation" : "Message" + "documentation" : "Message containing information about the error" } }, "documentation" : "General Error" @@ -2433,7 +2457,7 @@ "members" : { "ConnectivityInfo" : { "shape" : "ListOfConnectivityInfo", - "documentation" : "Connectivity info array" + "documentation" : "Connectivity info list" }, "Message" : { "shape" : "__string", @@ -2555,6 +2579,14 @@ "shape" : "__string", "documentation" : "Status of the deployment." }, + "DeploymentType" : { + "shape" : "DeploymentType", + "documentation" : "The type of the deployment." + }, + "ErrorDetails" : { + "shape" : "ErrorDetails", + "documentation" : "The error Details" + }, "ErrorMessage" : { "shape" : "__string", "documentation" : "Error Message" @@ -2728,7 +2760,8 @@ "documentation" : "Timestamp when the funtion definition version was created." }, "Definition" : { - "shape" : "FunctionDefinitionVersion" + "shape" : "FunctionDefinitionVersion", + "documentation" : "Information on the definition." }, "Id" : { "shape" : "__string", @@ -3149,7 +3182,7 @@ "documentation" : "Name of a group." 
} }, - "documentation" : "Information of a group" + "documentation" : "Information on the group" }, "GroupVersion" : { "type" : "structure", @@ -3186,7 +3219,7 @@ }, "Message" : { "shape" : "__string", - "documentation" : "Message" + "documentation" : "Message containing information about the error" } }, "documentation" : "General Error", @@ -3274,7 +3307,7 @@ "documentation" : "The token for the next set of results, or ''null'' if there are no additional results." } }, - "documentation" : "List of definition response" + "documentation" : "List of definition responses" }, "ListDeploymentsRequest" : { "type" : "structure", @@ -3305,7 +3338,7 @@ "members" : { "Deployments" : { "shape" : "Deployments", - "documentation" : "Information on deployments" + "documentation" : "List of deployments for the requested groups" }, "NextToken" : { "shape" : "__string", @@ -3800,6 +3833,42 @@ "shape" : "__string" } }, + "ResetDeploymentsRequest" : { + "type" : "structure", + "members" : { + "AmznClientToken" : { + "shape" : "__string", + "location" : "header", + "locationName" : "X-Amzn-Client-Token", + "documentation" : "The client token used to request idempotent operations." + }, + "Force" : { + "shape" : "__boolean", + "documentation" : "When set to true, perform a best-effort only core reset." + }, + "GroupId" : { + "shape" : "__string", + "location" : "uri", + "locationName" : "GroupId", + "documentation" : "The unique Id of the AWS Greengrass Group" + } + }, + "documentation" : "Information needed to perform a reset of a group's deployments.", + "required" : [ "GroupId" ] + }, + "ResetDeploymentsResponse" : { + "type" : "structure", + "members" : { + "DeploymentArn" : { + "shape" : "__string", + "documentation" : "The arn of the reset deployment." + }, + "DeploymentId" : { + "shape" : "__string", + "documentation" : "The id of the reset deployment." + } + } + }, "Subscription" : { "type" : "structure", "members" : { @@ -3837,7 +3906,7 @@ "members" : { "ConnectivityInfo" : { "shape" : "ListOfConnectivityInfo", - "documentation" : "Connectivity info array" + "documentation" : "Connectivity info list" }, "ThingName" : { "shape" : "__string", diff --git a/services/iam/src/main/resources/codegen-resources/service-2.json b/services/iam/src/main/resources/codegen-resources/service-2.json index f2e1347ecd98..10afd344a643 100644 --- a/services/iam/src/main/resources/codegen-resources/service-2.json +++ b/services/iam/src/main/resources/codegen-resources/service-2.json @@ -68,6 +68,7 @@ {"shape":"NoSuchEntityException"}, {"shape":"LimitExceededException"}, {"shape":"InvalidInputException"}, + {"shape":"PolicyNotAttachableException"}, {"shape":"ServiceFailureException"} ], "documentation":"

Attaches the specified managed policy to the specified IAM group.

You use this API to attach a managed policy to a group. To embed an inline policy in a group, use PutGroupPolicy.

For more information about policies, see Managed Policies and Inline Policies in the IAM User Guide.

" @@ -84,6 +85,7 @@ {"shape":"LimitExceededException"}, {"shape":"InvalidInputException"}, {"shape":"UnmodifiableEntityException"}, + {"shape":"PolicyNotAttachableException"}, {"shape":"ServiceFailureException"} ], "documentation":"

Attaches the specified managed policy to the specified IAM role. When you attach a managed policy to a role, the managed policy becomes part of the role's permission (access) policy.

You cannot use a managed policy as the role's trust policy. The role's trust policy is created at the same time as the role, using CreateRole. You can update a role's trust policy using UpdateAssumeRolePolicy.

Use this API to attach a managed policy to a role. To embed an inline policy in a role, use PutRolePolicy. For more information about policies, see Managed Policies and Inline Policies in the IAM User Guide.

" @@ -99,6 +101,7 @@ {"shape":"NoSuchEntityException"}, {"shape":"LimitExceededException"}, {"shape":"InvalidInputException"}, + {"shape":"PolicyNotAttachableException"}, {"shape":"ServiceFailureException"} ], "documentation":"

Attaches the specified managed policy to the specified user.

You use this API to attach a managed policy to a user. To embed an inline policy in a user, use PutUserPolicy.

For more information about policies, see Managed Policies and Inline Policies in the IAM User Guide.

" @@ -615,6 +618,24 @@ ], "documentation":"

Deletes the specified server certificate.

For more information about working with server certificates, including a list of AWS services that can use the server certificates that you manage with IAM, go to Working with Server Certificates in the IAM User Guide.

If you are using a server certificate with Elastic Load Balancing, deleting the certificate could have implications for your application. If Elastic Load Balancing doesn't detect the deletion of bound certificates, it may continue to use the certificates. This could cause Elastic Load Balancing to stop accepting traffic. We recommend that you remove the reference to the certificate from Elastic Load Balancing before using this command to delete the certificate. For more information, go to DeleteLoadBalancerListeners in the Elastic Load Balancing API Reference.

" }, + "DeleteServiceLinkedRole":{ + "name":"DeleteServiceLinkedRole", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"DeleteServiceLinkedRoleRequest"}, + "output":{ + "shape":"DeleteServiceLinkedRoleResponse", + "resultWrapper":"DeleteServiceLinkedRoleResult" + }, + "errors":[ + {"shape":"NoSuchEntityException"}, + {"shape":"LimitExceededException"}, + {"shape":"ServiceFailureException"} + ], + "documentation":"

Submits a service-linked role deletion request and returns a DeletionTaskId, which you can use to check the status of the deletion. Before you call this operation, confirm that the role has no active sessions and that any resources used by the role in the linked service are deleted. If you call this operation more than once for the same service-linked role and an earlier deletion task is not complete, then the DeletionTaskId of the earlier request is returned.

If you submit a deletion request for a service-linked role whose linked service is still accessing a resource, then the deletion task fails. If it fails, the GetServiceLinkedRoleDeletionStatus API operation returns the reason for the failure, including the resources that must be deleted. To delete the service-linked role, you must first remove those resources from the linked service and then submit the deletion request again. Resources are specific to the service that is linked to the role. For more information about removing resources from a service, see the AWS documentation for your service.

For more information about service-linked roles, see Roles Terms and Concepts: AWS Service-Linked Role in the IAM User Guide.
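
As an illustrative sketch (the service principal and role name are hypothetical), deleting a role named AWSServiceRoleForExampleService might return a DeletionTaskId such as task/aws-service-role/example.amazonaws.com/AWSServiceRoleForExampleService/<task-uuid>.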

" + }, "DeleteServiceSpecificCredential":{ "name":"DeleteServiceSpecificCredential", "http":{ @@ -1086,6 +1107,24 @@ ], "documentation":"

Retrieves information about the specified server certificate stored in IAM.

For more information about working with server certificates, including a list of AWS services that can use the server certificates that you manage with IAM, go to Working with Server Certificates in the IAM User Guide.

" }, + "GetServiceLinkedRoleDeletionStatus":{ + "name":"GetServiceLinkedRoleDeletionStatus", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"GetServiceLinkedRoleDeletionStatusRequest"}, + "output":{ + "shape":"GetServiceLinkedRoleDeletionStatusResponse", + "resultWrapper":"GetServiceLinkedRoleDeletionStatusResult" + }, + "errors":[ + {"shape":"NoSuchEntityException"}, + {"shape":"InvalidInputException"}, + {"shape":"ServiceFailureException"} + ], + "documentation":"

Retrieves the status of your service-linked role deletion. After you use the DeleteServiceLinkedRole API operation to submit a service-linked role for deletion, you can use the DeletionTaskId parameter in GetServiceLinkedRoleDeletionStatus to check the status of the deletion. If the deletion fails, this operation returns the reason that it failed.
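
For illustration, while the deletion is still running this operation returns a Status of IN_PROGRESS; a FAILED status is accompanied by a Reason object that identifies the resources blocking the deletion.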

" + }, "GetUser":{ "name":"GetUser", "http":{ @@ -2103,6 +2142,10 @@ } } }, + "ArnListType":{ + "type":"list", + "member":{"shape":"arnType"} + }, "AttachGroupPolicyRequest":{ "type":"structure", "required":[ @@ -2392,7 +2435,7 @@ "members":{ "PolicyName":{ "shape":"policyNameType", - "documentation":"

The friendly name of the policy.

This parameter allows (per its regex pattern) a string of characters consisting of upper and lowercase alphanumeric characters with no spaces. You can also include any of the following characters: =,.@-

" + "documentation":"

The friendly name of the policy.

This parameter allows (per its regex pattern) a string of characters consisting of upper and lowercase alphanumeric characters with no spaces. You can also include any of the following characters: =,.@-+

" }, "Path":{ "shape":"policyPathType", @@ -2720,7 +2763,7 @@ }, "PolicyName":{ "shape":"policyNameType", - "documentation":"

The name identifying the policy document to delete.

This parameter allows (per its regex pattern) a string of characters consisting of upper and lowercase alphanumeric characters with no spaces. You can also include any of the following characters: =,.@-

" + "documentation":"

The name identifying the policy document to delete.

This parameter allows (per its regex pattern) a string of characters consisting of upper and lowercase alphanumeric characters with no spaces. You can also include any of the following characters: =,.@-+

" } } }, @@ -2804,7 +2847,7 @@ }, "PolicyName":{ "shape":"policyNameType", - "documentation":"

The name of the inline policy to delete from the specified IAM role.

This parameter allows (per its regex pattern) a string of characters consisting of upper and lowercase alphanumeric characters with no spaces. You can also include any of the following characters: =,.@-

" + "documentation":"

The name of the inline policy to delete from the specified IAM role.

This parameter allows (per its regex pattern) a string of characters consisting of upper and lowercase alphanumeric characters with no spaces. You can also include any of the following characters: =,.@-+

" } } }, @@ -2855,6 +2898,26 @@ } } }, + "DeleteServiceLinkedRoleRequest":{ + "type":"structure", + "required":["RoleName"], + "members":{ + "RoleName":{ + "shape":"roleNameType", + "documentation":"

The name of the service-linked role to be deleted.

" + } + } + }, + "DeleteServiceLinkedRoleResponse":{ + "type":"structure", + "required":["DeletionTaskId"], + "members":{ + "DeletionTaskId":{ + "shape":"DeletionTaskIdType", + "documentation":"

The deletion task identifier that you can use to check the status of the deletion. This identifier is returned in the format task/aws-service-role/<service-principal-name>/<role-name>/<task-uuid>.

" + } + } + }, "DeleteServiceSpecificCredentialRequest":{ "type":"structure", "required":["ServiceSpecificCredentialId"], @@ -2896,7 +2959,7 @@ }, "PolicyName":{ "shape":"policyNameType", - "documentation":"

The name identifying the policy document to delete.

This parameter allows (per its regex pattern) a string of characters consisting of upper and lowercase alphanumeric characters with no spaces. You can also include any of the following characters: =,.@-

" + "documentation":"

The name identifying the policy document to delete.

This parameter allows (per its regex pattern) a string of characters consisting of upper and lowercase alphanumeric characters with no spaces. You can also include any of the following characters: =,.@-+

" } } }, @@ -2920,6 +2983,34 @@ } } }, + "DeletionTaskFailureReasonType":{ + "type":"structure", + "members":{ + "Reason":{ + "shape":"ReasonType", + "documentation":"

A short description of the reason that the service-linked role deletion failed.

" + }, + "RoleUsageList":{ + "shape":"RoleUsageListType", + "documentation":"

A list of objects that contains details about the service-linked role deletion failure. If the service-linked role has active sessions or if any resources that were used by the role have not been deleted from the linked service, the role can't be deleted. This parameter includes a list of the resources that are associated with the role and the region in which the resources are being used.

" + } + }, + "documentation":"

The reason that the service-linked role deletion failed.

This data type is used as a response element in the GetServiceLinkedRoleDeletionStatus operation.

" + }, + "DeletionTaskIdType":{ + "type":"string", + "max":1000, + "min":1 + }, + "DeletionTaskStatusType":{ + "type":"string", + "enum":[ + "SUCCEEDED", + "IN_PROGRESS", + "FAILED", + "NOT_STARTED" + ] + }, "DetachGroupPolicyRequest":{ "type":"structure", "required":[ @@ -3287,7 +3378,7 @@ }, "PolicyName":{ "shape":"policyNameType", - "documentation":"

The name of the policy document to get.

This parameter allows (per its regex pattern) a string of characters consisting of upper and lowercase alphanumeric characters with no spaces. You can also include any of the following characters: =,.@-

" + "documentation":"

The name of the policy document to get.

This parameter allows (per its regex pattern) a string of characters consisting of upper and lowercase alphanumeric characters with no spaces. You can also include any of the following characters: =,.@-+

" } } }, @@ -3492,7 +3583,7 @@ }, "PolicyName":{ "shape":"policyNameType", - "documentation":"

The name of the policy document to get.

This parameter allows (per its regex pattern) a string of characters consisting of upper and lowercase alphanumeric characters with no spaces. You can also include any of the following characters: =,.@-

" + "documentation":"

The name of the policy document to get.

This parameter allows (per its regex pattern) a string of characters consisting of upper and lowercase alphanumeric characters with no spaces. You can also include any of the following characters: =,.@-+

" } } }, @@ -3621,6 +3712,30 @@ }, "documentation":"

Contains the response to a successful GetServerCertificate request.

" }, + "GetServiceLinkedRoleDeletionStatusRequest":{ + "type":"structure", + "required":["DeletionTaskId"], + "members":{ + "DeletionTaskId":{ + "shape":"DeletionTaskIdType", + "documentation":"

The deletion task identifier. This identifier is returned by the DeleteServiceLinkedRole operation in the format task/aws-service-role/<service-principal-name>/<role-name>/<task-uuid>.

" + } + } + }, + "GetServiceLinkedRoleDeletionStatusResponse":{ + "type":"structure", + "required":["Status"], + "members":{ + "Status":{ + "shape":"DeletionTaskStatusType", + "documentation":"

The status of the deletion.

" + }, + "Reason":{ + "shape":"DeletionTaskFailureReasonType", + "documentation":"

An object that contains details about the reason the deletion failed.

" + } + } + }, "GetUserPolicyRequest":{ "type":"structure", "required":[ @@ -3634,7 +3749,7 @@ }, "PolicyName":{ "shape":"policyNameType", - "documentation":"

The name of the policy document to get.

This parameter allows (per its regex pattern) a string of characters consisting of upper and lowercase alphanumeric characters with no spaces. You can also include any of the following characters: =,.@-

" + "documentation":"

The name of the policy document to get.

This parameter allows (per its regex pattern) a string of characters consisting of upper and lowercase alphanumeric characters with no spaces. You can also include any of the following characters: =,.@-+

" } } }, @@ -4139,7 +4254,7 @@ "members":{ "PolicyNames":{ "shape":"policyNameListType", - "documentation":"

A list of policy names.

" + "documentation":"

A list of policy names.

This parameter allows (per its regex pattern) a string of characters consisting of upper and lowercase alphanumeric characters with no spaces. You can also include any of the following characters: =,.@-+

" }, "IsTruncated":{ "shape":"booleanType", @@ -5046,7 +5161,7 @@ "members":{ "message":{"shape":"policyEvaluationErrorMessage"} }, - "documentation":"

The request failed because a provided policy could not be successfully evaluated. An additional detail message indicates the source of the failure.

", + "documentation":"

The request failed because a provided policy could not be successfully evaluated. An additional detailed message indicates the source of the failure.

", "error":{ "code":"PolicyEvaluation", "httpStatusCode":500 @@ -5072,6 +5187,19 @@ "member":{"shape":"PolicyGroup"} }, "PolicyIdentifierType":{"type":"string"}, + "PolicyNotAttachableException":{ + "type":"structure", + "members":{ + "message":{"shape":"policyNotAttachableMessage"} + }, + "documentation":"

The request failed because AWS service role policies can only be attached to the service-linked role for that service.

", + "error":{ + "code":"PolicyNotAttachable", + "httpStatusCode":400, + "senderFault":true + }, + "exception":true + }, "PolicyRole":{ "type":"structure", "members":{ @@ -5170,7 +5298,7 @@ }, "PolicyName":{ "shape":"policyNameType", - "documentation":"

The name of the policy document.

This parameter allows (per its regex pattern) a string of characters consisting of upper and lowercase alphanumeric characters with no spaces. You can also include any of the following characters: =,.@-

" + "documentation":"

The name of the policy document.

This parameter allows (per its regex pattern) a string of characters consisting of upper and lowercase alphanumeric characters with no spaces. You can also include any of the following characters: =,.@-+

" }, "PolicyDocument":{ "shape":"policyDocumentType", @@ -5192,7 +5320,7 @@ }, "PolicyName":{ "shape":"policyNameType", - "documentation":"

The name of the policy document.

This parameter allows (per its regex pattern) a string of characters consisting of upper and lowercase alphanumeric characters with no spaces. You can also include any of the following characters: =,.@-

" + "documentation":"

The name of the policy document.

This parameter allows (per its regex pattern) a string of characters consisting of upper and lowercase alphanumeric characters with no spaces. You can also include any of the following characters: =,.@-+

" }, "PolicyDocument":{ "shape":"policyDocumentType", @@ -5214,7 +5342,7 @@ }, "PolicyName":{ "shape":"policyNameType", - "documentation":"

The name of the policy document.

This parameter allows (per its regex pattern) a string of characters consisting of upper and lowercase alphanumeric characters with no spaces. You can also include any of the following characters: =,.@-

" + "documentation":"

The name of the policy document.

This parameter allows (per its regex pattern) a string of characters consisting of upper and lowercase alphanumeric characters with no spaces. You can also include any of the following characters: =,.@-+

" }, "PolicyDocument":{ "shape":"policyDocumentType", @@ -5222,6 +5350,15 @@ } } }, + "ReasonType":{ + "type":"string", + "max":1000 + }, + "RegionNameType":{ + "type":"string", + "max":100, + "min":1 + }, "RemoveClientIDFromOpenIDConnectProviderRequest":{ "type":"structure", "required":[ @@ -5465,6 +5602,24 @@ }, "documentation":"

Contains information about an IAM role, including all of the role's policies.

This data type is used as a response element in the GetAccountAuthorizationDetails action.

" }, + "RoleUsageListType":{ + "type":"list", + "member":{"shape":"RoleUsageType"} + }, + "RoleUsageType":{ + "type":"structure", + "members":{ + "Region":{ + "shape":"RegionNameType", + "documentation":"

The name of the region where the service-linked role is being used.

" + }, + "Resources":{ + "shape":"ArnListType", + "documentation":"

The name of the resource that is using the service-linked role.

" + } + }, + "documentation":"

An object that contains details about how a service-linked role is used.

This data type is used as a response element in the GetServiceLinkedRoleDeletionStatus operation.

" + }, "SAMLMetadataDocumentType":{ "type":"string", "max":10000000, @@ -6369,7 +6524,7 @@ }, "PasswordLastUsed":{ "shape":"dateType", - "documentation":"

The date and time, in ISO 8601 date-time format, when the user's password was last used to sign in to an AWS website. For a list of AWS websites that capture a user's last sign-in time, see the Credential Reports topic in the Using IAM guide. If a password is used more than once in a five-minute span, only the first use is returned in this field. This field is null (not present) when:

This value is returned only in the GetUser and ListUsers actions.

" + "documentation":"

The date and time, in ISO 8601 date-time format, when the user's password was last used to sign in to an AWS website. For a list of AWS websites that capture a user's last sign-in time, see the Credential Reports topic in the Using IAM guide. If a password is used more than once in a five-minute span, only the first use is returned in this field. If this field is null (no value), the user never signed in with a password. This can be because:

A null value does not mean that the user never had a password. Also, if the user does not currently have a password but had one in the past, this field contains the date and time the most recent password was used.

This value is returned only in the GetUser and ListUsers actions.

" } }, "documentation":"

Contains information about an IAM user entity.

This data type is used as a response element in the following actions:

" @@ -6688,6 +6843,7 @@ "min":1, "pattern":"[\\w+=,.@-]+" }, + "policyNotAttachableMessage":{"type":"string"}, "policyPathType":{ "type":"string", "pattern":"((/[A-Za-z0-9\\.,\\+@=_-]+)*)/" diff --git a/services/inspector/src/main/resources/codegen-resources/service-2.json b/services/inspector/src/main/resources/codegen-resources/service-2.json index 3c9b3e408808..a93d0e72b9ee 100644 --- a/services/inspector/src/main/resources/codegen-resources/service-2.json +++ b/services/inspector/src/main/resources/codegen-resources/service-2.json @@ -1027,7 +1027,8 @@ "FAILED", "ERROR", "COMPLETED", - "COMPLETED_WITH_ERRORS" + "COMPLETED_WITH_ERRORS", + "CANCELED" ] }, "AssessmentRunStateChange":{ @@ -1330,7 +1331,7 @@ }, "userAttributesForFindings":{ "shape":"UserAttributeList", - "documentation":"

The user-defined attributes that are assigned to every finding that is generated by the assessment run that uses this assessment template.

" + "documentation":"

The user-defined attributes that are assigned to every finding that is generated by the assessment run that uses this assessment template. An attribute is a key and value pair (an Attribute object). Within an assessment template, each key must be unique.

" } } }, @@ -2669,6 +2670,13 @@ } } }, + "StopAction":{ + "type":"string", + "enum":[ + "START_EVALUATION", + "SKIP_EVALUATION" + ] + }, "StopAssessmentRunRequest":{ "type":"structure", "required":["assessmentRunArn"], @@ -2676,6 +2684,10 @@ "assessmentRunArn":{ "shape":"Arn", "documentation":"

The ARN of the assessment run that you want to stop.

" + }, + "stopAction":{ + "shape":"StopAction", + "documentation":"

An input option that can be set to either START_EVALUATION or SKIP_EVALUATION. START_EVALUATION (the default value) stops the AWS agent from collecting data and begins the results evaluation and the findings generation process. SKIP_EVALUATION cancels the assessment run immediately, after which no findings are generated.
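For illustration, a minimal sketch of passing the new stopAction through the generated AWS SDK for Java 2.x InspectorClient; the assessment run ARN is a placeholder, and the enum and member names follow the shapes added in this diff.

```java
import software.amazon.awssdk.services.inspector.InspectorClient;
import software.amazon.awssdk.services.inspector.model.StopAction;
import software.amazon.awssdk.services.inspector.model.StopAssessmentRunRequest;

public class StopRunWithoutFindings {
    public static void main(String[] args) {
        InspectorClient inspector = InspectorClient.create();

        // Placeholder ARN. SKIP_EVALUATION cancels the run immediately and skips findings
        // generation; omitting stopAction (or using START_EVALUATION) evaluates the data
        // collected so far.
        inspector.stopAssessmentRun(StopAssessmentRunRequest.builder()
                .assessmentRunArn("arn:aws:inspector:us-west-2:123456789012:target/0-example/template/0-example/run/0-example")
                .stopAction(StopAction.SKIP_EVALUATION)
                .build());
    }
}
```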

" } } }, diff --git a/services/kinesis/src/main/resources/codegen-resources/kinesis/service-2.json b/services/kinesis/src/main/resources/codegen-resources/kinesis/service-2.json index ec0d26ed3986..67f9b5aaf749 100644 --- a/services/kinesis/src/main/resources/codegen-resources/kinesis/service-2.json +++ b/services/kinesis/src/main/resources/codegen-resources/kinesis/service-2.json @@ -7,6 +7,7 @@ "protocol":"json", "serviceAbbreviation":"Kinesis", "serviceFullName":"Amazon Kinesis", + "serviceId":"Kinesis", "signatureVersion":"v4", "targetPrefix":"Kinesis_20131202", "uid":"kinesis-2013-12-02" @@ -25,7 +26,7 @@ {"shape":"InvalidArgumentException"}, {"shape":"LimitExceededException"} ], - "documentation":"

Adds or updates tags for the specified Amazon Kinesis stream. Each stream can have up to 10 tags.

If tags have already been assigned to the stream, AddTagsToStream overwrites any existing tags that correspond to the specified tag keys.

" + "documentation":"

Adds or updates tags for the specified Kinesis stream. Each stream can have up to 10 tags.

If tags have already been assigned to the stream, AddTagsToStream overwrites any existing tags that correspond to the specified tag keys.

" }, "CreateStream":{ "name":"CreateStream", @@ -39,7 +40,7 @@ {"shape":"LimitExceededException"}, {"shape":"InvalidArgumentException"} ], - "documentation":"

Creates an Amazon Kinesis stream. A stream captures and transports data records that are continuously emitted from different data sources or producers. Scale-out within a stream is explicitly supported by means of shards, which are uniquely identified groups of data records in a stream.

You specify and control the number of shards that a stream is composed of. Each shard can support reads up to 5 transactions per second, up to a maximum data read total of 2 MB per second. Each shard can support writes up to 1,000 records per second, up to a maximum data write total of 1 MB per second. You can add shards to a stream if the amount of data input increases and you can remove shards if the amount of data input decreases.

The stream name identifies the stream. The name is scoped to the AWS account used by the application. It is also scoped by region. That is, two streams in two different accounts can have the same name, and two streams in the same account, but in two different regions, can have the same name.

CreateStream is an asynchronous operation. Upon receiving a CreateStream request, Amazon Kinesis immediately returns and sets the stream status to CREATING. After the stream is created, Amazon Kinesis sets the stream status to ACTIVE. You should perform read and write operations only on an ACTIVE stream.

You receive a LimitExceededException when making a CreateStream request if you try to do one of the following:

For the default shard limit for an AWS account, see Streams Limits in the Amazon Kinesis Streams Developer Guide. If you need to increase this limit, contact AWS Support.

You can use DescribeStream to check the stream status, which is returned in StreamStatus.

CreateStream has a limit of 5 transactions per second per account.

" + "documentation":"

Creates a Kinesis stream. A stream captures and transports data records that are continuously emitted from different data sources or producers. Scale-out within a stream is explicitly supported by means of shards, which are uniquely identified groups of data records in a stream.

You specify and control the number of shards that a stream is composed of. Each shard can support reads up to 5 transactions per second, up to a maximum data read total of 2 MB per second. Each shard can support writes up to 1,000 records per second, up to a maximum data write total of 1 MB per second. If the amount of data input increases or decreases, you can add or remove shards.

The stream name identifies the stream. The name is scoped to the AWS account used by the application. It is also scoped by region. That is, two streams in two different accounts can have the same name, and two streams in the same account, but in two different regions, can have the same name.

CreateStream is an asynchronous operation. Upon receiving a CreateStream request, Kinesis Streams immediately returns and sets the stream status to CREATING. After the stream is created, Kinesis Streams sets the stream status to ACTIVE. You should perform read and write operations only on an ACTIVE stream.

You receive a LimitExceededException when making a CreateStream request if you try to do one of the following:

For the default shard limit for an AWS account, see Streams Limits in the Amazon Kinesis Streams Developer Guide. To increase this limit, contact AWS Support.

You can use DescribeStream to check the stream status, which is returned in StreamStatus.

CreateStream has a limit of 5 transactions per second per account.
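A minimal sketch of the create-then-poll pattern described above, assuming the AWS SDK for Java 2.x KinesisClient; the stream name and shard count are placeholders.

```java
import software.amazon.awssdk.services.kinesis.KinesisClient;
import software.amazon.awssdk.services.kinesis.model.CreateStreamRequest;
import software.amazon.awssdk.services.kinesis.model.DescribeStreamRequest;
import software.amazon.awssdk.services.kinesis.model.StreamStatus;

public class CreateStreamExample {
    public static void main(String[] args) throws InterruptedException {
        KinesisClient kinesis = KinesisClient.create();

        // CreateStream is asynchronous: the stream starts out in the CREATING state.
        kinesis.createStream(CreateStreamRequest.builder()
                .streamName("example-stream")   // placeholder name
                .shardCount(2)
                .build());

        // Poll DescribeStream until StreamStatus is ACTIVE before reading or writing.
        StreamStatus status;
        do {
            Thread.sleep(5_000);
            status = kinesis.describeStream(DescribeStreamRequest.builder()
                            .streamName("example-stream")
                            .build())
                    .streamDescription()
                    .streamStatus();
        } while (status != StreamStatus.ACTIVE);
    }
}
```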

" }, "DecreaseStreamRetentionPeriod":{ "name":"DecreaseStreamRetentionPeriod", @@ -53,7 +54,7 @@ {"shape":"ResourceNotFoundException"}, {"shape":"InvalidArgumentException"} ], - "documentation":"

Decreases the Amazon Kinesis stream's retention period, which is the length of time data records are accessible after they are added to the stream. The minimum value of a stream's retention period is 24 hours.

This operation may result in lost data. For example, if the stream's retention period is 48 hours and is decreased to 24 hours, any data already in the stream that is older than 24 hours is inaccessible.

" + "documentation":"

Decreases the Kinesis stream's retention period, which is the length of time data records are accessible after they are added to the stream. The minimum value of a stream's retention period is 24 hours.

This operation may result in lost data. For example, if the stream's retention period is 48 hours and is decreased to 24 hours, any data already in the stream that is older than 24 hours is inaccessible.

" }, "DeleteStream":{ "name":"DeleteStream", @@ -66,7 +67,7 @@ {"shape":"ResourceNotFoundException"}, {"shape":"LimitExceededException"} ], - "documentation":"

Deletes an Amazon Kinesis stream and all its shards and data. You must shut down any applications that are operating on the stream before you delete the stream. If an application attempts to operate on a deleted stream, it will receive the exception ResourceNotFoundException.

If the stream is in the ACTIVE state, you can delete it. After a DeleteStream request, the specified stream is in the DELETING state until Amazon Kinesis completes the deletion.

Note: Amazon Kinesis might continue to accept data read and write operations, such as PutRecord, PutRecords, and GetRecords, on a stream in the DELETING state until the stream deletion is complete.

When you delete a stream, any shards in that stream are also deleted, and any tags are dissociated from the stream.

You can use the DescribeStream operation to check the state of the stream, which is returned in StreamStatus.

DeleteStream has a limit of 5 transactions per second per account.

" + "documentation":"

Deletes a Kinesis stream and all its shards and data. You must shut down any applications that are operating on the stream before you delete the stream. If an application attempts to operate on a deleted stream, it receives the exception ResourceNotFoundException.

If the stream is in the ACTIVE state, you can delete it. After a DeleteStream request, the specified stream is in the DELETING state until Kinesis Streams completes the deletion.

Note: Kinesis Streams might continue to accept data read and write operations, such as PutRecord, PutRecords, and GetRecords, on a stream in the DELETING state until the stream deletion is complete.

When you delete a stream, any shards in that stream are also deleted, and any tags are dissociated from the stream.

You can use the DescribeStream operation to check the state of the stream, which is returned in StreamStatus.

DeleteStream has a limit of 5 transactions per second per account.

" }, "DescribeLimits":{ "name":"DescribeLimits", @@ -93,7 +94,21 @@ {"shape":"ResourceNotFoundException"}, {"shape":"LimitExceededException"} ], - "documentation":"

Describes the specified Amazon Kinesis stream.

The information returned includes the stream name, Amazon Resource Name (ARN), creation time, enhanced metric configuration, and shard map. The shard map is an array of shard objects. For each shard object, there is the hash key and sequence number ranges that the shard spans, and the IDs of any earlier shards that played in a role in creating the shard. Every record ingested in the stream is identified by a sequence number, which is assigned when the record is put into the stream.

You can limit the number of shards returned by each call. For more information, see Retrieving Shards from a Stream in the Amazon Kinesis Streams Developer Guide.

There are no guarantees about the chronological order shards returned. To process shards in chronological order, use the ID of the parent shard to track the lineage to the oldest shard.

This operation has a limit of 10 transactions per second per account.

" + "documentation":"

Describes the specified Kinesis stream.

The information returned includes the stream name, Amazon Resource Name (ARN), creation time, enhanced metric configuration, and shard map. The shard map is an array of shard objects. For each shard object, there are the hash key and sequence number ranges that the shard spans, and the IDs of any earlier shards that played a role in creating the shard. Every record ingested in the stream is identified by a sequence number, which is assigned when the record is put into the stream.

You can limit the number of shards returned by each call. For more information, see Retrieving Shards from a Stream in the Amazon Kinesis Streams Developer Guide.

There are no guarantees about the chronological order of the shards returned. To process shards in chronological order, use the ID of the parent shard to track the lineage to the oldest shard.

This operation has a limit of 10 transactions per second per account.

" + }, + "DescribeStreamSummary":{ + "name":"DescribeStreamSummary", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"DescribeStreamSummaryInput"}, + "output":{"shape":"DescribeStreamSummaryOutput"}, + "errors":[ + {"shape":"ResourceNotFoundException"}, + {"shape":"LimitExceededException"} + ], + "documentation":"

Provides a summarized description of the specified Kinesis stream without the shard list.

The information returned includes the stream name, Amazon Resource Name (ARN), status, record retention period, approximate creation time, monitoring, encryption details, and open shard count.
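A hedged sketch of calling the new operation with the AWS SDK for Java 2.x client; the StreamDescriptionSummary accessor names shown are assumptions, since the output shape is defined elsewhere in the model.

```java
import software.amazon.awssdk.services.kinesis.KinesisClient;
import software.amazon.awssdk.services.kinesis.model.DescribeStreamSummaryRequest;
import software.amazon.awssdk.services.kinesis.model.DescribeStreamSummaryResponse;

public class StreamSummaryExample {
    public static void main(String[] args) {
        KinesisClient kinesis = KinesisClient.create();

        DescribeStreamSummaryResponse summary = kinesis.describeStreamSummary(
                DescribeStreamSummaryRequest.builder()
                        .streamName("example-stream")   // placeholder name
                        .build());

        // Summary fields (assumed member names) avoid paging through the full shard list.
        System.out.println(summary.streamDescriptionSummary().streamStatus());
        System.out.println(summary.streamDescriptionSummary().openShardCount());
        System.out.println(summary.streamDescriptionSummary().retentionPeriodHours());
    }
}
```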

" }, "DisableEnhancedMonitoring":{ "name":"DisableEnhancedMonitoring", @@ -125,7 +140,7 @@ {"shape":"ResourceInUseException"}, {"shape":"ResourceNotFoundException"} ], - "documentation":"

Enables enhanced Amazon Kinesis stream monitoring for shard-level metrics.

" + "documentation":"

Enables enhanced Kinesis stream monitoring for shard-level metrics.

" }, "GetRecords":{ "name":"GetRecords", @@ -139,9 +154,15 @@ {"shape":"ResourceNotFoundException"}, {"shape":"InvalidArgumentException"}, {"shape":"ProvisionedThroughputExceededException"}, - {"shape":"ExpiredIteratorException"} + {"shape":"ExpiredIteratorException"}, + {"shape":"KMSDisabledException"}, + {"shape":"KMSInvalidStateException"}, + {"shape":"KMSAccessDeniedException"}, + {"shape":"KMSNotFoundException"}, + {"shape":"KMSOptInRequired"}, + {"shape":"KMSThrottlingException"} ], - "documentation":"

Gets data records from an Amazon Kinesis stream's shard.

Specify a shard iterator using the ShardIterator parameter. The shard iterator specifies the position in the shard from which you want to start reading data records sequentially. If there are no records available in the portion of the shard that the iterator points to, GetRecords returns an empty list. Note that it might take multiple calls to get to a portion of the shard that contains records.

You can scale by provisioning multiple shards per stream while considering service limits (for more information, see Streams Limits in the Amazon Kinesis Streams Developer Guide). Your application should have one thread per shard, each reading continuously from its stream. To read from a stream continually, call GetRecords in a loop. Use GetShardIterator to get the shard iterator to specify in the first GetRecords call. GetRecords returns a new shard iterator in NextShardIterator. Specify the shard iterator returned in NextShardIterator in subsequent calls to GetRecords. Note that if the shard has been closed, the shard iterator can't return more data and GetRecords returns null in NextShardIterator. You can terminate the loop when the shard is closed, or when the shard iterator reaches the record with the sequence number or other attribute that marks it as the last record to process.

Each data record can be up to 1 MB in size, and each shard can read up to 2 MB per second. You can ensure that your calls don't exceed the maximum supported size or throughput by using the Limit parameter to specify the maximum number of records that GetRecords can return. Consider your average record size when determining this limit.

The size of the data returned by GetRecords varies depending on the utilization of the shard. The maximum size of data that GetRecords can return is 10 MB. If a call returns this amount of data, subsequent calls made within the next 5 seconds throw ProvisionedThroughputExceededException. If there is insufficient provisioned throughput on the shard, subsequent calls made within the next 1 second throw ProvisionedThroughputExceededException. Note that GetRecords won't return any data when it throws an exception. For this reason, we recommend that you wait one second between calls to GetRecords; however, it's possible that the application will get exceptions for longer than 1 second.

To detect whether the application is falling behind in processing, you can use the MillisBehindLatest response attribute. You can also monitor the stream using CloudWatch metrics and other mechanisms (see Monitoring in the Amazon Kinesis Streams Developer Guide).

Each Amazon Kinesis record includes a value, ApproximateArrivalTimestamp, that is set when a stream successfully receives and stores a record. This is commonly referred to as a server-side timestamp, whereas a client-side timestamp is set when a data producer creates or sends the record to a stream (a data producer is any data source putting data records into a stream, for example with PutRecords). The timestamp has millisecond precision. There are no guarantees about the timestamp accuracy, or that the timestamp is always increasing. For example, records in a shard or across a stream might have timestamps that are out of order.

" + "documentation":"

Gets data records from a Kinesis stream's shard.

Specify a shard iterator using the ShardIterator parameter. The shard iterator specifies the position in the shard from which you want to start reading data records sequentially. If there are no records available in the portion of the shard that the iterator points to, GetRecords returns an empty list. It might take multiple calls to get to a portion of the shard that contains records.

You can scale by provisioning multiple shards per stream while considering service limits (for more information, see Streams Limits in the Amazon Kinesis Streams Developer Guide). Your application should have one thread per shard, each reading continuously from its shard. To read from a stream continually, call GetRecords in a loop. Use GetShardIterator to get the shard iterator to specify in the first GetRecords call. GetRecords returns a new shard iterator in NextShardIterator. Specify the shard iterator returned in NextShardIterator in subsequent calls to GetRecords. If the shard has been closed, the shard iterator can't return more data and GetRecords returns null in NextShardIterator. You can terminate the loop when the shard is closed, or when the shard iterator reaches the record with the sequence number or other attribute that marks it as the last record to process.

Each data record can be up to 1 MB in size, and each shard can read up to 2 MB per second. You can ensure that your calls don't exceed the maximum supported size or throughput by using the Limit parameter to specify the maximum number of records that GetRecords can return. Consider your average record size when determining this limit.

The size of the data returned by GetRecords varies depending on the utilization of the shard. The maximum size of data that GetRecords can return is 10 MB. If a call returns this amount of data, subsequent calls made within the next 5 seconds throw ProvisionedThroughputExceededException. If there is insufficient provisioned throughput on the shard, subsequent calls made within the next 1 second throw ProvisionedThroughputExceededException. GetRecords won't return any data when it throws an exception. For this reason, we recommend that you wait one second between calls to GetRecords; however, it's possible that the application will get exceptions for longer than 1 second.

To detect whether the application is falling behind in processing, you can use the MillisBehindLatest response attribute. You can also monitor the stream using CloudWatch metrics and other mechanisms (see Monitoring in the Amazon Kinesis Streams Developer Guide).

Each Amazon Kinesis record includes a value, ApproximateArrivalTimestamp, that is set when a stream successfully receives and stores a record. This is commonly referred to as a server-side time stamp, whereas a client-side time stamp is set when a data producer creates or sends the record to a stream (a data producer is any data source putting data records into a stream, for example with PutRecords). The time stamp has millisecond precision. There are no guarantees about the time stamp accuracy, or that the time stamp is always increasing. For example, records in a shard or across a stream might have time stamps that are out of order.
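The iterator-driven read loop described above, sketched with the AWS SDK for Java 2.x KinesisClient; the stream and shard identifiers are placeholders.

```java
import software.amazon.awssdk.services.kinesis.KinesisClient;
import software.amazon.awssdk.services.kinesis.model.GetRecordsRequest;
import software.amazon.awssdk.services.kinesis.model.GetRecordsResponse;
import software.amazon.awssdk.services.kinesis.model.GetShardIteratorRequest;
import software.amazon.awssdk.services.kinesis.model.ShardIteratorType;

public class ShardReader {
    public static void main(String[] args) throws InterruptedException {
        KinesisClient kinesis = KinesisClient.create();

        // Seed the loop with an iterator from GetShardIterator (placeholder stream/shard).
        String shardIterator = kinesis.getShardIterator(GetShardIteratorRequest.builder()
                .streamName("example-stream")
                .shardId("shardId-000000000000")
                .shardIteratorType(ShardIteratorType.TRIM_HORIZON)
                .build()).shardIterator();

        // Read continuously; a null NextShardIterator means the shard has been closed.
        while (shardIterator != null) {
            GetRecordsResponse result = kinesis.getRecords(GetRecordsRequest.builder()
                    .shardIterator(shardIterator)
                    .limit(1000)
                    .build());

            result.records().forEach(r ->
                    System.out.println(r.sequenceNumber() + " @ " + r.approximateArrivalTimestamp()));

            shardIterator = result.nextShardIterator();
            Thread.sleep(1_000); // recommended pause between GetRecords calls
        }
    }
}
```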

" }, "GetShardIterator":{ "name":"GetShardIterator", @@ -156,7 +177,7 @@ {"shape":"InvalidArgumentException"}, {"shape":"ProvisionedThroughputExceededException"} ], - "documentation":"

Gets an Amazon Kinesis shard iterator. A shard iterator expires five minutes after it is returned to the requester.

A shard iterator specifies the shard position from which to start reading data records sequentially. The position is specified using the sequence number of a data record in a shard. A sequence number is the identifier associated with every record ingested in the stream, and is assigned when a record is put into the stream. Each stream has one or more shards.

You must specify the shard iterator type. For example, you can set the ShardIteratorType parameter to read exactly from the position denoted by a specific sequence number by using the AT_SEQUENCE_NUMBER shard iterator type, or right after the sequence number by using the AFTER_SEQUENCE_NUMBER shard iterator type, using sequence numbers returned by earlier calls to PutRecord, PutRecords, GetRecords, or DescribeStream. In the request, you can specify the shard iterator type AT_TIMESTAMP to read records from an arbitrary point in time, TRIM_HORIZON to cause ShardIterator to point to the last untrimmed record in the shard in the system (the oldest data record in the shard), or LATEST so that you always read the most recent data in the shard.

When you read repeatedly from a stream, use a GetShardIterator request to get the first shard iterator for use in your first GetRecords request and for subsequent reads use the shard iterator returned by the GetRecords request in NextShardIterator. A new shard iterator is returned by every GetRecords request in NextShardIterator, which you use in the ShardIterator parameter of the next GetRecords request.

If a GetShardIterator request is made too often, you receive a ProvisionedThroughputExceededException. For more information about throughput limits, see GetRecords, and Streams Limits in the Amazon Kinesis Streams Developer Guide.

If the shard is closed, GetShardIterator returns a valid iterator for the last sequence number of the shard. Note that a shard can be closed as a result of using SplitShard or MergeShards.

GetShardIterator has a limit of 5 transactions per second per account per open shard.

" + "documentation":"

Gets an Amazon Kinesis shard iterator. A shard iterator expires five minutes after it is returned to the requester.

A shard iterator specifies the shard position from which to start reading data records sequentially. The position is specified using the sequence number of a data record in a shard. A sequence number is the identifier associated with every record ingested in the stream, and is assigned when a record is put into the stream. Each stream has one or more shards.

You must specify the shard iterator type. For example, you can set the ShardIteratorType parameter to read exactly from the position denoted by a specific sequence number by using the AT_SEQUENCE_NUMBER shard iterator type. Alternatively, you can read right after the sequence number by using the AFTER_SEQUENCE_NUMBER shard iterator type, using sequence numbers returned by earlier calls to PutRecord, PutRecords, GetRecords, or DescribeStream. In the request, you can specify the shard iterator type AT_TIMESTAMP to read records from an arbitrary point in time, TRIM_HORIZON to cause ShardIterator to point to the last untrimmed record in the shard in the system (the oldest data record in the shard), or LATEST so that you always read the most recent data in the shard.

When you read repeatedly from a stream, use a GetShardIterator request to get the first shard iterator for use in your first GetRecords request and for subsequent reads use the shard iterator returned by the GetRecords request in NextShardIterator. A new shard iterator is returned by every GetRecords request in NextShardIterator, which you use in the ShardIterator parameter of the next GetRecords request.

If a GetShardIterator request is made too often, you receive a ProvisionedThroughputExceededException. For more information about throughput limits, see GetRecords, and Streams Limits in the Amazon Kinesis Streams Developer Guide.

If the shard is closed, GetShardIterator returns a valid iterator for the last sequence number of the shard. A shard can be closed as a result of using SplitShard or MergeShards.

GetShardIterator has a limit of 5 transactions per second per account per open shard.

" }, "IncreaseStreamRetentionPeriod":{ "name":"IncreaseStreamRetentionPeriod", @@ -170,7 +191,7 @@ {"shape":"ResourceNotFoundException"}, {"shape":"InvalidArgumentException"} ], - "documentation":"

Increases the Amazon Kinesis stream's retention period, which is the length of time data records are accessible after they are added to the stream. The maximum value of a stream's retention period is 168 hours (7 days).

Upon choosing a longer stream retention period, this operation will increase the time period records are accessible that have not yet expired. However, it will not make previous data that has expired (older than the stream's previous retention period) accessible after the operation has been called. For example, if a stream's retention period is set to 24 hours and is increased to 168 hours, any data that is older than 24 hours will remain inaccessible to consumer applications.

" + "documentation":"

Increases the Amazon Kinesis stream's retention period, which is the length of time data records are accessible after they are added to the stream. The maximum value of a stream's retention period is 168 hours (7 days).

If you choose a longer stream retention period, this operation increases the time period during which records that have not yet expired are accessible. However, it does not make previous, expired data (older than the stream's previous retention period) accessible after the operation has been called. For example, if a stream's retention period is set to 24 hours and is increased to 168 hours, any data that is older than 24 hours remains inaccessible to consumer applications.

" }, "ListStreams":{ "name":"ListStreams", @@ -183,7 +204,7 @@ "errors":[ {"shape":"LimitExceededException"} ], - "documentation":"

Lists your Amazon Kinesis streams.

The number of streams may be too large to return from a single call to ListStreams. You can limit the number of returned streams using the Limit parameter. If you do not specify a value for the Limit parameter, Amazon Kinesis uses the default limit, which is currently 10.

You can detect if there are more streams available to list by using the HasMoreStreams flag from the returned output. If there are more streams available, you can request more streams by using the name of the last stream returned by the ListStreams request in the ExclusiveStartStreamName parameter in a subsequent request to ListStreams. The group of stream names returned by the subsequent request is then added to the list. You can continue this process until all the stream names have been collected in the list.

ListStreams has a limit of 5 transactions per second per account.

" + "documentation":"

Lists your Kinesis streams.

The number of streams may be too large to return from a single call to ListStreams. You can limit the number of returned streams using the Limit parameter. If you do not specify a value for the Limit parameter, Kinesis Streams uses the default limit, which is currently 10.

You can detect if there are more streams available to list by using the HasMoreStreams flag from the returned output. If there are more streams available, you can request more streams by using the name of the last stream returned by the ListStreams request in the ExclusiveStartStreamName parameter in a subsequent request to ListStreams. The group of stream names returned by the subsequent request is then added to the list. You can continue this process until all the stream names have been collected in the list.

ListStreams has a limit of 5 transactions per second per account.
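A sketch of the HasMoreStreams/ExclusiveStartStreamName pagination loop, assuming the AWS SDK for Java 2.x KinesisClient.

```java
import software.amazon.awssdk.services.kinesis.KinesisClient;
import software.amazon.awssdk.services.kinesis.model.ListStreamsRequest;
import software.amazon.awssdk.services.kinesis.model.ListStreamsResponse;

import java.util.ArrayList;
import java.util.List;

public class ListAllStreams {
    public static void main(String[] args) {
        KinesisClient kinesis = KinesisClient.create();

        List<String> streamNames = new ArrayList<>();
        String exclusiveStart = null;
        ListStreamsResponse page;
        do {
            ListStreamsRequest.Builder request = ListStreamsRequest.builder().limit(10);
            if (exclusiveStart != null) {
                // Continue after the last stream name returned by the previous call.
                request.exclusiveStartStreamName(exclusiveStart);
            }
            page = kinesis.listStreams(request.build());
            streamNames.addAll(page.streamNames());
            if (!page.streamNames().isEmpty()) {
                exclusiveStart = page.streamNames().get(page.streamNames().size() - 1);
            }
        } while (page.hasMoreStreams());

        streamNames.forEach(System.out::println);
    }
}
```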

" }, "ListTagsForStream":{ "name":"ListTagsForStream", @@ -198,7 +219,7 @@ {"shape":"InvalidArgumentException"}, {"shape":"LimitExceededException"} ], - "documentation":"

Lists the tags for the specified Amazon Kinesis stream.

" + "documentation":"

Lists the tags for the specified Kinesis stream.

" }, "MergeShards":{ "name":"MergeShards", @@ -213,7 +234,7 @@ {"shape":"InvalidArgumentException"}, {"shape":"LimitExceededException"} ], - "documentation":"

Merges two adjacent shards in an Amazon Kinesis stream and combines them into a single shard to reduce the stream's capacity to ingest and transport data. Two shards are considered adjacent if the union of the hash key ranges for the two shards form a contiguous set with no gaps. For example, if you have two shards, one with a hash key range of 276...381 and the other with a hash key range of 382...454, then you could merge these two shards into a single shard that would have a hash key range of 276...454. After the merge, the single child shard receives data for all hash key values covered by the two parent shards.

MergeShards is called when there is a need to reduce the overall capacity of a stream because of excess capacity that is not being used. You must specify the shard to be merged and the adjacent shard for a stream. For more information about merging shards, see Merge Two Shards in the Amazon Kinesis Streams Developer Guide.

If the stream is in the ACTIVE state, you can call MergeShards. If a stream is in the CREATING, UPDATING, or DELETING state, MergeShards returns a ResourceInUseException. If the specified stream does not exist, MergeShards returns a ResourceNotFoundException.

You can use DescribeStream to check the state of the stream, which is returned in StreamStatus.

MergeShards is an asynchronous operation. Upon receiving a MergeShards request, Amazon Kinesis immediately returns a response and sets the StreamStatus to UPDATING. After the operation is completed, Amazon Kinesis sets the StreamStatus to ACTIVE. Read and write operations continue to work while the stream is in the UPDATING state.

You use DescribeStream to determine the shard IDs that are specified in the MergeShards request.

If you try to operate on too many streams in parallel using CreateStream, DeleteStream, MergeShards or SplitShard, you will receive a LimitExceededException.

MergeShards has limit of 5 transactions per second per account.

" + "documentation":"

Merges two adjacent shards in a Kinesis stream and combines them into a single shard to reduce the stream's capacity to ingest and transport data. Two shards are considered adjacent if the union of the hash key ranges for the two shards form a contiguous set with no gaps. For example, if you have two shards, one with a hash key range of 276...381 and the other with a hash key range of 382...454, then you could merge these two shards into a single shard that would have a hash key range of 276...454. After the merge, the single child shard receives data for all hash key values covered by the two parent shards.

MergeShards is called when there is a need to reduce the overall capacity of a stream because of excess capacity that is not being used. You must specify the shard to be merged and the adjacent shard for a stream. For more information about merging shards, see Merge Two Shards in the Amazon Kinesis Streams Developer Guide.

If the stream is in the ACTIVE state, you can call MergeShards. If a stream is in the CREATING, UPDATING, or DELETING state, MergeShards returns a ResourceInUseException. If the specified stream does not exist, MergeShards returns a ResourceNotFoundException.

You can use DescribeStream to check the state of the stream, which is returned in StreamStatus.

MergeShards is an asynchronous operation. Upon receiving a MergeShards request, Amazon Kinesis immediately returns a response and sets the StreamStatus to UPDATING. After the operation is completed, Amazon Kinesis sets the StreamStatus to ACTIVE. Read and write operations continue to work while the stream is in the UPDATING state.

You use DescribeStream to determine the shard IDs that are specified in the MergeShards request.

If you try to operate on too many streams in parallel using CreateStream, DeleteStream, MergeShards or SplitShard, you will receive a LimitExceededException.

MergeShards has a limit of 5 transactions per second per account.
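A sketch of merging two shards, assuming the AWS SDK for Java 2.x KinesisClient; the stream name is a placeholder, and picking the first two shards is for illustration only.

```java
import software.amazon.awssdk.services.kinesis.KinesisClient;
import software.amazon.awssdk.services.kinesis.model.DescribeStreamRequest;
import software.amazon.awssdk.services.kinesis.model.MergeShardsRequest;
import software.amazon.awssdk.services.kinesis.model.Shard;

import java.util.List;

public class MergeTwoShards {
    public static void main(String[] args) {
        KinesisClient kinesis = KinesisClient.create();

        // Placeholder: take the first two shards of the stream. In practice you must
        // confirm their hash key ranges are adjacent (contiguous with no gaps).
        List<Shard> shards = kinesis.describeStream(DescribeStreamRequest.builder()
                        .streamName("example-stream")
                        .build())
                .streamDescription()
                .shards();

        kinesis.mergeShards(MergeShardsRequest.builder()
                .streamName("example-stream")
                .shardToMerge(shards.get(0).shardId())
                .adjacentShardToMerge(shards.get(1).shardId())
                .build());
    }
}
```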

" }, "PutRecord":{ "name":"PutRecord", @@ -226,9 +247,15 @@ "errors":[ {"shape":"ResourceNotFoundException"}, {"shape":"InvalidArgumentException"}, - {"shape":"ProvisionedThroughputExceededException"} + {"shape":"ProvisionedThroughputExceededException"}, + {"shape":"KMSDisabledException"}, + {"shape":"KMSInvalidStateException"}, + {"shape":"KMSAccessDeniedException"}, + {"shape":"KMSNotFoundException"}, + {"shape":"KMSOptInRequired"}, + {"shape":"KMSThrottlingException"} ], - "documentation":"

Writes a single data record into an Amazon Kinesis stream. Call PutRecord to send data into the stream for real-time ingestion and subsequent processing, one record at a time. Each shard can support writes up to 1,000 records per second, up to a maximum data write total of 1 MB per second.

You must specify the name of the stream that captures, stores, and transports the data; a partition key; and the data blob itself.

The data blob can be any type of data; for example, a segment from a log file, geographic/location data, website clickstream data, and so on.

The partition key is used by Amazon Kinesis to distribute data across shards. Amazon Kinesis segregates the data records that belong to a stream into multiple shards, using the partition key associated with each data record to determine which shard a given data record belongs to.

Partition keys are Unicode strings, with a maximum length limit of 256 characters for each key. An MD5 hash function is used to map partition keys to 128-bit integer values and to map associated data records to shards using the hash key ranges of the shards. You can override hashing the partition key to determine the shard by explicitly specifying a hash value using the ExplicitHashKey parameter. For more information, see Adding Data to a Stream in the Amazon Kinesis Streams Developer Guide.

PutRecord returns the shard ID of where the data record was placed and the sequence number that was assigned to the data record.

Sequence numbers increase over time and are specific to a shard within a stream, not across all shards within a stream. To guarantee strictly increasing ordering, write serially to a shard and use the SequenceNumberForOrdering parameter. For more information, see Adding Data to a Stream in the Amazon Kinesis Streams Developer Guide.

If a PutRecord request cannot be processed because of insufficient provisioned throughput on the shard involved in the request, PutRecord throws ProvisionedThroughputExceededException.

Data records are accessible for only 24 hours from the time that they are added to a stream.

" + "documentation":"

Writes a single data record into an Amazon Kinesis stream. Call PutRecord to send data into the stream for real-time ingestion and subsequent processing, one record at a time. Each shard can support writes up to 1,000 records per second, up to a maximum data write total of 1 MB per second.

You must specify the name of the stream that captures, stores, and transports the data; a partition key; and the data blob itself.

The data blob can be any type of data; for example, a segment from a log file, geographic/location data, website clickstream data, and so on.

The partition key is used by Kinesis Streams to distribute data across shards. Kinesis Streams segregates the data records that belong to a stream into multiple shards, using the partition key associated with each data record to determine the shard to which a given data record belongs.

Partition keys are Unicode strings, with a maximum length limit of 256 characters for each key. An MD5 hash function is used to map partition keys to 128-bit integer values and to map associated data records to shards using the hash key ranges of the shards. You can override hashing the partition key to determine the shard by explicitly specifying a hash value using the ExplicitHashKey parameter. For more information, see Adding Data to a Stream in the Amazon Kinesis Streams Developer Guide.

PutRecord returns the shard ID of where the data record was placed and the sequence number that was assigned to the data record.

Sequence numbers increase over time and are specific to a shard within a stream, not across all shards within a stream. To guarantee strictly increasing ordering, write serially to a shard and use the SequenceNumberForOrdering parameter. For more information, see Adding Data to a Stream in the Amazon Kinesis Streams Developer Guide.

If a PutRecord request cannot be processed because of insufficient provisioned throughput on the shard involved in the request, PutRecord throws ProvisionedThroughputExceededException.

By default, data records are accessible for 24 hours from the time that they are added to a stream. You can use IncreaseStreamRetentionPeriod or DecreaseStreamRetentionPeriod to modify this retention period.
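A minimal PutRecord sketch with the AWS SDK for Java 2.x client; the stream name, partition key, and payload are placeholders, and SdkBytes is assumed as the 2.x binary payload type.

```java
import software.amazon.awssdk.core.SdkBytes;
import software.amazon.awssdk.services.kinesis.KinesisClient;
import software.amazon.awssdk.services.kinesis.model.PutRecordRequest;
import software.amazon.awssdk.services.kinesis.model.PutRecordResponse;

public class PutOneRecord {
    public static void main(String[] args) {
        KinesisClient kinesis = KinesisClient.create();

        // The partition key determines which shard receives the record.
        PutRecordResponse result = kinesis.putRecord(PutRecordRequest.builder()
                .streamName("example-stream")              // placeholder name
                .partitionKey("customer-4711")             // placeholder key
                .data(SdkBytes.fromUtf8String("{\"event\":\"click\"}"))
                .build());

        // The response echoes where the record landed and its assigned sequence number.
        System.out.println(result.shardId() + " / " + result.sequenceNumber());
    }
}
```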

" }, "PutRecords":{ "name":"PutRecords", @@ -241,9 +268,15 @@ "errors":[ {"shape":"ResourceNotFoundException"}, {"shape":"InvalidArgumentException"}, - {"shape":"ProvisionedThroughputExceededException"} + {"shape":"ProvisionedThroughputExceededException"}, + {"shape":"KMSDisabledException"}, + {"shape":"KMSInvalidStateException"}, + {"shape":"KMSAccessDeniedException"}, + {"shape":"KMSNotFoundException"}, + {"shape":"KMSOptInRequired"}, + {"shape":"KMSThrottlingException"} ], - "documentation":"

Writes multiple data records into an Amazon Kinesis stream in a single call (also referred to as a PutRecords request). Use this operation to send data into the stream for data ingestion and processing.

Each PutRecords request can support up to 500 records. Each record in the request can be as large as 1 MB, up to a limit of 5 MB for the entire request, including partition keys. Each shard can support writes up to 1,000 records per second, up to a maximum data write total of 1 MB per second.

You must specify the name of the stream that captures, stores, and transports the data; and an array of request Records, with each record in the array requiring a partition key and data blob. The record size limit applies to the total size of the partition key and data blob.

The data blob can be any type of data; for example, a segment from a log file, geographic/location data, website clickstream data, and so on.

The partition key is used by Amazon Kinesis as input to a hash function that maps the partition key and associated data to a specific shard. An MD5 hash function is used to map partition keys to 128-bit integer values and to map associated data records to shards. As a result of this hashing mechanism, all data records with the same partition key map to the same shard within the stream. For more information, see Adding Data to a Stream in the Amazon Kinesis Streams Developer Guide.

Each record in the Records array may include an optional parameter, ExplicitHashKey, which overrides the partition key to shard mapping. This parameter allows a data producer to determine explicitly the shard where the record is stored. For more information, see Adding Multiple Records with PutRecords in the Amazon Kinesis Streams Developer Guide.

The PutRecords response includes an array of response Records. Each record in the response array directly correlates with a record in the request array using natural ordering, from the top to the bottom of the request and response. The response Records array always includes the same number of records as the request array.

The response Records array includes both successfully and unsuccessfully processed records. Amazon Kinesis attempts to process all records in each PutRecords request. A single record failure does not stop the processing of subsequent records.

A successfully-processed record includes ShardId and SequenceNumber values. The ShardId parameter identifies the shard in the stream where the record is stored. The SequenceNumber parameter is an identifier assigned to the put record, unique to all records in the stream.

An unsuccessfully-processed record includes ErrorCode and ErrorMessage values. ErrorCode reflects the type of error and can be one of the following values: ProvisionedThroughputExceededException or InternalFailure. ErrorMessage provides more detailed information about the ProvisionedThroughputExceededException exception including the account ID, stream name, and shard ID of the record that was throttled. For more information about partially successful responses, see Adding Multiple Records with PutRecords in the Amazon Kinesis Streams Developer Guide.

By default, data records are accessible for only 24 hours from the time that they are added to an Amazon Kinesis stream. This retention period can be modified using the DecreaseStreamRetentionPeriod and IncreaseStreamRetentionPeriod operations.

" + "documentation":"

Writes multiple data records into a Kinesis stream in a single call (also referred to as a PutRecords request). Use this operation to send data into the stream for data ingestion and processing.

Each PutRecords request can support up to 500 records. Each record in the request can be as large as 1 MB, up to a limit of 5 MB for the entire request, including partition keys. Each shard can support writes up to 1,000 records per second, up to a maximum data write total of 1 MB per second.

You must specify the name of the stream that captures, stores, and transports the data; and an array of request Records, with each record in the array requiring a partition key and data blob. The record size limit applies to the total size of the partition key and data blob.

The data blob can be any type of data; for example, a segment from a log file, geographic/location data, website clickstream data, and so on.

The partition key is used by Kinesis Streams as input to a hash function that maps the partition key and associated data to a specific shard. An MD5 hash function is used to map partition keys to 128-bit integer values and to map associated data records to shards. As a result of this hashing mechanism, all data records with the same partition key map to the same shard within the stream. For more information, see Adding Data to a Stream in the Amazon Kinesis Streams Developer Guide.

Each record in the Records array may include an optional parameter, ExplicitHashKey, which overrides the partition key to shard mapping. This parameter allows a data producer to determine explicitly the shard where the record is stored. For more information, see Adding Multiple Records with PutRecords in the Amazon Kinesis Streams Developer Guide.

The PutRecords response includes an array of response Records. Each record in the response array directly correlates with a record in the request array using natural ordering, from the top to the bottom of the request and response. The response Records array always includes the same number of records as the request array.

The response Records array includes both successfully and unsuccessfully processed records. Amazon Kinesis attempts to process all records in each PutRecords request. A single record failure does not stop the processing of subsequent records.

A successfully processed record includes ShardId and SequenceNumber values. The ShardId parameter identifies the shard in the stream where the record is stored. The SequenceNumber parameter is an identifier assigned to the put record, unique to all records in the stream.

An unsuccessfully processed record includes ErrorCode and ErrorMessage values. ErrorCode reflects the type of error and can be one of the following values: ProvisionedThroughputExceededException or InternalFailure. ErrorMessage provides more detailed information about the ProvisionedThroughputExceededException exception including the account ID, stream name, and shard ID of the record that was throttled. For more information about partially successful responses, see Adding Multiple Records with PutRecords in the Amazon Kinesis Streams Developer Guide.

By default, data records are accessible for 24 hours from the time that they are added to a stream. You can use IncreaseStreamRetentionPeriod or DecreaseStreamRetentionPeriod to modify this retention period.
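A sketch of a PutRecords call that inspects the partially successful response, assuming the AWS SDK for Java 2.x client; names and payloads are placeholders.

```java
import software.amazon.awssdk.core.SdkBytes;
import software.amazon.awssdk.services.kinesis.KinesisClient;
import software.amazon.awssdk.services.kinesis.model.PutRecordsRequest;
import software.amazon.awssdk.services.kinesis.model.PutRecordsRequestEntry;
import software.amazon.awssdk.services.kinesis.model.PutRecordsResponse;
import software.amazon.awssdk.services.kinesis.model.PutRecordsResultEntry;

import java.util.ArrayList;
import java.util.List;

public class PutBatch {
    public static void main(String[] args) {
        KinesisClient kinesis = KinesisClient.create();

        List<PutRecordsRequestEntry> entries = new ArrayList<>();
        for (int i = 0; i < 100; i++) {
            entries.add(PutRecordsRequestEntry.builder()
                    .partitionKey("key-" + i)
                    .data(SdkBytes.fromUtf8String("payload-" + i))
                    .build());
        }

        PutRecordsResponse response = kinesis.putRecords(PutRecordsRequest.builder()
                .streamName("example-stream")   // placeholder name
                .records(entries)
                .build());

        // The response preserves request order; failed entries carry ErrorCode/ErrorMessage
        // and should be retried by the caller.
        if (response.failedRecordCount() > 0) {
            for (int i = 0; i < response.records().size(); i++) {
                PutRecordsResultEntry r = response.records().get(i);
                if (r.errorCode() != null) {
                    System.out.println("retry index " + i + ": " + r.errorCode() + " " + r.errorMessage());
                }
            }
        }
    }
}
```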

" }, "RemoveTagsFromStream":{ "name":"RemoveTagsFromStream", @@ -258,7 +291,7 @@ {"shape":"InvalidArgumentException"}, {"shape":"LimitExceededException"} ], - "documentation":"

Removes tags from the specified Amazon Kinesis stream. Removed tags are deleted and cannot be recovered after this operation successfully completes.

If you specify a tag that does not exist, it is ignored.

" + "documentation":"

Removes tags from the specified Kinesis stream. Removed tags are deleted and cannot be recovered after this operation successfully completes.

If you specify a tag that does not exist, it is ignored.

" }, "SplitShard":{ "name":"SplitShard", @@ -273,7 +306,43 @@ {"shape":"InvalidArgumentException"}, {"shape":"LimitExceededException"} ], - "documentation":"

Splits a shard into two new shards in the Amazon Kinesis stream to increase the stream's capacity to ingest and transport data. SplitShard is called when there is a need to increase the overall capacity of a stream because of an expected increase in the volume of data records being ingested.

You can also use SplitShard when a shard appears to be approaching its maximum utilization; for example, the producers sending data into the specific shard are suddenly sending more than previously anticipated. You can also call SplitShard to increase stream capacity, so that more Amazon Kinesis applications can simultaneously read data from the stream for real-time processing.

You must specify the shard to be split and the new hash key, which is the position in the shard where the shard gets split in two. In many cases, the new hash key might simply be the average of the beginning and ending hash key, but it can be any hash key value in the range being mapped into the shard. For more information about splitting shards, see Split a Shard in the Amazon Kinesis Streams Developer Guide.

You can use DescribeStream to determine the shard ID and hash key values for the ShardToSplit and NewStartingHashKey parameters that are specified in the SplitShard request.

SplitShard is an asynchronous operation. Upon receiving a SplitShard request, Amazon Kinesis immediately returns a response and sets the stream status to UPDATING. After the operation is completed, Amazon Kinesis sets the stream status to ACTIVE. Read and write operations continue to work while the stream is in the UPDATING state.

You can use DescribeStream to check the status of the stream, which is returned in StreamStatus. If the stream is in the ACTIVE state, you can call SplitShard. If a stream is in CREATING or UPDATING or DELETING states, DescribeStream returns a ResourceInUseException.

If the specified stream does not exist, DescribeStream returns a ResourceNotFoundException. If you try to create more shards than are authorized for your account, you receive a LimitExceededException.

For the default shard limit for an AWS account, see Streams Limits in the Amazon Kinesis Streams Developer Guide. If you need to increase this limit, contact AWS Support.

If you try to operate on too many streams simultaneously using CreateStream, DeleteStream, MergeShards, and/or SplitShard, you receive a LimitExceededException.

SplitShard has limit of 5 transactions per second per account.

" + "documentation":"

Splits a shard into two new shards in the Kinesis stream, to increase the stream's capacity to ingest and transport data. SplitShard is called when there is a need to increase the overall capacity of a stream because of an expected increase in the volume of data records being ingested.

You can also use SplitShard when a shard appears to be approaching its maximum utilization; for example, the producers sending data into the specific shard are suddenly sending more than previously anticipated. You can also call SplitShard to increase stream capacity, so that more Kinesis Streams applications can simultaneously read data from the stream for real-time processing.

You must specify the shard to be split and the new hash key, which is the position in the shard where the shard gets split in two. In many cases, the new hash key might be the average of the beginning and ending hash key, but it can be any hash key value in the range being mapped into the shard. For more information, see Split a Shard in the Amazon Kinesis Streams Developer Guide.

You can use DescribeStream to determine the shard ID and hash key values for the ShardToSplit and NewStartingHashKey parameters that are specified in the SplitShard request.

SplitShard is an asynchronous operation. Upon receiving a SplitShard request, Kinesis Streams immediately returns a response and sets the stream status to UPDATING. After the operation is completed, Kinesis Streams sets the stream status to ACTIVE. Read and write operations continue to work while the stream is in the UPDATING state.

You can use DescribeStream to check the status of the stream, which is returned in StreamStatus. If the stream is in the ACTIVE state, you can call SplitShard. If a stream is in CREATING or UPDATING or DELETING states, DescribeStream returns a ResourceInUseException.

If the specified stream does not exist, DescribeStream returns a ResourceNotFoundException. If you try to create more shards than are authorized for your account, you receive a LimitExceededException.

For the default shard limit for an AWS account, see Streams Limits in the Amazon Kinesis Streams Developer Guide. To increase this limit, contact AWS Support.

If you try to operate on too many streams simultaneously using CreateStream, DeleteStream, MergeShards, and/or SplitShard, you receive a LimitExceededException.

SplitShard has a limit of 5 transactions per second per account.
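A sketch of computing a midpoint NewStartingHashKey and splitting a shard, assuming the AWS SDK for Java 2.x client; picking the first shard of a placeholder stream is for illustration only.

```java
import software.amazon.awssdk.services.kinesis.KinesisClient;
import software.amazon.awssdk.services.kinesis.model.DescribeStreamRequest;
import software.amazon.awssdk.services.kinesis.model.Shard;
import software.amazon.awssdk.services.kinesis.model.SplitShardRequest;

import java.math.BigInteger;

public class SplitHotShard {
    public static void main(String[] args) {
        KinesisClient kinesis = KinesisClient.create();

        // Look up the shard to split (placeholder: first shard of the stream).
        Shard shard = kinesis.describeStream(DescribeStreamRequest.builder()
                        .streamName("example-stream")
                        .build())
                .streamDescription()
                .shards()
                .get(0);

        // A common choice for NewStartingHashKey is the midpoint of the shard's hash key range.
        BigInteger start = new BigInteger(shard.hashKeyRange().startingHashKey());
        BigInteger end = new BigInteger(shard.hashKeyRange().endingHashKey());
        String newStartingHashKey = start.add(end).divide(BigInteger.valueOf(2)).toString();

        kinesis.splitShard(SplitShardRequest.builder()
                .streamName("example-stream")
                .shardToSplit(shard.shardId())
                .newStartingHashKey(newStartingHashKey)
                .build());
    }
}
```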

" + }, + "StartStreamEncryption":{ + "name":"StartStreamEncryption", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"StartStreamEncryptionInput"}, + "errors":[ + {"shape":"InvalidArgumentException"}, + {"shape":"LimitExceededException"}, + {"shape":"ResourceInUseException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"KMSDisabledException"}, + {"shape":"KMSInvalidStateException"}, + {"shape":"KMSAccessDeniedException"}, + {"shape":"KMSNotFoundException"}, + {"shape":"KMSOptInRequired"}, + {"shape":"KMSThrottlingException"} + ], + "documentation":"

Enables or updates server-side encryption using an AWS KMS key for a specified stream.

Starting encryption is an asynchronous operation. Upon receiving the request, Kinesis Streams returns immediately and sets the status of the stream to UPDATING. After the update is complete, Kinesis Streams sets the status of the stream back to ACTIVE. Updating or applying encryption normally takes a few seconds to complete, but it can take minutes. You can continue to read and write data to your stream while its status is UPDATING. Once the status of the stream is ACTIVE, encryption begins for records written to the stream.

API Limits: You can successfully apply a new AWS KMS key for server-side encryption 25 times in a rolling 24-hour period.

Note: It can take up to five seconds after the stream is in an ACTIVE status before all records written to the stream are encrypted. After you enable encryption, you can verify that encryption is applied by inspecting the API response from PutRecord or PutRecords.
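A rough sketch of enabling encryption and waiting for the stream to return to ACTIVE, using the AWS SDK for Java 2.x; the stream name is a placeholder and the polling loop is an assumption about how a caller might wait:

```java
import software.amazon.awssdk.services.kinesis.KinesisClient;
import software.amazon.awssdk.services.kinesis.model.*;

public class StartEncryptionExample {
    public static void main(String[] args) throws InterruptedException {
        KinesisClient kinesis = KinesisClient.create();

        // Enable SSE with the Kinesis-owned master key; a customer-managed
        // CMK could be passed instead as a key ARN, key ID, or "alias/..." name.
        kinesis.startStreamEncryption(StartStreamEncryptionRequest.builder()
            .streamName("example-stream")
            .encryptionType(EncryptionType.KMS)
            .keyId("alias/aws/kinesis")
            .build());

        // The stream goes to UPDATING; poll until it is ACTIVE again.
        StreamStatus status;
        do {
            Thread.sleep(1000);
            status = kinesis.describeStream(DescribeStreamRequest.builder()
                    .streamName("example-stream").build())
                .streamDescription().streamStatus();
        } while (status != StreamStatus.ACTIVE);
    }
}
```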

" + }, + "StopStreamEncryption":{ + "name":"StopStreamEncryption", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"StopStreamEncryptionInput"}, + "errors":[ + {"shape":"InvalidArgumentException"}, + {"shape":"LimitExceededException"}, + {"shape":"ResourceInUseException"}, + {"shape":"ResourceNotFoundException"} + ], + "documentation":"

Disables server-side encryption for a specified stream.

Stopping encryption is an asynchronous operation. Upon receiving the request, Kinesis Streams returns immediately and sets the status of the stream to UPDATING. After the update is complete, Kinesis Streams sets the status of the stream back to ACTIVE. Stopping encryption normally takes a few seconds to complete, but it can take minutes. You can continue to read and write data to your stream while its status is UPDATING. Once the status of the stream is ACTIVE, records written to the stream are no longer encrypted by Kinesis Streams.

API Limits: You can successfully disable server-side encryption 25 times in a rolling 24-hour period.

Note: It can take up to five seconds after the stream is in an ACTIVE status before all records written to the stream are no longer subject to encryption. After you disable encryption, you can verify that encryption is not applied by inspecting the API response from PutRecord or PutRecords.
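A corresponding sketch for turning encryption back off with the AWS SDK for Java 2.x; the stream name and key alias are placeholders:

```java
import software.amazon.awssdk.services.kinesis.KinesisClient;
import software.amazon.awssdk.services.kinesis.model.*;

public class StopEncryptionExample {
    public static void main(String[] args) {
        KinesisClient kinesis = KinesisClient.create();

        // The same encryption type and key that were used to start encryption
        // are passed back when stopping it.
        kinesis.stopStreamEncryption(StopStreamEncryptionRequest.builder()
            .streamName("example-stream")
            .encryptionType(EncryptionType.KMS)
            .keyId("alias/aws/kinesis")
            .build());
    }
}
```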

" }, "UpdateShardCount":{ "name":"UpdateShardCount", @@ -289,7 +358,7 @@ {"shape":"ResourceInUseException"}, {"shape":"ResourceNotFoundException"} ], - "documentation":"

Updates the shard count of the specified stream to the specified number of shards.

Updating the shard count is an asynchronous operation. Upon receiving the request, Amazon Kinesis returns immediately and sets the status of the stream to UPDATING. After the update is complete, Amazon Kinesis sets the status of the stream back to ACTIVE. Depending on the size of the stream, the scaling action could take a few minutes to complete. You can continue to read and write data to your stream while its status is UPDATING.

To update the shard count, Amazon Kinesis performs splits and merges and individual shards. This can cause short-lived shards to be created, in addition to the final shards. We recommend that you double or halve the shard count, as this results in the fewest number of splits or merges.

This operation has a rate limit of twice per rolling 24 hour period. You cannot scale above double your current shard count, scale below half your current shard count, or exceed the shard limits for your account.

For the default limits for an AWS account, see Streams Limits in the Amazon Kinesis Streams Developer Guide. If you need to increase a limit, contact AWS Support.

" + "documentation":"

Updates the shard count of the specified stream to the specified number of shards.

Updating the shard count is an asynchronous operation. Upon receiving the request, Kinesis Streams returns immediately and sets the status of the stream to UPDATING. After the update is complete, Kinesis Streams sets the status of the stream back to ACTIVE. Depending on the size of the stream, the scaling action could take a few minutes to complete. You can continue to read and write data to your stream while its status is UPDATING.

To update the shard count, Kinesis Streams performs splits or merges on individual shards. This can cause short-lived shards to be created, in addition to the final shards. We recommend that you double or halve the shard count, as this results in the fewest number of splits or merges.

This operation has the following limits, which are per region per account unless otherwise noted. You cannot: scale more than twice per rolling 24-hour period per stream, scale up to more than double your current shard count for a stream, scale down below half your current shard count for a stream, or scale up to more than the shard limit for your account.

For the default limits for an AWS account, see Streams Limits in the Amazon Kinesis Streams Developer Guide. To increase a limit, contact AWS Support.
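A short sketch of a uniform scaling call with the AWS SDK for Java 2.x; the stream name and target count are placeholders chosen to stay within the doubling/halving guidance above:

```java
import software.amazon.awssdk.services.kinesis.KinesisClient;
import software.amazon.awssdk.services.kinesis.model.*;

public class UpdateShardCountExample {
    public static void main(String[] args) {
        KinesisClient kinesis = KinesisClient.create();

        // Doubling (or halving) the current shard count keeps the number of
        // intermediate splits and merges to a minimum.
        UpdateShardCountResponse response = kinesis.updateShardCount(
            UpdateShardCountRequest.builder()
                .streamName("example-stream")
                .targetShardCount(4)            // e.g. scaling from 2 to 4 shards
                .scalingType(ScalingType.UNIFORM_SCALING)
                .build());

        System.out.println("Scaling from " + response.currentShardCount()
            + " to " + response.targetShardCount() + " shards");
    }
}
```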

" } }, "shapes":{ @@ -321,7 +390,7 @@ "members":{ "StreamName":{ "shape":"StreamName", - "documentation":"

A name to identify the stream. The stream name is scoped to the AWS account used by the application that creates the stream. It is also scoped by region. That is, two streams in two different AWS accounts can have the same name, and two streams in the same AWS account but in two different regions can have the same name.

" + "documentation":"

A name to identify the stream. The stream name is scoped to the AWS account used by the application that creates the stream. It is also scoped by region. That is, two streams in two different AWS accounts can have the same name. Two streams in the same AWS account but in two different regions can also have the same name.

" }, "ShardCount":{ "shape":"PositiveIntegerObject", @@ -421,6 +490,26 @@ }, "documentation":"

Represents the output for DescribeStream.

" }, + "DescribeStreamSummaryInput":{ + "type":"structure", + "required":["StreamName"], + "members":{ + "StreamName":{ + "shape":"StreamName", + "documentation":"

The name of the stream to describe.

" + } + } + }, + "DescribeStreamSummaryOutput":{ + "type":"structure", + "required":["StreamDescriptionSummary"], + "members":{ + "StreamDescriptionSummary":{ + "shape":"StreamDescriptionSummary", + "documentation":"

A StreamDescriptionSummary containing information about the stream.

" + } + } + }, "DisableEnhancedMonitoringInput":{ "type":"structure", "required":[ @@ -430,7 +519,7 @@ "members":{ "StreamName":{ "shape":"StreamName", - "documentation":"

The name of the Amazon Kinesis stream for which to disable enhanced monitoring.

" + "documentation":"

The name of the Kinesis stream for which to disable enhanced monitoring.

" }, "ShardLevelMetrics":{ "shape":"MetricsNameList", @@ -457,6 +546,13 @@ }, "documentation":"

Represents the input for EnableEnhancedMonitoring.

" }, + "EncryptionType":{ + "type":"string", + "enum":[ + "NONE", + "KMS" + ] + }, "EnhancedMetrics":{ "type":"structure", "members":{ @@ -476,7 +572,7 @@ "members":{ "StreamName":{ "shape":"StreamName", - "documentation":"

The name of the Amazon Kinesis stream.

" + "documentation":"

The name of the Kinesis stream.

" }, "CurrentShardLevelMetrics":{ "shape":"MetricsNameList", @@ -532,11 +628,11 @@ }, "NextShardIterator":{ "shape":"ShardIterator", - "documentation":"

The next position in the shard from which to start sequentially reading data records. If set to null, the shard has been closed and the requested iterator will not return any more data.

" + "documentation":"

The next position in the shard from which to start sequentially reading data records. If set to null, the shard has been closed and the requested iterator does not return any more data.

" }, "MillisBehindLatest":{ "shape":"MillisBehindLatest", - "documentation":"

The number of milliseconds the GetRecords response is from the tip of the stream, indicating how far behind current time the consumer is. A value of zero indicates record processing is caught up, and there are no new records to process at this moment.

" + "documentation":"

The number of milliseconds the GetRecords response is from the tip of the stream, indicating how far behind current time the consumer is. A value of zero indicates that record processing is caught up, and there are no new records to process at this moment.

" } }, "documentation":"

Represents the output for GetRecords.
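To make the NextShardIterator and MillisBehindLatest semantics concrete, here is a minimal consumption loop with the AWS SDK for Java 2.x; the stream name, shard ID, limit, and sleep interval are illustrative assumptions:

```java
import software.amazon.awssdk.services.kinesis.KinesisClient;
import software.amazon.awssdk.services.kinesis.model.*;

public class ReadShardExample {
    public static void main(String[] args) throws InterruptedException {
        KinesisClient kinesis = KinesisClient.create();

        String shardIterator = kinesis.getShardIterator(GetShardIteratorRequest.builder()
                .streamName("example-stream")
                .shardId("shardId-000000000000")
                .shardIteratorType(ShardIteratorType.TRIM_HORIZON)
                .build())
            .shardIterator();

        // Keep reading until the shard is closed (nextShardIterator == null).
        while (shardIterator != null) {
            GetRecordsResponse batch = kinesis.getRecords(GetRecordsRequest.builder()
                .shardIterator(shardIterator)
                .limit(100)
                .build());

            batch.records().forEach(r ->
                System.out.println(r.sequenceNumber() + " " + r.partitionKey()));

            // Zero means the consumer has caught up with the tip of the stream.
            if (batch.millisBehindLatest() == 0) {
                Thread.sleep(1000);
            }
            shardIterator = batch.nextShardIterator();
        }
    }
}
```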

" @@ -555,11 +651,11 @@ }, "ShardId":{ "shape":"ShardId", - "documentation":"

The shard ID of the Amazon Kinesis shard to get the iterator for.

" + "documentation":"

The shard ID of the Kinesis Streams shard to get the iterator for.

" }, "ShardIteratorType":{ "shape":"ShardIteratorType", - "documentation":"

Determines how the shard iterator is used to start reading data records from the shard.

The following are the valid Amazon Kinesis shard iterator types:

" + "documentation":"

Determines how the shard iterator is used to start reading data records from the shard.

The following are the valid Amazon Kinesis shard iterator types: AT_SEQUENCE_NUMBER, AFTER_SEQUENCE_NUMBER, AT_TIMESTAMP, TRIM_HORIZON, and LATEST.

" }, "StartingSequenceNumber":{ "shape":"SequenceNumber", @@ -567,7 +663,7 @@ }, "Timestamp":{ "shape":"Timestamp", - "documentation":"

The timestamp of the data record from which to start reading. Used with shard iterator type AT_TIMESTAMP. A timestamp is the Unix epoch date with precision in milliseconds. For example, 2016-04-04T19:58:46.480-00:00 or 1459799926.480. If a record with this exact timestamp does not exist, the iterator returned is for the next (later) record. If the timestamp is older than the current trim horizon, the iterator returned is for the oldest untrimmed data record (TRIM_HORIZON).

" + "documentation":"

The time stamp of the data record from which to start reading. Used with shard iterator type AT_TIMESTAMP. A time stamp is the Unix epoch date with precision in milliseconds. For example, 2016-04-04T19:58:46.480-00:00 or 1459799926.480. If a record with this exact time stamp does not exist, the iterator returned is for the next (later) record. If the time stamp is older than the current trim horizon, the iterator returned is for the oldest untrimmed data record (TRIM_HORIZON).

" } }, "documentation":"

Represents the input for GetShardIterator.
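A brief sketch of requesting an AT_TIMESTAMP iterator with the AWS SDK for Java 2.x; the stream name, shard ID, and 15-minute look-back are placeholders:

```java
import java.time.Instant;
import software.amazon.awssdk.services.kinesis.KinesisClient;
import software.amazon.awssdk.services.kinesis.model.*;

public class TimestampIteratorExample {
    public static void main(String[] args) {
        KinesisClient kinesis = KinesisClient.create();

        // Start reading from records written in the last 15 minutes; if the
        // time stamp is older than the trim horizon, TRIM_HORIZON behavior applies.
        GetShardIteratorResponse iterator = kinesis.getShardIterator(
            GetShardIteratorRequest.builder()
                .streamName("example-stream")
                .shardId("shardId-000000000000")
                .shardIteratorType(ShardIteratorType.AT_TIMESTAMP)
                .timestamp(Instant.now().minusSeconds(900))
                .build());

        System.out.println(iterator.shardIterator());
    }
}
```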

" @@ -633,6 +729,77 @@ "documentation":"

A specified parameter exceeds its restrictions, is not supported, or can't be used. For more information, see the returned message.

", "exception":true }, + "KMSAccessDeniedException":{ + "type":"structure", + "members":{ + "message":{ + "shape":"ErrorMessage", + "documentation":"

A message that provides information about the error.

" + } + }, + "documentation":"

The ciphertext references a key that doesn't exist or that you don't have access to.

", + "exception":true + }, + "KMSDisabledException":{ + "type":"structure", + "members":{ + "message":{ + "shape":"ErrorMessage", + "documentation":"

A message that provides information about the error.

" + } + }, + "documentation":"

The request was rejected because the specified customer master key (CMK) isn't enabled.

", + "exception":true + }, + "KMSInvalidStateException":{ + "type":"structure", + "members":{ + "message":{ + "shape":"ErrorMessage", + "documentation":"

A message that provides information about the error.

" + } + }, + "documentation":"

The request was rejected because the state of the specified resource isn't valid for this request. For more information, see How Key State Affects Use of a Customer Master Key in the AWS Key Management Service Developer Guide.

", + "exception":true + }, + "KMSNotFoundException":{ + "type":"structure", + "members":{ + "message":{ + "shape":"ErrorMessage", + "documentation":"

A message that provides information about the error.

" + } + }, + "documentation":"

The request was rejected because the specified entity or resource can't be found.

", + "exception":true + }, + "KMSOptInRequired":{ + "type":"structure", + "members":{ + "message":{ + "shape":"ErrorMessage", + "documentation":"

A message that provides information about the error.

" + } + }, + "documentation":"

The AWS access key ID needs a subscription for the service.

", + "exception":true + }, + "KMSThrottlingException":{ + "type":"structure", + "members":{ + "message":{ + "shape":"ErrorMessage", + "documentation":"

A message that provides information about the error.

" + } + }, + "documentation":"

The request was denied due to request throttling. For more information about throttling, see Limits in the AWS Key Management Service Developer Guide.

", + "exception":true + }, + "KeyId":{ + "type":"string", + "max":2048, + "min":1 + }, "LimitExceededException":{ "type":"structure", "members":{ @@ -641,7 +808,7 @@ "documentation":"

A message that provides information about the error.

" } }, - "documentation":"

The requested resource exceeds the maximum number allowed, or the number of concurrent stream requests exceeds the maximum number allowed (5).

", + "documentation":"

The requested resource exceeds the maximum number allowed, or the number of concurrent stream requests exceeds the maximum number allowed.

", "exception":true }, "ListStreamsInput":{ @@ -816,7 +983,7 @@ }, "SequenceNumberForOrdering":{ "shape":"SequenceNumber", - "documentation":"

Guarantees strictly increasing sequence numbers, for puts from the same client and to the same partition key. Usage: set the SequenceNumberForOrdering of record n to the sequence number of record n-1 (as returned in the result when putting record n-1). If this parameter is not set, records will be coarsely ordered based on arrival time.

" + "documentation":"

Guarantees strictly increasing sequence numbers, for puts from the same client and to the same partition key. Usage: set the SequenceNumberForOrdering of record n to the sequence number of record n-1 (as returned in the result when putting record n-1). If this parameter is not set, records are coarsely ordered based on arrival time.

" } }, "documentation":"

Represents the input for PutRecord.
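A small sketch of chaining SequenceNumberForOrdering across successive puts with the AWS SDK for Java 2.x; the stream name, partition key, and payloads are placeholders:

```java
import software.amazon.awssdk.core.SdkBytes;
import software.amazon.awssdk.services.kinesis.KinesisClient;
import software.amazon.awssdk.services.kinesis.model.*;

public class OrderedPutExample {
    public static void main(String[] args) {
        KinesisClient kinesis = KinesisClient.create();
        String lastSequenceNumber = null;

        for (int i = 0; i < 3; i++) {
            PutRecordResponse response = kinesis.putRecord(PutRecordRequest.builder()
                .streamName("example-stream")
                .partitionKey("device-42")
                .data(SdkBytes.fromUtf8String("event-" + i))
                // Chain record n to the sequence number returned for record n-1
                // to get strictly increasing sequence numbers for this key.
                .sequenceNumberForOrdering(lastSequenceNumber)
                .build());
            lastSequenceNumber = response.sequenceNumber();
        }
    }
}
```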

" @@ -835,6 +1002,10 @@ "SequenceNumber":{ "shape":"SequenceNumber", "documentation":"

The sequence number identifier that was assigned to the put data record. The sequence number for the record is unique across all records in the stream. A sequence number is the identifier associated with every record put into the stream.

" + }, + "EncryptionType":{ + "shape":"EncryptionType", + "documentation":"

The encryption type to use on the record. This parameter can be one of the following values: NONE (the record is not encrypted) or KMS (the record is encrypted on the server side using an AWS KMS key).

" } }, "documentation":"

Represents the output for PutRecord.

" @@ -868,6 +1039,10 @@ "Records":{ "shape":"PutRecordsResultEntryList", "documentation":"

An array of successfully and unsuccessfully processed record results, correlated with the request by natural ordering. A record that is successfully added to a stream includes SequenceNumber and ShardId in the result. A record that fails to be added to a stream includes ErrorCode and ErrorMessage in the result.

" + }, + "EncryptionType":{ + "shape":"EncryptionType", + "documentation":"

The encryption type used on the records. This parameter can be one of the following values: NONE (the records are not encrypted) or KMS (the records are encrypted on the server side using an AWS KMS key).

" } }, "documentation":"

PutRecords results.

" @@ -938,7 +1113,7 @@ "members":{ "SequenceNumber":{ "shape":"SequenceNumber", - "documentation":"

The unique identifier of the record in the stream.

" + "documentation":"

The unique identifier of the record within its shard.

" }, "ApproximateArrivalTimestamp":{ "shape":"Timestamp", @@ -946,14 +1121,18 @@ }, "Data":{ "shape":"Data", - "documentation":"

The data blob. The data in the blob is both opaque and immutable to the Amazon Kinesis service, which does not inspect, interpret, or change the data in the blob in any way. When the data blob (the payload before base64-encoding) is added to the partition key size, the total size must not exceed the maximum record size (1 MB).

" + "documentation":"

The data blob. The data in the blob is both opaque and immutable to Kinesis Streams, which does not inspect, interpret, or change the data in the blob in any way. When the data blob (the payload before base64-encoding) is added to the partition key size, the total size must not exceed the maximum record size (1 MB).

" }, "PartitionKey":{ "shape":"PartitionKey", "documentation":"

Identifies which shard in the stream the data record is assigned to.

" + }, + "EncryptionType":{ + "shape":"EncryptionType", + "documentation":"

The encryption type used on the record. This parameter can be one of the following values: NONE (the record is not encrypted) or KMS (the record is encrypted on the server side using an AWS KMS key).

" } }, - "documentation":"

The unit of data of the Amazon Kinesis stream, which is composed of a sequence number, a partition key, and a data blob.

" + "documentation":"

The unit of data of the Kinesis stream, which is composed of a sequence number, a partition key, and a data blob.

" }, "RecordList":{ "type":"list", @@ -985,7 +1164,7 @@ "documentation":"

A message that provides information about the error.

" } }, - "documentation":"

The resource is not available for this operation. For successful operation, the resource needs to be in the ACTIVE state.

", + "documentation":"

The resource is not available for this operation. For successful operation, the resource must be in the ACTIVE state.

", "exception":true }, "ResourceNotFoundException":{ @@ -1051,7 +1230,7 @@ "documentation":"

The range of possible sequence numbers for the shard.

" } }, - "documentation":"

A uniquely identified group of data records in an Amazon Kinesis stream.

" + "documentation":"

A uniquely identified group of data records in a Kinesis stream.

" }, "ShardCountObject":{ "type":"integer", @@ -1106,6 +1285,50 @@ }, "documentation":"

Represents the input for SplitShard.

" }, + "StartStreamEncryptionInput":{ + "type":"structure", + "required":[ + "StreamName", + "EncryptionType", + "KeyId" + ], + "members":{ + "StreamName":{ + "shape":"StreamName", + "documentation":"

The name of the stream for which to start encrypting records.

" + }, + "EncryptionType":{ + "shape":"EncryptionType", + "documentation":"

The encryption type to use. The only valid value is KMS.

" + }, + "KeyId":{ + "shape":"KeyId", + "documentation":"

The GUID for the customer-managed KMS key to use for encryption. This value can be a globally unique identifier, a fully specified ARN to either an alias or a key, or an alias name prefixed by \"alias/\". You can also use a master key owned by Kinesis Streams by specifying the alias aws/kinesis.

" + } + } + }, + "StopStreamEncryptionInput":{ + "type":"structure", + "required":[ + "StreamName", + "EncryptionType", + "KeyId" + ], + "members":{ + "StreamName":{ + "shape":"StreamName", + "documentation":"

The name of the stream on which to stop encrypting records.

" + }, + "EncryptionType":{ + "shape":"EncryptionType", + "documentation":"

The encryption type. The only valid value is KMS.

" + }, + "KeyId":{ + "shape":"KeyId", + "documentation":"

The GUID for the customer-managed KMS key to use for encryption. This value can be a globally unique identifier, a fully specified ARN to either an alias or a key, or an alias name prefixed by \"alias/\". You can also use a master key owned by Kinesis Streams by specifying the alias aws/kinesis.

" + } + } + }, "StreamARN":{"type":"string"}, "StreamDescription":{ "type":"structure", @@ -1130,7 +1353,7 @@ }, "StreamStatus":{ "shape":"StreamStatus", - "documentation":"

The current status of the stream being described. The stream status is one of the following states:

" + "documentation":"

The current status of the stream being described. The stream status is one of the following states: CREATING, DELETING, ACTIVE, or UPDATING.

" }, "Shards":{ "shape":"ShardList", @@ -1151,10 +1374,69 @@ "EnhancedMonitoring":{ "shape":"EnhancedMonitoringList", "documentation":"

Represents the current enhanced monitoring settings of the stream.

" + }, + "EncryptionType":{ + "shape":"EncryptionType", + "documentation":"

The server-side encryption type used on the stream. This parameter can be one of the following values: NONE (no encryption) or KMS (server-side encryption using an AWS KMS key).

" + }, + "KeyId":{ + "shape":"KeyId", + "documentation":"

The GUID for the customer-managed KMS key to use for encryption. This value can be a globally unique identifier, a fully specified ARN to either an alias or a key, or an alias name prefixed by \"alias/\". You can also use a master key owned by Kinesis Streams by specifying the alias aws/kinesis.

" } }, "documentation":"

Represents the output for DescribeStream.

" }, + "StreamDescriptionSummary":{ + "type":"structure", + "required":[ + "StreamName", + "StreamARN", + "StreamStatus", + "RetentionPeriodHours", + "StreamCreationTimestamp", + "EnhancedMonitoring", + "OpenShardCount" + ], + "members":{ + "StreamName":{ + "shape":"StreamName", + "documentation":"

The name of the stream being described.

" + }, + "StreamARN":{ + "shape":"StreamARN", + "documentation":"

The Amazon Resource Name (ARN) for the stream being described.

" + }, + "StreamStatus":{ + "shape":"StreamStatus", + "documentation":"

The current status of the stream being described. The stream status is one of the following states: CREATING, DELETING, ACTIVE, or UPDATING.

" + }, + "RetentionPeriodHours":{ + "shape":"PositiveIntegerObject", + "documentation":"

The current retention period, in hours.

" + }, + "StreamCreationTimestamp":{ + "shape":"Timestamp", + "documentation":"

The approximate time that the stream was created.

" + }, + "EnhancedMonitoring":{ + "shape":"EnhancedMonitoringList", + "documentation":"

Represents the current enhanced monitoring settings of the stream.

" + }, + "EncryptionType":{ + "shape":"EncryptionType", + "documentation":"

The encryption type used. This value is one of the following: NONE (no encryption) or KMS (server-side encryption using an AWS KMS key).

" + }, + "KeyId":{ + "shape":"KeyId", + "documentation":"

The GUID for the customer-managed KMS key to use for encryption. This value can be a globally unique identifier, a fully specified ARN to either an alias or a key, or an alias name prefixed by \"alias/\". You can also use a master key owned by Kinesis Streams by specifying the alias aws/kinesis.

" + }, + "OpenShardCount":{ + "shape":"ShardCountObject", + "documentation":"

The number of open shards in the stream.

" + } + }, + "documentation":"

Represents the output for DescribeStreamSummary.
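A quick sketch of reading the summary with the AWS SDK for Java 2.x, assuming the DescribeStreamSummary operation that these input/output shapes support; the stream name is a placeholder:

```java
import software.amazon.awssdk.services.kinesis.KinesisClient;
import software.amazon.awssdk.services.kinesis.model.*;

public class StreamSummaryExample {
    public static void main(String[] args) {
        KinesisClient kinesis = KinesisClient.create();

        StreamDescriptionSummary summary = kinesis.describeStreamSummary(
                DescribeStreamSummaryRequest.builder().streamName("example-stream").build())
            .streamDescriptionSummary();

        // The summary avoids paging through shard lists when only counts
        // and status are needed.
        System.out.println(summary.streamStatus() + ", "
            + summary.openShardCount() + " open shards, encryption="
            + summary.encryptionType());
    }
}
```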

" + }, "StreamName":{ "type":"string", "max":128, diff --git a/services/kinesis/src/main/resources/codegen-resources/kinesisanalytics/customization.config b/services/kinesis/src/main/resources/codegen-resources/kinesisanalytics/customization.config index 82915694d60f..390560803fe3 100644 --- a/services/kinesis/src/main/resources/codegen-resources/kinesisanalytics/customization.config +++ b/services/kinesis/src/main/resources/codegen-resources/kinesisanalytics/customization.config @@ -2,5 +2,8 @@ "authPolicyActions" : { "skip": true }, - "skipSmokeTests": "true" + "skipSmokeTests": "true", + "blacklistedSimpleMethods" : [ + "discoverInputSchema" + ] } diff --git a/services/kinesis/src/main/resources/codegen-resources/kinesisanalytics/service-2.json b/services/kinesis/src/main/resources/codegen-resources/kinesisanalytics/service-2.json index 9863f3233f7e..573eedefcc21 100644 --- a/services/kinesis/src/main/resources/codegen-resources/kinesisanalytics/service-2.json +++ b/services/kinesis/src/main/resources/codegen-resources/kinesisanalytics/service-2.json @@ -27,7 +27,7 @@ {"shape":"InvalidArgumentException"}, {"shape":"ConcurrentModificationException"} ], - "documentation":"

Adds a CloudWatch log stream to monitor application configuration errors. For more information about using CloudWatch log streams with Amazon Kinesis Analytics applications, see Monitoring Configuration Errors.

" + "documentation":"

Adds a CloudWatch log stream to monitor application configuration errors. For more information about using CloudWatch log streams with Amazon Kinesis Analytics applications, see Working with Amazon CloudWatch Logs.

" }, "AddApplicationInput":{ "name":"AddApplicationInput", @@ -41,10 +41,27 @@ {"shape":"ResourceNotFoundException"}, {"shape":"ResourceInUseException"}, {"shape":"InvalidArgumentException"}, - {"shape":"ConcurrentModificationException"} + {"shape":"ConcurrentModificationException"}, + {"shape":"CodeValidationException"} ], "documentation":"

Adds a streaming source to your Amazon Kinesis application. For conceptual information, see Configuring Application Input.

You can add a streaming source either when you create an application or you can use this operation to add a streaming source after you create an application. For more information, see CreateApplication.

Any configuration update, including adding a streaming source using this operation, results in a new version of the application. You can use the DescribeApplication operation to find the current application version.

This operation requires permissions to perform the kinesisanalytics:AddApplicationInput action.

" }, + "AddApplicationInputProcessingConfiguration":{ + "name":"AddApplicationInputProcessingConfiguration", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"AddApplicationInputProcessingConfigurationRequest"}, + "output":{"shape":"AddApplicationInputProcessingConfigurationResponse"}, + "errors":[ + {"shape":"ResourceNotFoundException"}, + {"shape":"ResourceInUseException"}, + {"shape":"InvalidArgumentException"}, + {"shape":"ConcurrentModificationException"} + ], + "documentation":"

Adds an InputProcessingConfiguration to an application. An input processor preprocesses records on the input stream before the application's SQL code executes. Currently, the only input processor available is AWS Lambda.
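A hedged sketch of attaching a Lambda preprocessor with the AWS SDK for Java 2.x; the application name, input ID, version, and ARNs are placeholders, and the builder member names are assumed to follow the service model (ResourceARN, RoleARN):

```java
import software.amazon.awssdk.services.kinesisanalytics.KinesisAnalyticsClient;
import software.amazon.awssdk.services.kinesisanalytics.model.*;

public class AddPreprocessorExample {
    public static void main(String[] args) {
        KinesisAnalyticsClient analytics = KinesisAnalyticsClient.create();

        analytics.addApplicationInputProcessingConfiguration(
            AddApplicationInputProcessingConfigurationRequest.builder()
                .applicationName("example-app")
                .currentApplicationVersionId(1L)   // from DescribeApplication
                .inputId("1.1")                    // from DescribeApplication
                .inputProcessingConfiguration(InputProcessingConfiguration.builder()
                    .inputLambdaProcessor(InputLambdaProcessor.builder()
                        .resourceARN("arn:aws:lambda:us-east-1:123456789012:function:preprocess")
                        .roleARN("arn:aws:iam::123456789012:role/analytics-lambda-role")
                        .build())
                    .build())
                .build());
    }
}
```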

" + }, "AddApplicationOutput":{ "name":"AddApplicationOutput", "http":{ @@ -122,7 +139,23 @@ {"shape":"InvalidArgumentException"}, {"shape":"ConcurrentModificationException"} ], - "documentation":"

Deletes a CloudWatch log stream from an application. For more information about using CloudWatch log streams with Amazon Kinesis Analytics applications, see Monitoring Configuration Errors.

" + "documentation":"

Deletes a CloudWatch log stream from an application. For more information about using CloudWatch log streams with Amazon Kinesis Analytics applications, see Working with Amazon CloudWatch Logs.

" + }, + "DeleteApplicationInputProcessingConfiguration":{ + "name":"DeleteApplicationInputProcessingConfiguration", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"DeleteApplicationInputProcessingConfigurationRequest"}, + "output":{"shape":"DeleteApplicationInputProcessingConfigurationResponse"}, + "errors":[ + {"shape":"ResourceNotFoundException"}, + {"shape":"ResourceInUseException"}, + {"shape":"InvalidArgumentException"}, + {"shape":"ConcurrentModificationException"} + ], + "documentation":"

Deletes an InputProcessingConfiguration from an input.

" }, "DeleteApplicationOutput":{ "name":"DeleteApplicationOutput", @@ -180,7 +213,8 @@ "errors":[ {"shape":"InvalidArgumentException"}, {"shape":"UnableToDetectSchemaException"}, - {"shape":"ResourceProvisionedThroughputExceededException"} + {"shape":"ResourceProvisionedThroughputExceededException"}, + {"shape":"ServiceUnavailableException"} ], "documentation":"

Infers a schema by evaluating sample records on the specified streaming source (Amazon Kinesis stream or Amazon Kinesis Firehose delivery stream). In the response, the operation returns the inferred schema and also the sample records that the operation used to infer the schema.

You can use the inferred schema when configuring a streaming source for your application. For conceptual information, see Configuring Application Input. Note that when you create an application using the Amazon Kinesis Analytics console, the console uses this operation to infer a schema and show it in the console user interface.

This operation requires permissions to perform the kinesisanalytics:DiscoverInputSchema action.

" }, @@ -253,15 +287,15 @@ "members":{ "ApplicationName":{ "shape":"ApplicationName", - "documentation":"

The Amazon Kinesis Analytics application name.

" + "documentation":"

The Kinesis Analytics application name.

" }, "CurrentApplicationVersionId":{ "shape":"ApplicationVersionId", - "documentation":"

The version ID of the Amazon Kinesis Analytics application.

" + "documentation":"

The version ID of the Kinesis Analytics application.

" }, "CloudWatchLoggingOption":{ "shape":"CloudWatchLoggingOption", - "documentation":"

Provide the CloudWatch log stream ARN and the IAM role ARN. Note: To write application messages to CloudWatch, the IAM role used must have the PutLogEvents policy action enabled.

" + "documentation":"

Provides the CloudWatch log stream Amazon Resource Name (ARN) and the IAM role ARN. Note: To write application messages to CloudWatch, the IAM role that is used must have the PutLogEvents policy action enabled.

" } } }, @@ -270,6 +304,38 @@ "members":{ } }, + "AddApplicationInputProcessingConfigurationRequest":{ + "type":"structure", + "required":[ + "ApplicationName", + "CurrentApplicationVersionId", + "InputId", + "InputProcessingConfiguration" + ], + "members":{ + "ApplicationName":{ + "shape":"ApplicationName", + "documentation":"

Name of the application to which you want to add the input processing configuration.

" + }, + "CurrentApplicationVersionId":{ + "shape":"ApplicationVersionId", + "documentation":"

Version of the application to which you want to add the input processing configuration. You can use the DescribeApplication operation to get the current application version. If the version specified is not the current version, the ConcurrentModificationException is returned.

" + }, + "InputId":{ + "shape":"Id", + "documentation":"

The ID of the input configuration to which to add the input processing configuration. You can get a list of the input IDs for an application by using the DescribeApplication operation.

" + }, + "InputProcessingConfiguration":{ + "shape":"InputProcessingConfiguration", + "documentation":"

The InputProcessingConfiguration to add to the application.

" + } + } + }, + "AddApplicationInputProcessingConfigurationResponse":{ + "type":"structure", + "members":{ + } + }, "AddApplicationInputRequest":{ "type":"structure", "required":[ @@ -288,7 +354,7 @@ }, "Input":{ "shape":"Input", - "documentation":"

" + "documentation":"

The Input to add.

" } }, "documentation":"

" @@ -414,7 +480,7 @@ }, "CloudWatchLoggingOptionDescriptions":{ "shape":"CloudWatchLoggingOptionDescriptions", - "documentation":"

Describes the CloudWatch log streams configured to receive application messages. For more information about using CloudWatch log streams with Amazon Kinesis Analytics applications, see Monitoring Configuration Errors.

" + "documentation":"

Describes the CloudWatch log streams that are configured to receive application messages. For more information about using CloudWatch log streams with Amazon Kinesis Analytics applications, see Working with Amazon CloudWatch Logs.

" }, "ApplicationCode":{ "shape":"ApplicationCode", @@ -540,10 +606,10 @@ }, "RoleARN":{ "shape":"RoleARN", - "documentation":"

IAM ARN of the role to use to send application messages. Note: To write application messages to CloudWatch, the IAM role used must have the PutLogEvents policy action enabled.

" + "documentation":"

IAM ARN of the role to use to send application messages. Note: To write application messages to CloudWatch, the IAM role that is used must have the PutLogEvents policy action enabled.

" } }, - "documentation":"

Provides a description of CloudWatch logging options, including the log stream ARN and the role ARN.

" + "documentation":"

Provides a description of CloudWatch logging options, including the log stream Amazon Resource Name (ARN) and the role ARN.

" }, "CloudWatchLoggingOptionDescription":{ "type":"structure", @@ -642,7 +708,7 @@ }, "CloudWatchLoggingOptions":{ "shape":"CloudWatchLoggingOptions", - "documentation":"

Use this parameter to configure a CloudWatch log stream to monitor application configuration errors. For more information, see Monitoring Configuration Errors.

" + "documentation":"

Use this parameter to configure a CloudWatch log stream to monitor application configuration errors. For more information, see Working with Amazon CloudWatch Logs.

" }, "ApplicationCode":{ "shape":"ApplicationCode", @@ -672,11 +738,11 @@ "members":{ "ApplicationName":{ "shape":"ApplicationName", - "documentation":"

The Amazon Kinesis Analytics application name.

" + "documentation":"

The Kinesis Analytics application name.

" }, "CurrentApplicationVersionId":{ "shape":"ApplicationVersionId", - "documentation":"

The version ID of the Amazon Kinesis Analytics application.

" + "documentation":"

The version ID of the Kinesis Analytics application.

" }, "CloudWatchLoggingOptionId":{ "shape":"Id", @@ -689,6 +755,33 @@ "members":{ } }, + "DeleteApplicationInputProcessingConfigurationRequest":{ + "type":"structure", + "required":[ + "ApplicationName", + "CurrentApplicationVersionId", + "InputId" + ], + "members":{ + "ApplicationName":{ + "shape":"ApplicationName", + "documentation":"

The Kinesis Analytics application name.

" + }, + "CurrentApplicationVersionId":{ + "shape":"ApplicationVersionId", + "documentation":"

The version ID of the Kinesis Analytics application.

" + }, + "InputId":{ + "shape":"Id", + "documentation":"

The ID of the input configuration from which to delete the input processing configuration. You can get a list of the input IDs for an application by using the DescribeApplication operation.

" + } + } + }, + "DeleteApplicationInputProcessingConfigurationResponse":{ + "type":"structure", + "members":{ + } + }, "DeleteApplicationOutputRequest":{ "type":"structure", "required":[ @@ -803,11 +896,6 @@ }, "DiscoverInputSchemaRequest":{ "type":"structure", - "required":[ - "ResourceARN", - "RoleARN", - "InputStartingPositionConfiguration" - ], "members":{ "ResourceARN":{ "shape":"ResourceARN", @@ -820,9 +908,13 @@ "InputStartingPositionConfiguration":{ "shape":"InputStartingPositionConfiguration", "documentation":"

The point at which you want Amazon Kinesis Analytics to start reading records from the specified streaming source for discovery purposes.

" + }, + "S3Configuration":{"shape":"S3Configuration"}, + "InputProcessingConfiguration":{ + "shape":"InputProcessingConfiguration", + "documentation":"

The InputProcessingConfiguration to use to preprocess the records before discovering the schema of the records.

" } - }, - "documentation":"

" + } }, "DiscoverInputSchemaResponse":{ "type":"structure", @@ -835,6 +927,10 @@ "shape":"ParsedInputRecords", "documentation":"

An array of elements, where each element corresponds to a row in a stream record (a stream record can have more than one row).

" }, + "ProcessedInputRecords":{ + "shape":"ProcessedInputRecords", + "documentation":"

Stream data that was modified by the processor specified in the InputProcessingConfiguration parameter.

" + }, "RawInputRecords":{ "shape":"RawInputRecords", "documentation":"

Raw stream data that was sampled to infer the schema.

" @@ -843,7 +939,11 @@ "documentation":"

" }, "ErrorMessage":{"type":"string"}, - "FileKey":{"type":"string"}, + "FileKey":{ + "type":"string", + "max":1024, + "min":1 + }, "Id":{ "type":"string", "max":50, @@ -877,13 +977,17 @@ "shape":"InAppStreamName", "documentation":"

Name prefix to use when creating an in-application stream. Suppose you specify a prefix \"MyInApplicationStream\". Amazon Kinesis Analytics then creates one or more (as per the InputParallelism count you specified) in-application streams with the names \"MyInApplicationStream_001\", \"MyInApplicationStream_002\", and so on.

" }, + "InputProcessingConfiguration":{ + "shape":"InputProcessingConfiguration", + "documentation":"

The InputProcessingConfiguration for the Input. An input processor transforms records as they are received from the stream, before the application's SQL code executes. Currently, the only input processing configuration available is InputLambdaProcessor.

" + }, "KinesisStreamsInput":{ "shape":"KinesisStreamsInput", - "documentation":"

If the streaming source is an Amazon Kinesis stream, identifies the stream's Amazon Resource Name (ARN) and an IAM role that enables Amazon Kinesis Analytics to access the stream on your behalf.

" + "documentation":"

If the streaming source is an Amazon Kinesis stream, identifies the stream's Amazon Resource Name (ARN) and an IAM role that enables Amazon Kinesis Analytics to access the stream on your behalf.

Note: Either KinesisStreamsInput or KinesisFirehoseInput is required.

" }, "KinesisFirehoseInput":{ "shape":"KinesisFirehoseInput", - "documentation":"

If the streaming source is an Amazon Kinesis Firehose delivery stream, identifies the Firehose delivery stream's ARN and an IAM role that enables Amazon Kinesis Analytics to access the stream on your behalf.

" + "documentation":"

If the streaming source is an Amazon Kinesis Firehose delivery stream, identifies the Firehose delivery stream's ARN and an IAM role that enables Amazon Kinesis Analytics to access the stream on your behalf.

Note: Either KinesisStreamsInput or KinesisFirehoseInput is required.

" }, "InputParallelism":{ "shape":"InputParallelism", @@ -933,6 +1037,10 @@ "shape":"InAppStreamNames", "documentation":"

Returns the in-application stream names that are mapped to the stream source.

" }, + "InputProcessingConfigurationDescription":{ + "shape":"InputProcessingConfigurationDescription", + "documentation":"

The description of the preprocessor that executes on records in this input before the application's code is run.

" + }, "KinesisStreamsInputDescription":{ "shape":"KinesisStreamsInputDescription", "documentation":"

If an Amazon Kinesis stream is configured as streaming source, provides Amazon Kinesis stream's ARN and an IAM role that enables Amazon Kinesis Analytics to access the stream on your behalf.

" @@ -941,7 +1049,10 @@ "shape":"KinesisFirehoseInputDescription", "documentation":"

If an Amazon Kinesis Firehose delivery stream is configured as a streaming source, provides the Firehose delivery stream's Amazon Resource Name (ARN) and an IAM role that enables Amazon Kinesis Analytics to access the stream on your behalf.

" }, - "InputSchema":{"shape":"SourceSchema"}, + "InputSchema":{ + "shape":"SourceSchema", + "documentation":"

Describes the format of the data in the streaming source, and how each data element maps to corresponding columns in the in-application stream that is being created.

" + }, "InputParallelism":{ "shape":"InputParallelism", "documentation":"

Describes the configured parallelism (number of in-application streams mapped to the streaming source).

" @@ -957,6 +1068,52 @@ "type":"list", "member":{"shape":"InputDescription"} }, + "InputLambdaProcessor":{ + "type":"structure", + "required":[ + "ResourceARN", + "RoleARN" + ], + "members":{ + "ResourceARN":{ + "shape":"ResourceARN", + "documentation":"

The ARN of the AWS Lambda function that operates on records in the stream.

" + }, + "RoleARN":{ + "shape":"RoleARN", + "documentation":"

The ARN of the IAM role used to access the AWS Lambda function.

" + } + }, + "documentation":"

An object that contains the ARN of the AWS Lambda function that is used to preprocess records in the stream, and the ARN of the IAM role used to access the AWS Lambda function.

" + }, + "InputLambdaProcessorDescription":{ + "type":"structure", + "members":{ + "ResourceARN":{ + "shape":"ResourceARN", + "documentation":"

The ARN of the AWS Lambda function that is used to preprocess the records in the stream.

" + }, + "RoleARN":{ + "shape":"RoleARN", + "documentation":"

The ARN of the IAM role used to access the AWS Lambda function.

" + } + }, + "documentation":"

An object that contains the ARN of the AWS Lambda function that is used to preprocess records in the stream, and the ARN of the IAM role used to access the AWS Lambda function.

" + }, + "InputLambdaProcessorUpdate":{ + "type":"structure", + "members":{ + "ResourceARNUpdate":{ + "shape":"ResourceARN", + "documentation":"

The ARN of the new AWS Lambda function that is used to preprocess the records in the stream.

" + }, + "RoleARNUpdate":{ + "shape":"RoleARN", + "documentation":"

The ARN of the new IAM role used to access the AWS Lambda function.

" + } + }, + "documentation":"

Represents an update to the InputLambdaProcessor that is used to preprocess the records in the stream.

" + }, "InputParallelism":{ "type":"structure", "members":{ @@ -969,7 +1126,7 @@ }, "InputParallelismCount":{ "type":"integer", - "max":10, + "max":64, "min":1 }, "InputParallelismUpdate":{ @@ -982,6 +1139,38 @@ }, "documentation":"

Provides updates to the parallelism count.

" }, + "InputProcessingConfiguration":{ + "type":"structure", + "required":["InputLambdaProcessor"], + "members":{ + "InputLambdaProcessor":{ + "shape":"InputLambdaProcessor", + "documentation":"

The InputLambdaProcessor that is used to preprocess the records in the stream prior to being processed by your application code.

" + } + }, + "documentation":"

Provides a description of a processor that is used to preprocess the records in the stream prior to being processed by your application code. Currently, the only input processor available is AWS Lambda.

" + }, + "InputProcessingConfigurationDescription":{ + "type":"structure", + "members":{ + "InputLambdaProcessorDescription":{ + "shape":"InputLambdaProcessorDescription", + "documentation":"

Provides configuration information about the associated InputLambdaProcessorDescription.

" + } + }, + "documentation":"

Provides configuration information about an input processor. Currently, the only input processor available is AWS Lambda.

" + }, + "InputProcessingConfigurationUpdate":{ + "type":"structure", + "required":["InputLambdaProcessorUpdate"], + "members":{ + "InputLambdaProcessorUpdate":{ + "shape":"InputLambdaProcessorUpdate", + "documentation":"

Provides update information for an InputLambdaProcessor.

" + } + }, + "documentation":"

Describes updates to an InputProcessingConfiguration.

" + }, "InputSchemaUpdate":{ "type":"structure", "members":{ @@ -998,7 +1187,7 @@ "documentation":"

A list of RecordColumn objects. Each object describes the mapping of the streaming source element to the corresponding column in the in-application stream.

" } }, - "documentation":"

Describes updates for the application's input schema.

" + "documentation":"

Describes updates for the application's input schema.

" }, "InputStartingPosition":{ "type":"string", @@ -1030,6 +1219,10 @@ "shape":"InAppStreamName", "documentation":"

Name prefix for in-application streams that Amazon Kinesis Analytics creates for the specific streaming source.

" }, + "InputProcessingConfigurationUpdate":{ + "shape":"InputProcessingConfigurationUpdate", + "documentation":"

Describes updates for an input processing configuration.

" + }, "KinesisStreamsInputUpdate":{ "shape":"KinesisStreamsInputUpdate", "documentation":"

If an Amazon Kinesis stream is the streaming source to be updated, provides an updated stream ARN and IAM role ARN.

" @@ -1085,7 +1278,7 @@ "members":{ "RecordRowPath":{ "shape":"RecordRowPath", - "documentation":"

Path to the top-level parent that contains the records.

For example, consider the following JSON record:

In the RecordRowPath, \"$\" refers to the root and path \"$.vehicle.Model\" refers to the specific \"Model\" key in the JSON.

" + "documentation":"

Path to the top-level parent that contains the records.

" } }, "documentation":"

Provides additional mapping information when JSON is the record format on the streaming source.

" @@ -1436,6 +1629,11 @@ "type":"list", "member":{"shape":"ParsedInputRecord"} }, + "ProcessedInputRecord":{"type":"string"}, + "ProcessedInputRecords":{ + "type":"list", + "member":{"shape":"ProcessedInputRecord"} + }, "RawInputRecord":{"type":"string"}, "RawInputRecords":{ "type":"list", @@ -1470,9 +1668,12 @@ "RecordColumnMapping":{"type":"string"}, "RecordColumnName":{ "type":"string", - "pattern":"[a-zA-Z][a-zA-Z0-9_]+" + "pattern":"[a-zA-Z_][a-zA-Z0-9_]*" + }, + "RecordColumnSqlType":{ + "type":"string", + "min":1 }, - "RecordColumnSqlType":{"type":"string"}, "RecordColumns":{ "type":"list", "member":{"shape":"RecordColumn"}, @@ -1506,7 +1707,10 @@ "type":"string", "min":1 }, - "RecordRowPath":{"type":"string"}, + "RecordRowPath":{ + "type":"string", + "min":1 + }, "ReferenceDataSource":{ "type":"structure", "required":[ @@ -1579,7 +1783,7 @@ "type":"string", "max":2048, "min":1, - "pattern":"arn:[a-zA-Z0-9\\-]+:[a-zA-Z0-9\\-]+:[a-zA-Z0-9\\-]*:\\d{12}:[a-zA-Z_0-9+=,.@\\-_/:]+" + "pattern":"arn:.*" }, "ResourceInUseException":{ "type":"structure", @@ -1617,6 +1821,19 @@ "min":1, "pattern":"arn:aws:iam::\\d{12}:role/?[a-zA-Z_0-9+=,.@\\-_/]+" }, + "S3Configuration":{ + "type":"structure", + "required":[ + "RoleARN", + "BucketARN", + "FileKey" + ], + "members":{ + "RoleARN":{"shape":"RoleARN"}, + "BucketARN":{"shape":"BucketARN"}, + "FileKey":{"shape":"FileKey"} + } + }, "S3ReferenceDataSource":{ "type":"structure", "required":[ @@ -1681,6 +1898,15 @@ }, "documentation":"

Describes the S3 bucket name, object key name, and IAM role that Amazon Kinesis Analytics can assume to read the Amazon S3 object on your behalf and populate the in-application reference table.

" }, + "ServiceUnavailableException":{ + "type":"structure", + "members":{ + "message":{"shape":"ErrorMessage"} + }, + "documentation":"

The service is unavailable. Back off and retry the operation.

", + "exception":true, + "fault":true + }, "SourceSchema":{ "type":"structure", "required":[ @@ -1749,7 +1975,8 @@ "type":"structure", "members":{ "message":{"shape":"ErrorMessage"}, - "RawInputRecords":{"shape":"RawInputRecords"} + "RawInputRecords":{"shape":"RawInputRecords"}, + "ProcessedInputRecords":{"shape":"ProcessedInputRecords"} }, "documentation":"

The data format is not valid. Amazon Kinesis Analytics is not able to detect the schema for the given streaming source.

", "exception":true diff --git a/services/kinesis/src/main/resources/codegen-resources/kinesisfirehose/service-2.json b/services/kinesis/src/main/resources/codegen-resources/kinesisfirehose/service-2.json index 8ec8c84a777f..876957593f34 100644 --- a/services/kinesis/src/main/resources/codegen-resources/kinesisfirehose/service-2.json +++ b/services/kinesis/src/main/resources/codegen-resources/kinesisfirehose/service-2.json @@ -7,6 +7,7 @@ "protocol":"json", "serviceAbbreviation":"Firehose", "serviceFullName":"Amazon Kinesis Firehose", + "serviceId":"Firehose", "signatureVersion":"v4", "targetPrefix":"Firehose_20150804", "uid":"firehose-2015-08-04" @@ -25,7 +26,7 @@ {"shape":"LimitExceededException"}, {"shape":"ResourceInUseException"} ], - "documentation":"

Creates a delivery stream.

By default, you can create up to 20 delivery streams per region.

This is an asynchronous operation that immediately returns. The initial status of the delivery stream is CREATING. After the delivery stream is created, its status is ACTIVE and it now accepts data. Attempts to send data to a delivery stream that is not in the ACTIVE state cause an exception. To check the state of a delivery stream, use DescribeDeliveryStream.

A delivery stream is configured with a single destination: Amazon S3, Amazon Elasticsearch Service, or Amazon Redshift. You must specify only one of the following destination configuration parameters: ExtendedS3DestinationConfiguration, S3DestinationConfiguration, ElasticsearchDestinationConfiguration, or RedshiftDestinationConfiguration.

When you specify S3DestinationConfiguration, you can also provide the following optional values: BufferingHints, EncryptionConfiguration, and CompressionFormat. By default, if no BufferingHints value is provided, Firehose buffers data up to 5 MB or for 5 minutes, whichever condition is satisfied first. Note that BufferingHints is a hint, so there are some cases where the service cannot adhere to these conditions strictly; for example, record boundaries are such that the size is a little over or under the configured buffering size. By default, no encryption is performed. We strongly recommend that you enable encryption to ensure secure data storage in Amazon S3.

A few notes about Amazon Redshift as a destination:

Firehose assumes the IAM role that is configured as part of the destination. The role should allow the Firehose principal to assume the role, and the role should have permissions that allows the service to deliver the data. For more information, see Amazon S3 Bucket Access in the Amazon Kinesis Firehose Developer Guide.

" + "documentation":"

Creates a delivery stream.

By default, you can create up to 20 delivery streams per region.

This is an asynchronous operation that immediately returns. The initial status of the delivery stream is CREATING. After the delivery stream is created, its status is ACTIVE and it now accepts data. Attempts to send data to a delivery stream that is not in the ACTIVE state cause an exception. To check the state of a delivery stream, use DescribeDeliveryStream.

A Kinesis Firehose delivery stream can be configured to receive records directly from providers using PutRecord or PutRecordBatch, or it can be configured to use an existing Kinesis stream as its source. To specify a Kinesis stream as input, set the DeliveryStreamType parameter to KinesisStreamAsSource, and provide the Kinesis stream ARN and role ARN in the KinesisStreamSourceConfiguration parameter.

A delivery stream is configured with a single destination: Amazon S3, Amazon ES, or Amazon Redshift. You must specify only one of the following destination configuration parameters: ExtendedS3DestinationConfiguration, S3DestinationConfiguration, ElasticsearchDestinationConfiguration, or RedshiftDestinationConfiguration.

When you specify S3DestinationConfiguration, you can also provide the following optional values: BufferingHints, EncryptionConfiguration, and CompressionFormat. By default, if no BufferingHints value is provided, Kinesis Firehose buffers data up to 5 MB or for 5 minutes, whichever condition is satisfied first. Note that BufferingHints is a hint, so there are some cases where the service cannot adhere to these conditions strictly; for example, record boundaries are such that the size is a little over or under the configured buffering size. By default, no encryption is performed. We strongly recommend that you enable encryption to ensure secure data storage in Amazon S3.

A few notes about Amazon Redshift as a destination:

Kinesis Firehose assumes the IAM role that is configured as part of the destination. The role should allow the Kinesis Firehose principal to assume the role, and the role should have permissions that allow the service to deliver the data. For more information, see Amazon S3 Bucket Access in the Amazon Kinesis Firehose Developer Guide.
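A hedged sketch of creating a delivery stream that reads from an existing Kinesis stream, using the AWS SDK for Java 2.x; the stream names, bucket, and role ARNs are placeholders, and only the minimal S3 destination fields are shown:

```java
import software.amazon.awssdk.services.firehose.FirehoseClient;
import software.amazon.awssdk.services.firehose.model.*;

public class CreateDeliveryStreamExample {
    public static void main(String[] args) {
        FirehoseClient firehose = FirehoseClient.create();

        firehose.createDeliveryStream(CreateDeliveryStreamRequest.builder()
            .deliveryStreamName("example-delivery-stream")
            // Read from an existing Kinesis stream instead of direct PUTs.
            .deliveryStreamType(DeliveryStreamType.KINESIS_STREAM_AS_SOURCE)
            .kinesisStreamSourceConfiguration(KinesisStreamSourceConfiguration.builder()
                .kinesisStreamARN("arn:aws:kinesis:us-east-1:123456789012:stream/example-stream")
                .roleARN("arn:aws:iam::123456789012:role/firehose-source-role")
                .build())
            // Exactly one destination configuration; S3 shown here.
            .extendedS3DestinationConfiguration(ExtendedS3DestinationConfiguration.builder()
                .roleARN("arn:aws:iam::123456789012:role/firehose-delivery-role")
                .bucketARN("arn:aws:s3:::example-bucket")
                .build())
            .build());
    }
}
```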

" }, "DeleteDeliveryStream":{ "name":"DeleteDeliveryStream", @@ -77,7 +78,7 @@ {"shape":"InvalidArgumentException"}, {"shape":"ServiceUnavailableException"} ], - "documentation":"

Writes a single data record into an Amazon Kinesis Firehose delivery stream. To write multiple data records into a delivery stream, use PutRecordBatch. Applications using these operations are referred to as producers.

By default, each delivery stream can take in up to 2,000 transactions per second, 5,000 records per second, or 5 MB per second. Note that if you use PutRecord and PutRecordBatch, the limits are an aggregate across these two operations for each delivery stream. For more information about limits and how to request an increase, see Amazon Kinesis Firehose Limits.

You must specify the name of the delivery stream and the data record when using PutRecord. The data record consists of a data blob that can be up to 1,000 KB in size, and any kind of data, for example, a segment from a log file, geographic location data, web site clickstream data, etc.

Firehose buffers records before delivering them to the destination. To disambiguate the data blobs at the destination, a common solution is to use delimiters in the data, such as a newline (\\n) or some other character unique within the data. This allows the consumer application(s) to parse individual data items when reading the data from the destination.

The PutRecord operation returns a RecordId, which is a unique string assigned to each record. Producer applications can use this ID for purposes such as auditability and investigation.

If the PutRecord operation throws a ServiceUnavailableException, back off and retry. If the exception persists, it is possible that the throughput limits have been exceeded for the delivery stream.

Data records sent to Firehose are stored for 24 hours from the time they are added to a delivery stream as it attempts to send the records to the destination. If the destination is unreachable for more than 24 hours, the data is no longer available.

" + "documentation":"

Writes a single data record into an Amazon Kinesis Firehose delivery stream. To write multiple data records into a delivery stream, use PutRecordBatch. Applications using these operations are referred to as producers.

By default, each delivery stream can take in up to 2,000 transactions per second, 5,000 records per second, or 5 MB per second. Note that if you use PutRecord and PutRecordBatch, the limits are an aggregate across these two operations for each delivery stream. For more information about limits and how to request an increase, see Amazon Kinesis Firehose Limits.

You must specify the name of the delivery stream and the data record when using PutRecord. The data record consists of a data blob that can be up to 1,000 KB in size, and any kind of data, for example, a segment from a log file, geographic location data, website clickstream data, and so on.

Kinesis Firehose buffers records before delivering them to the destination. To disambiguate the data blobs at the destination, a common solution is to use delimiters in the data, such as a newline (\\n) or some other character unique within the data. This allows the consumer application to parse individual data items when reading the data from the destination.

The PutRecord operation returns a RecordId, which is a unique string assigned to each record. Producer applications can use this ID for purposes such as auditability and investigation.

If the PutRecord operation throws a ServiceUnavailableException, back off and retry. If the exception persists, it is possible that the throughput limits have been exceeded for the delivery stream.

Data records sent to Kinesis Firehose are stored for 24 hours from the time they are added to a delivery stream as it attempts to send the records to the destination. If the destination is unreachable for more than 24 hours, the data is no longer available.
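A minimal producer sketch with the AWS SDK for Java 2.x that appends the newline delimiter described above; the delivery stream name and payload are placeholders:

```java
import software.amazon.awssdk.core.SdkBytes;
import software.amazon.awssdk.services.firehose.FirehoseClient;
import software.amazon.awssdk.services.firehose.model.*;

public class FirehosePutRecordExample {
    public static void main(String[] args) {
        FirehoseClient firehose = FirehoseClient.create();

        // Append a newline so individual records can be told apart once they
        // are concatenated at the destination (for example, in an S3 object).
        PutRecordResponse response = firehose.putRecord(PutRecordRequest.builder()
            .deliveryStreamName("example-delivery-stream")
            .record(Record.builder()
                .data(SdkBytes.fromUtf8String("{\"event\":\"click\"}\n"))
                .build())
            .build());

        System.out.println("RecordId: " + response.recordId());
    }
}
```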

" }, "PutRecordBatch":{ "name":"PutRecordBatch", @@ -92,7 +93,7 @@ {"shape":"InvalidArgumentException"}, {"shape":"ServiceUnavailableException"} ], - "documentation":"

Writes multiple data records into a delivery stream in a single call, which can achieve higher throughput per producer than when writing single records. To write single data records into a delivery stream, use PutRecord. Applications using these operations are referred to as producers.

By default, each delivery stream can take in up to 2,000 transactions per second, 5,000 records per second, or 5 MB per second. Note that if you use PutRecord and PutRecordBatch, the limits are an aggregate across these two operations for each delivery stream. For more information about limits, see Amazon Kinesis Firehose Limits.

Each PutRecordBatch request supports up to 500 records. Each record in the request can be as large as 1,000 KB (before 64-bit encoding), up to a limit of 4 MB for the entire request. These limits cannot be changed.

You must specify the name of the delivery stream and the data record when using PutRecord. The data record consists of a data blob that can be up to 1,000 KB in size, and any kind of data, for example, a segment from a log file, geographic location data, web site clickstream data, and so on.

Firehose buffers records before delivering them to the destination. To disambiguate the data blobs at the destination, a common solution is to use delimiters in the data, such as a newline (\\n) or some other character unique within the data. This allows the consumer application(s) to parse individual data items when reading the data from the destination.

The PutRecordBatch response includes a count of failed records, FailedPutCount, and an array of responses, RequestResponses. Each entry in the RequestResponses array provides additional information about the processed record, and directly correlates with a record in the request array using the same ordering, from the top to the bottom. The response array always includes the same number of records as the request array. RequestResponses includes both successfully and unsuccessfully processed records. Firehose attempts to process all records in each PutRecordBatch request. A single record failure does not stop the processing of subsequent records.

A successfully processed record includes a RecordId value, which is unique for the record. An unsuccessfully processed record includes ErrorCode and ErrorMessage values. ErrorCode reflects the type of error, and is one of the following values: ServiceUnavailable or InternalFailure. ErrorMessage provides more detailed information about the error.

If there is an internal server error or a timeout, the write might have completed or it might have failed. If FailedPutCount is greater than 0, retry the request, resending only those records that might have failed processing. This minimizes the possible duplicate records and also reduces the total bytes sent (and corresponding charges). We recommend that you handle any duplicates at the destination.

If PutRecordBatch throws ServiceUnavailableException, back off and retry. If the exception persists, it is possible that the throughput limits have been exceeded for the delivery stream.

Data records sent to Firehose are stored for 24 hours from the time they are added to a delivery stream as it attempts to send the records to the destination. If the destination is unreachable for more than 24 hours, the data is no longer available.

" + "documentation":"

Writes multiple data records into a delivery stream in a single call, which can achieve higher throughput per producer than when writing single records. To write single data records into a delivery stream, use PutRecord. Applications using these operations are referred to as producers.

By default, each delivery stream can take in up to 2,000 transactions per second, 5,000 records per second, or 5 MB per second. If you use PutRecord and PutRecordBatch, the limits are an aggregate across these two operations for each delivery stream. For more information about limits, see Amazon Kinesis Firehose Limits.

Each PutRecordBatch request supports up to 500 records. Each record in the request can be as large as 1,000 KB (before base64 encoding), up to a limit of 4 MB for the entire request. These limits cannot be changed.

You must specify the name of the delivery stream and the data record when using PutRecord. The data record consists of a data blob that can be up to 1,000 KB in size, and any kind of data. For example, it could be a segment from a log file, geographic location data, website clickstream data, and so on.

Kinesis Firehose buffers records before delivering them to the destination. To disambiguate the data blobs at the destination, a common solution is to use delimiters in the data, such as a newline (\\n) or some other character unique within the data. This allows the consumer application to parse individual data items when reading the data from the destination.

The PutRecordBatch response includes a count of failed records, FailedPutCount, and an array of responses, RequestResponses. Each entry in the RequestResponses array provides additional information about the processed record. It directly correlates with a record in the request array using the same ordering, from the top to the bottom. The response array always includes the same number of records as the request array. RequestResponses includes both successfully and unsuccessfully processed records. Kinesis Firehose attempts to process all records in each PutRecordBatch request. A single record failure does not stop the processing of subsequent records.

A successfully processed record includes a RecordId value, which is unique for the record. An unsuccessfully processed record includes ErrorCode and ErrorMessage values. ErrorCode reflects the type of error, and is one of the following values: ServiceUnavailable or InternalFailure. ErrorMessage provides more detailed information about the error.

If there is an internal server error or a timeout, the write might have completed or it might have failed. If FailedPutCount is greater than 0, retry the request, resending only those records that might have failed processing. This minimizes the possible duplicate records and also reduces the total bytes sent (and corresponding charges). We recommend that you handle any duplicates at the destination.

If PutRecordBatch throws ServiceUnavailableException, back off and retry. If the exception persists, it is possible that the throughput limits have been exceeded for the delivery stream.

Data records sent to Kinesis Firehose are stored for 24 hours from the time they are added to a delivery stream as it attempts to send the records to the destination. If the destination is unreachable for more than 24 hours, the data is no longer available.
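
The retry guidance above can be illustrated with a short sketch, again assuming the AWS SDK for Java 2.x client generated from this model; the stream name and payloads are placeholders. Only the entries that report an error code are resent, which keeps duplicates and billed bytes to a minimum.

  import java.util.ArrayList;
  import java.util.List;
  import software.amazon.awssdk.core.SdkBytes;
  import software.amazon.awssdk.services.firehose.FirehoseClient;
  import software.amazon.awssdk.services.firehose.model.*;

  public class PutRecordBatchExample {
      public static void main(String[] args) {
          FirehoseClient firehose = FirehoseClient.create();
          List<Record> records = new ArrayList<>();
          for (int i = 0; i < 10; i++) {
              records.add(Record.builder()
                  .data(SdkBytes.fromUtf8String("log line " + i + "\n"))   // newline-delimited records
                  .build());
          }
          PutRecordBatchResponse response = firehose.putRecordBatch(PutRecordBatchRequest.builder()
              .deliveryStreamName("example-delivery-stream")               // placeholder stream name
              .records(records)
              .build());
          if (response.failedPutCount() > 0) {
              // RequestResponses preserves the request ordering, so index i maps back to records.get(i).
              List<PutRecordBatchResponseEntry> entries = response.requestResponses();
              List<Record> retries = new ArrayList<>();
              for (int i = 0; i < entries.size(); i++) {
                  if (entries.get(i).errorCode() != null) {
                      retries.add(records.get(i));
                  }
              }
              firehose.putRecordBatch(PutRecordBatchRequest.builder()
                  .deliveryStreamName("example-delivery-stream")
                  .records(retries)
                  .build());
          }
      }
  }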

" }, "UpdateDestination":{ "name":"UpdateDestination", @@ -108,7 +109,7 @@ {"shape":"ResourceNotFoundException"}, {"shape":"ConcurrentModificationException"} ], - "documentation":"

Updates the specified destination of the specified delivery stream.

You can use this operation to change the destination type (for example, to replace the Amazon S3 destination with Amazon Redshift) or change the parameters associated with a destination (for example, to change the bucket name of the Amazon S3 destination). The update might not occur immediately. The target delivery stream remains active while the configurations are updated, so data writes to the delivery stream can continue during this process. The updated configurations are usually effective within a few minutes.

Note that switching between Amazon ES and other services is not supported. For an Amazon ES destination, you can only update to another Amazon ES destination.

If the destination type is the same, Firehose merges the configuration parameters specified with the destination configuration that already exists on the delivery stream. If any of the parameters are not specified in the call, the existing values are retained. For example, in the Amazon S3 destination, if EncryptionConfiguration is not specified then the existing EncryptionConfiguration is maintained on the destination.

If the destination type is not the same, for example, changing the destination from Amazon S3 to Amazon Redshift, Firehose does not merge any parameters. In this case, all parameters must be specified.

Firehose uses CurrentDeliveryStreamVersionId to avoid race conditions and conflicting merges. This is a required field, and the service updates the configuration only if the existing configuration has a version ID that matches. After the update is applied successfully, the version ID is updated, and can be retrieved using DescribeDeliveryStream. You should use the new version ID to set CurrentDeliveryStreamVersionId in the next call.

" + "documentation":"

Updates the specified destination of the specified delivery stream.

You can use this operation to change the destination type (for example, to replace the Amazon S3 destination with Amazon Redshift) or change the parameters associated with a destination (for example, to change the bucket name of the Amazon S3 destination). The update might not occur immediately. The target delivery stream remains active while the configurations are updated, so data writes to the delivery stream can continue during this process. The updated configurations are usually effective within a few minutes.

Note that switching between Amazon ES and other services is not supported. For an Amazon ES destination, you can only update to another Amazon ES destination.

If the destination type is the same, Kinesis Firehose merges the configuration parameters specified with the destination configuration that already exists on the delivery stream. If any of the parameters are not specified in the call, the existing values are retained. For example, in the Amazon S3 destination, if EncryptionConfiguration is not specified, then the existing EncryptionConfiguration is maintained on the destination.

If the destination type is not the same, for example, changing the destination from Amazon S3 to Amazon Redshift, Kinesis Firehose does not merge any parameters. In this case, all parameters must be specified.

Kinesis Firehose uses CurrentDeliveryStreamVersionId to avoid race conditions and conflicting merges. This is a required field, and the service updates the configuration only if the existing configuration has a version ID that matches. After the update is applied successfully, the version ID is updated, and can be retrieved using DescribeDeliveryStream. Use the new version ID to set CurrentDeliveryStreamVersionId in the next call.
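
A minimal sketch of this read-modify-write flow, assuming the AWS SDK for Java 2.x client generated from this model and an extended S3 destination (ExtendedS3DestinationUpdate is assumed to be present in the model); the stream name and prefix are placeholders. Because the destination type stays the same, any parameters left unspecified in the update are merged from the existing configuration.

  import software.amazon.awssdk.services.firehose.FirehoseClient;
  import software.amazon.awssdk.services.firehose.model.*;

  public class UpdateDestinationExample {
      public static void main(String[] args) {
          FirehoseClient firehose = FirehoseClient.create();
          // Read the current version ID and destination ID; the update is conditional on the version ID.
          DeliveryStreamDescription description = firehose.describeDeliveryStream(
                  DescribeDeliveryStreamRequest.builder()
                      .deliveryStreamName("example-delivery-stream")   // placeholder stream name
                      .build())
              .deliveryStreamDescription();
          String versionId = description.versionId();
          String destinationId = description.destinations().get(0).destinationId();

          // Change only the S3 prefix; other S3 settings keep their existing values.
          firehose.updateDestination(UpdateDestinationRequest.builder()
              .deliveryStreamName("example-delivery-stream")
              .currentDeliveryStreamVersionId(versionId)
              .destinationId(destinationId)
              .extendedS3DestinationUpdate(ExtendedS3DestinationUpdate.builder()
                  .prefix("clickstream/")
                  .build())
              .build());
      }
  }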

" } }, "shapes":{ @@ -137,7 +138,7 @@ "documentation":"

Buffer incoming data for the specified period of time, in seconds, before delivering it to the destination. The default value is 300.

" } }, - "documentation":"

Describes hints for the buffering to perform before delivering data to the destination. Please note that these options are treated as hints, and therefore Firehose may choose to use different values when it is optimal.

" + "documentation":"

Describes hints for the buffering to perform before delivering data to the destination. These options are treated as hints, so Kinesis Firehose might choose to use different values when it is optimal.

" }, "CloudWatchLoggingOptions":{ "type":"structure", @@ -155,7 +156,7 @@ "documentation":"

The CloudWatch log stream name for logging. This value is required if CloudWatch logging is enabled.

" } }, - "documentation":"

Describes the CloudWatch logging options for your delivery stream.

" + "documentation":"

Describes the Amazon CloudWatch logging options for your delivery stream.

" }, "ClusterJDBCURL":{ "type":"string", @@ -196,7 +197,7 @@ }, "CopyOptions":{ "shape":"CopyOptions", - "documentation":"

Optional parameters to use with the Amazon Redshift COPY command. For more information, see the \"Optional Parameters\" section of Amazon Redshift COPY command. Some possible examples that would apply to Firehose are as follows:

delimiter '\\t' lzop; - fields are delimited with \"\\t\" (TAB character) and compressed using lzop.

delimiter '| - fields are delimited with \"|\" (this is the default delimiter).

delimiter '|' escape - the delimiter should be escaped.

fixedwidth 'venueid:3,venuename:25,venuecity:12,venuestate:2,venueseats:6' - fields are fixed width in the source, with each width specified after every column in the table.

JSON 's3://mybucket/jsonpaths.txt' - data is in JSON format, and the path specified is the format of the data.

For more examples, see Amazon Redshift COPY command examples.

" + "documentation":"

Optional parameters to use with the Amazon Redshift COPY command. For more information, see the \"Optional Parameters\" section of Amazon Redshift COPY command. Some possible examples that would apply to Kinesis Firehose are as follows:

delimiter '\\t' lzop; - fields are delimited with \"\\t\" (TAB character) and compressed using lzop.

delimiter '|' - fields are delimited with \"|\" (this is the default delimiter).

delimiter '|' escape - the delimiter should be escaped.

fixedwidth 'venueid:3,venuename:25,venuecity:12,venuestate:2,venueseats:6' - fields are fixed width in the source, with each width specified after every column in the table.

JSON 's3://mybucket/jsonpaths.txt' - data is in JSON format, and the path specified is the format of the data.

For more examples, see Amazon Redshift COPY command examples.

" } }, "documentation":"

Describes a COPY command for Amazon Redshift.
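
As a small illustration of how these options are supplied, the following sketch builds a CopyCommand with the AWS SDK for Java 2.x classes generated from this model; the table name and JSONPaths location are placeholders, and the copyOptions string is passed through verbatim to the Amazon Redshift COPY command.

  import software.amazon.awssdk.services.firehose.model.CopyCommand;

  public class CopyCommandExample {
      public static void main(String[] args) {
          CopyCommand copyCommand = CopyCommand.builder()
              .dataTableName("clickstream_events")                      // placeholder table name
              .copyOptions("JSON 's3://mybucket/jsonpaths.txt' gzip")   // forwarded to the COPY command
              .build();
          System.out.println(copyCommand);
      }
  }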

" @@ -208,7 +209,15 @@ "members":{ "DeliveryStreamName":{ "shape":"DeliveryStreamName", - "documentation":"

The name of the delivery stream. This name must be unique per AWS account in the same region. You can have multiple delivery streams with the same name if they are in different accounts or different regions.

" + "documentation":"

The name of the delivery stream. This name must be unique per AWS account in the same region. If the delivery streams are in different accounts or different regions, you can have multiple delivery streams with the same name.

" + }, + "DeliveryStreamType":{ + "shape":"DeliveryStreamType", + "documentation":"

The delivery stream type. This parameter can be one of the following values: DirectPut or KinesisStreamAsSource.

" + }, + "KinesisStreamSourceConfiguration":{ + "shape":"KinesisStreamSourceConfiguration", + "documentation":"

When a Kinesis stream is used as the source for the delivery stream, a KinesisStreamSourceConfiguration object containing the Kinesis stream ARN and the role ARN for the source stream.
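
A hedged sketch of creating such a delivery stream with the AWS SDK for Java 2.x client generated from this model; the ARNs and names are placeholders, and the builder method names follow the SDK's generated naming conventions.

  import software.amazon.awssdk.services.firehose.FirehoseClient;
  import software.amazon.awssdk.services.firehose.model.*;

  public class CreateStreamFromKinesisExample {
      public static void main(String[] args) {
          FirehoseClient firehose = FirehoseClient.create();
          firehose.createDeliveryStream(CreateDeliveryStreamRequest.builder()
              .deliveryStreamName("example-delivery-stream")                    // placeholder name
              .deliveryStreamType(DeliveryStreamType.KINESIS_STREAM_AS_SOURCE)  // read from an existing Kinesis stream
              .kinesisStreamSourceConfiguration(KinesisStreamSourceConfiguration.builder()
                  .kinesisStreamARN("arn:aws:kinesis:us-east-1:111122223333:stream/example-stream")
                  .roleARN("arn:aws:iam::111122223333:role/firehose-source-role")
                  .build())
              .s3DestinationConfiguration(S3DestinationConfiguration.builder()
                  .roleARN("arn:aws:iam::111122223333:role/firehose-delivery-role")
                  .bucketARN("arn:aws:s3:::example-bucket")
                  .build())
              .build());
      }
  }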

" }, "S3DestinationConfiguration":{ "shape":"S3DestinationConfiguration", @@ -226,6 +235,10 @@ "ElasticsearchDestinationConfiguration":{ "shape":"ElasticsearchDestinationConfiguration", "documentation":"

The destination in Amazon ES. You can specify only one destination.

" + }, + "SplunkDestinationConfiguration":{ + "shape":"SplunkDestinationConfiguration", + "documentation":"

The destination in Splunk. You can specify only one destination.

" } } }, @@ -263,6 +276,7 @@ "members":{ } }, + "DeliveryStartTimestamp":{"type":"timestamp"}, "DeliveryStreamARN":{ "type":"string", "max":512, @@ -275,6 +289,7 @@ "DeliveryStreamName", "DeliveryStreamARN", "DeliveryStreamStatus", + "DeliveryStreamType", "VersionId", "Destinations", "HasMoreDestinations" @@ -292,6 +307,10 @@ "shape":"DeliveryStreamStatus", "documentation":"

The status of the delivery stream.

" }, + "DeliveryStreamType":{ + "shape":"DeliveryStreamType", + "documentation":"

The delivery stream type. This can be one of the following values: DirectPut or KinesisStreamAsSource.

" + }, "VersionId":{ "shape":"DeliveryStreamVersionId", "documentation":"

Each time the destination is updated for a delivery stream, the version ID is changed, and the current version ID is required when updating the destination. This is so that the service knows it is applying the changes to the correct version of the delivery stream.

" @@ -304,6 +323,10 @@ "shape":"Timestamp", "documentation":"

The date and time that the delivery stream was last updated.

" }, + "Source":{ + "shape":"SourceDescription", + "documentation":"

If the DeliveryStreamType parameter is KinesisStreamAsSource, a SourceDescription object describing the source Kinesis stream.

" + }, "Destinations":{ "shape":"DestinationDescriptionList", "documentation":"

The destinations.

" @@ -333,6 +356,13 @@ "ACTIVE" ] }, + "DeliveryStreamType":{ + "type":"string", + "enum":[ + "DirectPut", + "KinesisStreamAsSource" + ] + }, "DeliveryStreamVersionId":{ "type":"string", "max":50, @@ -353,7 +383,7 @@ }, "ExclusiveStartDestinationId":{ "shape":"DestinationId", - "documentation":"

The ID of the destination to start returning the destination information. Currently Firehose supports one destination per delivery stream.

" + "documentation":"

The ID of the destination to start returning the destination information. Currently, Kinesis Firehose supports one destination per delivery stream.

" } } }, @@ -395,6 +425,10 @@ "ElasticsearchDestinationDescription":{ "shape":"ElasticsearchDestinationDescription", "documentation":"

The destination in Amazon ES.

" + }, + "SplunkDestinationDescription":{ + "shape":"SplunkDestinationDescription", + "documentation":"

The destination in Splunk.

" } }, "documentation":"

Describes the destination for a delivery stream.

" @@ -444,7 +478,7 @@ "members":{ "RoleARN":{ "shape":"RoleARN", - "documentation":"

The ARN of the IAM role to be assumed by Firehose for calling the Amazon ES Configuration API and for indexing documents. For more information, see Amazon S3 Bucket Access.

" + "documentation":"

The ARN of the IAM role to be assumed by Kinesis Firehose for calling the Amazon ES Configuration API and for indexing documents. For more information, see Amazon S3 Bucket Access.

" }, "DomainARN":{ "shape":"ElasticsearchDomainARN", @@ -460,7 +494,7 @@ }, "IndexRotationPeriod":{ "shape":"ElasticsearchIndexRotationPeriod", - "documentation":"

The Elasticsearch index rotation period. Index rotation appends a timestamp to the IndexName to facilitate expiration of old data. For more information, see Index Rotation for Amazon Elasticsearch Service Destination. The default value is OneDay.

" + "documentation":"

The Elasticsearch index rotation period. Index rotation appends a time stamp to the IndexName to facilitate the expiration of old data. For more information, see Index Rotation for Amazon Elasticsearch Service Destination. The default value is OneDay.

" }, "BufferingHints":{ "shape":"ElasticsearchBufferingHints", @@ -468,15 +502,15 @@ }, "RetryOptions":{ "shape":"ElasticsearchRetryOptions", - "documentation":"

The retry behavior in the event that Firehose is unable to deliver documents to Amazon ES. The default value is 300 (5 minutes).

" + "documentation":"

The retry behavior in case Kinesis Firehose is unable to deliver documents to Amazon ES. The default value is 300 (5 minutes).

" }, "S3BackupMode":{ "shape":"ElasticsearchS3BackupMode", - "documentation":"

Defines how documents should be delivered to Amazon S3. When set to FailedDocumentsOnly, Firehose writes any documents that could not be indexed to the configured Amazon S3 destination, with elasticsearch-failed/ appended to the key prefix. When set to AllDocuments, Firehose delivers all incoming records to Amazon S3, and also writes failed documents with elasticsearch-failed/ appended to the prefix. For more information, see Amazon S3 Backup for Amazon Elasticsearch Service Destination. Default value is FailedDocumentsOnly.

" + "documentation":"

Defines how documents should be delivered to Amazon S3. When set to FailedDocumentsOnly, Kinesis Firehose writes any documents that could not be indexed to the configured Amazon S3 destination, with elasticsearch-failed/ appended to the key prefix. When set to AllDocuments, Kinesis Firehose delivers all incoming records to Amazon S3, and also writes failed documents with elasticsearch-failed/ appended to the prefix. For more information, see Amazon S3 Backup for Amazon Elasticsearch Service Destination. Default value is FailedDocumentsOnly.

" }, "S3Configuration":{ "shape":"S3DestinationConfiguration", - "documentation":"

The configuration for the intermediate Amazon S3 location from which Amazon ES obtains data.

" + "documentation":"

The configuration for the backup Amazon S3 location.

" }, "ProcessingConfiguration":{ "shape":"ProcessingConfiguration", @@ -544,7 +578,7 @@ "members":{ "RoleARN":{ "shape":"RoleARN", - "documentation":"

The ARN of the IAM role to be assumed by Firehose for calling the Amazon ES Configuration API and for indexing documents. For more information, see Amazon S3 Bucket Access.

" + "documentation":"

The ARN of the IAM role to be assumed by Kinesis Firehose for calling the Amazon ES Configuration API and for indexing documents. For more information, see Amazon S3 Bucket Access.

" }, "DomainARN":{ "shape":"ElasticsearchDomainARN", @@ -560,7 +594,7 @@ }, "IndexRotationPeriod":{ "shape":"ElasticsearchIndexRotationPeriod", - "documentation":"

The Elasticsearch index rotation period. Index rotation appends a timestamp to IndexName to facilitate the expiration of old data. For more information, see Index Rotation for Amazon Elasticsearch Service Destination. Default value is OneDay.

" + "documentation":"

The Elasticsearch index rotation period. Index rotation appends a time stamp to IndexName to facilitate the expiration of old data. For more information, see Index Rotation for Amazon Elasticsearch Service Destination. Default value is OneDay.

" }, "BufferingHints":{ "shape":"ElasticsearchBufferingHints", @@ -568,7 +602,7 @@ }, "RetryOptions":{ "shape":"ElasticsearchRetryOptions", - "documentation":"

The retry behavior in the event that Firehose is unable to deliver documents to Amazon ES. Default value is 300 (5 minutes).

" + "documentation":"

The retry behavior in case Kinesis Firehose is unable to deliver documents to Amazon ES. The default value is 300 (5 minutes).

" }, "S3Update":{ "shape":"S3DestinationUpdate", @@ -616,10 +650,10 @@ "members":{ "DurationInSeconds":{ "shape":"ElasticsearchRetryDurationInSeconds", - "documentation":"

After an initial failure to deliver to Amazon ES, the total amount of time during which Firehose re-attempts delivery (including the first attempt). After this time has elapsed, the failed documents are written to Amazon S3. Default value is 300 seconds (5 minutes). A value of 0 (zero) results in no retries.

" + "documentation":"

After an initial failure to deliver to Amazon ES, the total amount of time during which Kinesis Firehose re-attempts delivery (including the first attempt). After this time has elapsed, the failed documents are written to Amazon S3. Default value is 300 seconds (5 minutes). A value of 0 (zero) results in no retries.

" } }, - "documentation":"

Configures retry behavior in the event that Firehose is unable to deliver documents to Amazon ES.

" + "documentation":"

Configures retry behavior in case Kinesis Firehose is unable to deliver documents to Amazon ES.

" }, "ElasticsearchS3BackupMode":{ "type":"string", @@ -638,7 +672,7 @@ "members":{ "NoEncryptionConfig":{ "shape":"NoEncryptionConfig", - "documentation":"

Specifically override existing encryption information to ensure no encryption is used.

" + "documentation":"

Specifically override existing encryption information to ensure that no encryption is used.

" }, "KMSEncryptionConfig":{ "shape":"KMSEncryptionConfig", @@ -666,7 +700,7 @@ }, "Prefix":{ "shape":"Prefix", - "documentation":"

The \"YYYY/MM/DD/HH\" time format prefix is automatically used for delivered S3 files. You can specify an extra prefix to be added in front of the time format prefix. Note that if the prefix ends with a slash, it appears as a folder in the S3 bucket. For more information, see Amazon S3 Object Name Format in the Amazon Kinesis Firehose Developer Guide.

" + "documentation":"

The \"YYYY/MM/DD/HH\" time format prefix is automatically used for delivered S3 files. You can specify an extra prefix to be added in front of the time format prefix. If the prefix ends with a slash, it appears as a folder in the S3 bucket. For more information, see Amazon S3 Object Name Format in the Amazon Kinesis Firehose Developer Guide.

" }, "BufferingHints":{ "shape":"BufferingHints", @@ -719,7 +753,7 @@ }, "Prefix":{ "shape":"Prefix", - "documentation":"

The \"YYYY/MM/DD/HH\" time format prefix is automatically used for delivered S3 files. You can specify an extra prefix to be added in front of the time format prefix. Note that if the prefix ends with a slash, it appears as a folder in the S3 bucket. For more information, see Amazon S3 Object Name Format in the Amazon Kinesis Firehose Developer Guide.

" + "documentation":"

The \"YYYY/MM/DD/HH\" time format prefix is automatically used for delivered S3 files. You can specify an extra prefix to be added in front of the time format prefix. If the prefix ends with a slash, it appears as a folder in the S3 bucket. For more information, see Amazon S3 Object Name Format in the Amazon Kinesis Firehose Developer Guide.

" }, "BufferingHints":{ "shape":"BufferingHints", @@ -765,7 +799,7 @@ }, "Prefix":{ "shape":"Prefix", - "documentation":"

The \"YYYY/MM/DD/HH\" time format prefix is automatically used for delivered S3 files. You can specify an extra prefix to be added in front of the time format prefix. Note that if the prefix ends with a slash, it appears as a folder in the S3 bucket. For more information, see Amazon S3 Object Name Format in the Amazon Kinesis Firehose Developer Guide.

" + "documentation":"

The \"YYYY/MM/DD/HH\" time format prefix is automatically used for delivered S3 files. You can specify an extra prefix to be added in front of the time format prefix. If the prefix ends with a slash, it appears as a folder in the S3 bucket. For more information, see Amazon S3 Object Name Format in the Amazon Kinesis Firehose Developer Guide.

" }, "BufferingHints":{ "shape":"BufferingHints", @@ -798,6 +832,20 @@ }, "documentation":"

Describes an update for a destination in Amazon S3.

" }, + "HECAcknowledgmentTimeoutInSeconds":{ + "type":"integer", + "max":600, + "min":180 + }, + "HECEndpoint":{"type":"string"}, + "HECEndpointType":{ + "type":"string", + "enum":[ + "Raw", + "Event" + ] + }, + "HECToken":{"type":"string"}, "IntervalInSeconds":{ "type":"integer", "max":900, @@ -811,7 +859,7 @@ "documentation":"

A message that provides information about the error.

" } }, - "documentation":"

The specified input parameter has an value that is not valid.

", + "documentation":"

The specified input parameter has a value that is not valid.

", "exception":true }, "KMSEncryptionConfig":{ @@ -825,6 +873,48 @@ }, "documentation":"

Describes an encryption key for a destination in Amazon S3.

" }, + "KinesisStreamARN":{ + "type":"string", + "max":512, + "min":1, + "pattern":"arn:.*" + }, + "KinesisStreamSourceConfiguration":{ + "type":"structure", + "required":[ + "KinesisStreamARN", + "RoleARN" + ], + "members":{ + "KinesisStreamARN":{ + "shape":"KinesisStreamARN", + "documentation":"

The ARN of the source Kinesis stream.

" + }, + "RoleARN":{ + "shape":"RoleARN", + "documentation":"

The ARN of the role that provides access to the source Kinesis stream.

" + } + }, + "documentation":"

The stream and role ARNs for a Kinesis stream used as the source for a delivery stream.

" + }, + "KinesisStreamSourceDescription":{ + "type":"structure", + "members":{ + "KinesisStreamARN":{ + "shape":"KinesisStreamARN", + "documentation":"

The ARN of the source Kinesis stream.

" + }, + "RoleARN":{ + "shape":"RoleARN", + "documentation":"

The ARN of the role used by the source Kinesis stream.

" + }, + "DeliveryStartTimestamp":{ + "shape":"DeliveryStartTimestamp", + "documentation":"

Kinesis Firehose starts retrieving records from the Kinesis stream starting with this time stamp.

" + } + }, + "documentation":"

Details about a Kinesis stream used as the source for a Kinesis Firehose delivery stream.

" + }, "LimitExceededException":{ "type":"structure", "members":{ @@ -841,7 +931,11 @@ "members":{ "Limit":{ "shape":"ListDeliveryStreamsInputLimit", - "documentation":"

The maximum number of delivery streams to list.

" + "documentation":"

The maximum number of delivery streams to list. The default value is 10.

" + }, + "DeliveryStreamType":{ + "shape":"DeliveryStreamType", + "documentation":"

The delivery stream type. This can be one of the following values: DirectPut or KinesisStreamAsSource.

This parameter is optional. If this parameter is omitted, delivery streams of all types are returned.
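
For example, under the same AWS SDK for Java 2.x assumptions as the sketches above, the type filter can be applied as follows (omit deliveryStreamType to list delivery streams of all types):

  import software.amazon.awssdk.services.firehose.FirehoseClient;
  import software.amazon.awssdk.services.firehose.model.DeliveryStreamType;
  import software.amazon.awssdk.services.firehose.model.ListDeliveryStreamsRequest;
  import software.amazon.awssdk.services.firehose.model.ListDeliveryStreamsResponse;

  public class ListDeliveryStreamsExample {
      public static void main(String[] args) {
          FirehoseClient firehose = FirehoseClient.create();
          // List only delivery streams that read from a Kinesis stream.
          ListDeliveryStreamsResponse response = firehose.listDeliveryStreams(ListDeliveryStreamsRequest.builder()
              .deliveryStreamType(DeliveryStreamType.KINESIS_STREAM_AS_SOURCE)
              .limit(25)
              .build());
          response.deliveryStreamNames().forEach(System.out::println);
      }
  }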

" }, "ExclusiveStartDeliveryStreamName":{ "shape":"DeliveryStreamName", @@ -946,7 +1040,10 @@ "type":"string", "enum":[ "LambdaArn", - "NumberOfRetries" + "NumberOfRetries", + "RoleArn", + "BufferSizeInMBs", + "BufferIntervalInSeconds" ] }, "ProcessorParameterValue":{ @@ -1097,7 +1194,7 @@ }, "RetryOptions":{ "shape":"RedshiftRetryOptions", - "documentation":"

The retry behavior in the event that Firehose is unable to deliver documents to Amazon Redshift. Default value is 3600 (60 minutes).

" + "documentation":"

The retry behavior in case Kinesis Firehose is unable to deliver documents to Amazon Redshift. Default value is 3600 (60 minutes).

" }, "S3Configuration":{ "shape":"S3DestinationConfiguration", @@ -1150,7 +1247,7 @@ }, "RetryOptions":{ "shape":"RedshiftRetryOptions", - "documentation":"

The retry behavior in the event that Firehose is unable to deliver documents to Amazon Redshift. Default value is 3600 (60 minutes).

" + "documentation":"

The retry behavior in case Kinesis Firehose is unable to deliver documents to Amazon Redshift. Default value is 3600 (60 minutes).

" }, "S3DestinationDescription":{ "shape":"S3DestinationDescription", @@ -1200,7 +1297,7 @@ }, "RetryOptions":{ "shape":"RedshiftRetryOptions", - "documentation":"

The retry behavior in the event that Firehose is unable to deliver documents to Amazon Redshift. Default value is 3600 (60 minutes).

" + "documentation":"

The retry behavior in case Kinesis Firehose is unable to deliver documents to Amazon Redshift. Default value is 3600 (60 minutes).

" }, "S3Update":{ "shape":"S3DestinationUpdate", @@ -1235,10 +1332,10 @@ "members":{ "DurationInSeconds":{ "shape":"RedshiftRetryDurationInSeconds", - "documentation":"

The length of time during which Firehose retries delivery after a failure, starting from the initial request and including the first attempt. The default value is 3600 seconds (60 minutes). Firehose does not retry if the value of DurationInSeconds is 0 (zero) or if the first delivery attempt takes longer than the current value.

" + "documentation":"

The length of time during which Kinesis Firehose retries delivery after a failure, starting from the initial request and including the first attempt. The default value is 3600 seconds (60 minutes). Kinesis Firehose does not retry if the value of DurationInSeconds is 0 (zero) or if the first delivery attempt takes longer than the current value.

" } }, - "documentation":"

Configures retry behavior in the event that Firehose is unable to deliver documents to Amazon Redshift.

" + "documentation":"

Configures retry behavior in case Kinesis Firehose is unable to deliver documents to Amazon Redshift.

" }, "RedshiftS3BackupMode":{ "type":"string", @@ -1299,7 +1396,7 @@ }, "Prefix":{ "shape":"Prefix", - "documentation":"

The \"YYYY/MM/DD/HH\" time format prefix is automatically used for delivered S3 files. You can specify an extra prefix to be added in front of the time format prefix. Note that if the prefix ends with a slash, it appears as a folder in the S3 bucket. For more information, see Amazon S3 Object Name Format in the Amazon Kinesis Firehose Developer Guide.

" + "documentation":"

The \"YYYY/MM/DD/HH\" time format prefix is automatically used for delivered S3 files. You can specify an extra prefix to be added in front of the time format prefix. If the prefix ends with a slash, it appears as a folder in the S3 bucket. For more information, see Amazon S3 Object Name Format in the Amazon Kinesis Firehose Developer Guide.

" }, "BufferingHints":{ "shape":"BufferingHints", @@ -1340,7 +1437,7 @@ }, "Prefix":{ "shape":"Prefix", - "documentation":"

The \"YYYY/MM/DD/HH\" time format prefix is automatically used for delivered S3 files. You can specify an extra prefix to be added in front of the time format prefix. Note that if the prefix ends with a slash, it appears as a folder in the S3 bucket. For more information, see Amazon S3 Object Name Format in the Amazon Kinesis Firehose Developer Guide.

" + "documentation":"

The \"YYYY/MM/DD/HH\" time format prefix is automatically used for delivered S3 files. You can specify an extra prefix to be added in front of the time format prefix. If the prefix ends with a slash, it appears as a folder in the S3 bucket. For more information, see Amazon S3 Object Name Format in the Amazon Kinesis Firehose Developer Guide.

" }, "BufferingHints":{ "shape":"BufferingHints", @@ -1374,7 +1471,7 @@ }, "Prefix":{ "shape":"Prefix", - "documentation":"

The \"YYYY/MM/DD/HH\" time format prefix is automatically used for delivered S3 files. You can specify an extra prefix to be added in front of the time format prefix. Note that if the prefix ends with a slash, it appears as a folder in the S3 bucket. For more information, see Amazon S3 Object Name Format in the Amazon Kinesis Firehose Developer Guide.

" + "documentation":"

The \"YYYY/MM/DD/HH\" time format prefix is automatically used for delivered S3 files. You can specify an extra prefix to be added in front of the time format prefix. If the prefix ends with a slash, it appears as a folder in the S3 bucket. For more information, see Amazon S3 Object Name Format in the Amazon Kinesis Firehose Developer Guide.

" }, "BufferingHints":{ "shape":"BufferingHints", @@ -1412,6 +1509,170 @@ "max":128, "min":1 }, + "SourceDescription":{ + "type":"structure", + "members":{ + "KinesisStreamSourceDescription":{ + "shape":"KinesisStreamSourceDescription", + "documentation":"

The KinesisStreamSourceDescription value for the source Kinesis stream.

" + } + }, + "documentation":"

Details about a Kinesis stream used as the source for a Kinesis Firehose delivery stream.

" + }, + "SplunkDestinationConfiguration":{ + "type":"structure", + "required":[ + "HECEndpoint", + "HECEndpointType", + "HECToken", + "S3Configuration" + ], + "members":{ + "HECEndpoint":{ + "shape":"HECEndpoint", + "documentation":"

The HTTP Event Collector (HEC) endpoint to which Kinesis Firehose sends your data.

" + }, + "HECEndpointType":{ + "shape":"HECEndpointType", + "documentation":"

This type can be either \"Raw\" or \"Event\".

" + }, + "HECToken":{ + "shape":"HECToken", + "documentation":"

This is a GUID you obtain from your Splunk cluster when you create a new HEC endpoint.

" + }, + "HECAcknowledgmentTimeoutInSeconds":{ + "shape":"HECAcknowledgmentTimeoutInSeconds", + "documentation":"

The amount of time that Kinesis Firehose waits to receive an acknowledgment from Splunk after it sends the data. At the end of the timeout period, Kinesis Firehose either tries to send the data again or considers it an error, based on your retry settings.

" + }, + "RetryOptions":{ + "shape":"SplunkRetryOptions", + "documentation":"

The retry behavior in case Kinesis Firehose is unable to deliver data to Splunk or if it doesn't receive an acknowledgment of receipt from Splunk.

" + }, + "S3BackupMode":{ + "shape":"SplunkS3BackupMode", + "documentation":"

Defines how documents should be delivered to Amazon S3. When set to FailedEventsOnly, Kinesis Firehose writes any data that could not be indexed to the configured Amazon S3 destination. When set to AllEvents, Kinesis Firehose delivers all incoming records to Amazon S3, and also writes failed documents to Amazon S3. The default value is FailedEventsOnly.

" + }, + "S3Configuration":{ + "shape":"S3DestinationConfiguration", + "documentation":"

The configuration for the backup Amazon S3 location.

" + }, + "ProcessingConfiguration":{ + "shape":"ProcessingConfiguration", + "documentation":"

The data processing configuration.

" + }, + "CloudWatchLoggingOptions":{ + "shape":"CloudWatchLoggingOptions", + "documentation":"

The CloudWatch logging options for your delivery stream.

" + } + }, + "documentation":"

Describes the configuration of a destination in Splunk.
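
A hedged sketch of assembling this configuration with the AWS SDK for Java 2.x classes generated from this model; the HEC endpoint, token, ARNs, and bucket are placeholders, and the lower-camel-case method names are an assumption based on the SDK's naming conventions for members such as HECEndpoint.

  import software.amazon.awssdk.services.firehose.model.*;

  public class SplunkConfigExample {
      public static void main(String[] args) {
          SplunkDestinationConfiguration splunkConfig = SplunkDestinationConfiguration.builder()
              .hecEndpoint("https://splunk.example.com:8088")          // placeholder HEC URL
              .hecEndpointType(HECEndpointType.RAW)
              .hecToken("00000000-0000-0000-0000-000000000000")        // placeholder HEC token GUID
              .hecAcknowledgmentTimeoutInSeconds(300)
              .retryOptions(SplunkRetryOptions.builder().durationInSeconds(300).build())
              .s3BackupMode(SplunkS3BackupMode.FAILED_EVENTS_ONLY)
              .s3Configuration(S3DestinationConfiguration.builder()
                  .roleARN("arn:aws:iam::111122223333:role/firehose-delivery-role")
                  .bucketARN("arn:aws:s3:::example-backup-bucket")
                  .build())
              .build();
          System.out.println(splunkConfig);
      }
  }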

" + }, + "SplunkDestinationDescription":{ + "type":"structure", + "members":{ + "HECEndpoint":{ + "shape":"HECEndpoint", + "documentation":"

The HTTP Event Collector (HEC) endpoint to which Kinesis Firehose sends your data.

" + }, + "HECEndpointType":{ + "shape":"HECEndpointType", + "documentation":"

This type can be either \"Raw\" or \"Event\".

" + }, + "HECToken":{ + "shape":"HECToken", + "documentation":"

This is a GUID you obtain from your Splunk cluster when you create a new HEC endpoint.

" + }, + "HECAcknowledgmentTimeoutInSeconds":{ + "shape":"HECAcknowledgmentTimeoutInSeconds", + "documentation":"

The amount of time that Kinesis Firehose waits to receive an acknowledgment from Splunk after it sends the data. At the end of the timeout period, Kinesis Firehose either tries to send the data again or considers it an error, based on your retry settings.

" + }, + "RetryOptions":{ + "shape":"SplunkRetryOptions", + "documentation":"

The retry behavior in case Kinesis Firehose is unable to deliver data to Splunk or if it doesn't receive an acknowledgment of receipt from Splunk.

" + }, + "S3BackupMode":{ + "shape":"SplunkS3BackupMode", + "documentation":"

Defines how documents should be delivered to Amazon S3. When set to FailedEventsOnly, Kinesis Firehose writes any data that could not be indexed to the configured Amazon S3 destination. When set to AllEvents, Kinesis Firehose delivers all incoming records to Amazon S3, and also writes failed documents to Amazon S3. The default value is FailedEventsOnly.

" + }, + "S3DestinationDescription":{ + "shape":"S3DestinationDescription", + "documentation":"

The Amazon S3 destination.

" + }, + "ProcessingConfiguration":{ + "shape":"ProcessingConfiguration", + "documentation":"

The data processing configuration.

" + }, + "CloudWatchLoggingOptions":{ + "shape":"CloudWatchLoggingOptions", + "documentation":"

The CloudWatch logging options for your delivery stream.

" + } + }, + "documentation":"

Describes a destination in Splunk.

" + }, + "SplunkDestinationUpdate":{ + "type":"structure", + "members":{ + "HECEndpoint":{ + "shape":"HECEndpoint", + "documentation":"

The HTTP Event Collector (HEC) endpoint to which Kinesis Firehose sends your data.

" + }, + "HECEndpointType":{ + "shape":"HECEndpointType", + "documentation":"

This type can be either \"Raw\" or \"Event\".

" + }, + "HECToken":{ + "shape":"HECToken", + "documentation":"

This is a GUID you obtain from your Splunk cluster when you create a new HEC endpoint.

" + }, + "HECAcknowledgmentTimeoutInSeconds":{ + "shape":"HECAcknowledgmentTimeoutInSeconds", + "documentation":"

The amount of time that Kinesis Firehose waits to receive an acknowledgment from Splunk after it sends the data. At the end of the timeout period, Kinesis Firehose either tries to send the data again or considers it an error, based on your retry settings.

" + }, + "RetryOptions":{ + "shape":"SplunkRetryOptions", + "documentation":"

The retry behavior in case Kinesis Firehose is unable to deliver data to Splunk or if it doesn't receive an acknowledgment of receipt from Splunk.

" + }, + "S3BackupMode":{ + "shape":"SplunkS3BackupMode", + "documentation":"

Defines how documents should be delivered to Amazon S3. When set to FailedEventsOnly, Kinesis Firehose writes any data that could not be indexed to the configured Amazon S3 destination. When set to AllEvents, Kinesis Firehose delivers all incoming records to Amazon S3, and also writes failed documents to Amazon S3. The default value is FailedEventsOnly.

" + }, + "S3Update":{ + "shape":"S3DestinationUpdate", + "documentation":"

Your update to the configuration of the backup Amazon S3 location.

" + }, + "ProcessingConfiguration":{ + "shape":"ProcessingConfiguration", + "documentation":"

The data processing configuration.

" + }, + "CloudWatchLoggingOptions":{ + "shape":"CloudWatchLoggingOptions", + "documentation":"

The CloudWatch logging options for your delivery stream.

" + } + }, + "documentation":"

Describes an update for a destination in Splunk.

" + }, + "SplunkRetryDurationInSeconds":{ + "type":"integer", + "max":7200, + "min":0 + }, + "SplunkRetryOptions":{ + "type":"structure", + "members":{ + "DurationInSeconds":{ + "shape":"SplunkRetryDurationInSeconds", + "documentation":"

The total amount of time that Kinesis Firehose spends on retries. This duration starts after the initial attempt to send data to Splunk fails and doesn't include the periods during which Kinesis Firehose waits for acknowledgment from Splunk after each attempt.

" + } + }, + "documentation":"

Configures retry behavior in case Kinesis Firehose is unable to deliver documents to Splunk or if it doesn't receive an acknowledgment from Splunk.

" + }, + "SplunkS3BackupMode":{ + "type":"string", + "enum":[ + "FailedEventsOnly", + "AllEvents" + ] + }, "Timestamp":{"type":"timestamp"}, "UpdateDestinationInput":{ "type":"structure", @@ -1427,7 +1688,7 @@ }, "CurrentDeliveryStreamVersionId":{ "shape":"DeliveryStreamVersionId", - "documentation":"

Obtain this value from the VersionId result of DeliveryStreamDescription. This value is required, and helps the service to perform conditional operations. For example, if there is a interleaving update and this value is null, then the update destination fails. After the update is successful, the VersionId value is updated. The service then performs a merge of the old configuration with the new configuration.

" + "documentation":"

Obtain this value from the VersionId result of DeliveryStreamDescription. This value is required, and helps the service to perform conditional operations. For example, if there is an interleaving update and this value is null, then the update destination fails. After the update is successful, the VersionId value is updated. The service then performs a merge of the old configuration with the new configuration.

" }, "DestinationId":{ "shape":"DestinationId", @@ -1449,6 +1710,10 @@ "ElasticsearchDestinationUpdate":{ "shape":"ElasticsearchDestinationUpdate", "documentation":"

Describes an update for a destination in Amazon ES.

" + }, + "SplunkDestinationUpdate":{ + "shape":"SplunkDestinationUpdate", + "documentation":"

Describes an update for a destination in Splunk.

" } } }, @@ -1463,5 +1728,5 @@ "sensitive":true } }, - "documentation":"Amazon Kinesis Firehose API Reference

Amazon Kinesis Firehose is a fully-managed service that delivers real-time streaming data to destinations such as Amazon Simple Storage Service (Amazon S3), Amazon Elasticsearch Service (Amazon ES), and Amazon Redshift.

" + "documentation":"Amazon Kinesis Firehose API Reference

Amazon Kinesis Firehose is a fully managed service that delivers real-time streaming data to destinations such as Amazon Simple Storage Service (Amazon S3), Amazon Elasticsearch Service (Amazon ES), and Amazon Redshift.

" } diff --git a/services/kms/src/main/resources/codegen-resources/examples-1.json b/services/kms/src/main/resources/codegen-resources/examples-1.json index 39ffbeec24d8..b0a17a5bec4c 100644 --- a/services/kms/src/main/resources/codegen-resources/examples-1.json +++ b/services/kms/src/main/resources/codegen-resources/examples-1.json @@ -83,10 +83,11 @@ "KeyMetadata": { "AWSAccountId": "111122223333", "Arn": "arn:aws:kms:us-east-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab", - "CreationDate": "2017-01-09T12:00:07-08:00", + "CreationDate": "2017-07-05T14:04:55-07:00", "Description": "", "Enabled": true, "KeyId": "1234abcd-12ab-34cd-56ef-1234567890ab", + "KeyManager": "CUSTOMER", "KeyState": "Enabled", "KeyUsage": "ENCRYPT_DECRYPT", "Origin": "AWS_KMS" @@ -166,11 +167,12 @@ "output": { "KeyMetadata": { "AWSAccountId": "111122223333", - "Arn": "arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab", - "CreationDate": "2015-10-12T11:45:07-07:00", + "Arn": "arn:aws:kms:us-east-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab", + "CreationDate": "2017-07-05T14:04:55-07:00", "Description": "", "Enabled": true, "KeyId": "1234abcd-12ab-34cd-56ef-1234567890ab", + "KeyManager": "CUSTOMER", "KeyState": "Enabled", "KeyUsage": "ENCRYPT_DECRYPT", "Origin": "AWS_KMS" diff --git a/services/kms/src/main/resources/codegen-resources/service-2.json b/services/kms/src/main/resources/codegen-resources/service-2.json index f6d99057042a..4edf6661e5fb 100644 --- a/services/kms/src/main/resources/codegen-resources/service-2.json +++ b/services/kms/src/main/resources/codegen-resources/service-2.json @@ -7,6 +7,7 @@ "protocol":"json", "serviceAbbreviation":"KMS", "serviceFullName":"AWS Key Management Service", + "serviceId":"KMS", "signatureVersion":"v4", "targetPrefix":"TrentService", "uid":"kms-2014-11-01" @@ -27,7 +28,7 @@ {"shape":"KMSInternalException"}, {"shape":"KMSInvalidStateException"} ], - "documentation":"

Cancels the deletion of a customer master key (CMK). When this operation is successful, the CMK is set to the Disabled state. To enable a CMK, use EnableKey.

For more information about scheduling and canceling deletion of a CMK, see Deleting Customer Master Keys in the AWS Key Management Service Developer Guide.

" + "documentation":"

Cancels the deletion of a customer master key (CMK). When this operation is successful, the CMK is set to the Disabled state. To enable a CMK, use EnableKey. You cannot perform this operation on a CMK in a different AWS account.

For more information about scheduling and canceling deletion of a CMK, see Deleting Customer Master Keys in the AWS Key Management Service Developer Guide.
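
A minimal sketch of the two calls described above, assuming the AWS SDK for Java 2.x KMS client; the key ID is a placeholder. Canceling deletion leaves the CMK Disabled, so it is re-enabled explicitly.

  import software.amazon.awssdk.services.kms.KmsClient;
  import software.amazon.awssdk.services.kms.model.CancelKeyDeletionRequest;
  import software.amazon.awssdk.services.kms.model.EnableKeyRequest;

  public class CancelKeyDeletionExample {
      public static void main(String[] args) {
          KmsClient kms = KmsClient.create();
          String keyId = "1234abcd-12ab-34cd-56ef-1234567890ab";   // placeholder key ID
          kms.cancelKeyDeletion(CancelKeyDeletionRequest.builder().keyId(keyId).build());
          kms.enableKey(EnableKeyRequest.builder().keyId(keyId).build());   // CMK is Disabled after cancellation
      }
  }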

" }, "CreateAlias":{ "name":"CreateAlias", @@ -45,7 +46,7 @@ {"shape":"LimitExceededException"}, {"shape":"KMSInvalidStateException"} ], - "documentation":"

Creates a display name for a customer master key. An alias can be used to identify a key and should be unique. The console enforces a one-to-one mapping between the alias and a key. An alias name can contain only alphanumeric characters, forward slashes (/), underscores (_), and dashes (-). An alias must start with the word \"alias\" followed by a forward slash (alias/). An alias that begins with \"aws\" after the forward slash (alias/aws...) is reserved by Amazon Web Services (AWS).

The alias and the key it is mapped to must be in the same AWS account and the same region.

To map an alias to a different key, call UpdateAlias.

" + "documentation":"

Creates a display name for a customer master key (CMK). You can use an alias to identify a CMK in selected operations, such as Encrypt and GenerateDataKey.

Each CMK can have multiple aliases, but each alias points to only one CMK. The alias name must be unique in the AWS account and region. To simplify code that runs in multiple regions, use the same alias name, but point it to a different CMK in each region.

Because an alias is not a property of a CMK, you can delete and change the aliases of a CMK without affecting the CMK. Also, aliases do not appear in the response from the DescribeKey operation. To get the aliases of all CMKs, use the ListAliases operation.

An alias must start with the word alias followed by a forward slash (alias/). The alias name can contain only alphanumeric characters, forward slashes (/), underscores (_), and dashes (-). Alias names cannot begin with aws; that alias name prefix is reserved by Amazon Web Services (AWS).

The alias and the CMK it is mapped to must be in the same AWS account and the same region. You cannot perform this operation on an alias in a different AWS account.

To map an existing alias to a different CMK, call UpdateAlias.
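
A minimal sketch, assuming the AWS SDK for Java 2.x KMS client; the alias name and target key ID are placeholders, and the alias must follow the naming rules described above.

  import software.amazon.awssdk.services.kms.KmsClient;
  import software.amazon.awssdk.services.kms.model.CreateAliasRequest;

  public class CreateAliasExample {
      public static void main(String[] args) {
          KmsClient kms = KmsClient.create();
          // The alias name must begin with "alias/" and must not use the reserved "alias/aws" prefix.
          kms.createAlias(CreateAliasRequest.builder()
              .aliasName("alias/example-app")
              .targetKeyId("1234abcd-12ab-34cd-56ef-1234567890ab")   // placeholder CMK in the same account and region
              .build());
      }
  }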

" }, "CreateGrant":{ "name":"CreateGrant", @@ -65,7 +66,7 @@ {"shape":"LimitExceededException"}, {"shape":"KMSInvalidStateException"} ], - "documentation":"

Adds a grant to a key to specify who can use the key and under what conditions. Grants are alternate permission mechanisms to key policies.

For more information about grants, see Grants in the AWS Key Management Service Developer Guide.

" + "documentation":"

Adds a grant to a customer master key (CMK). The grant specifies who can use the CMK and under what conditions. When setting permissions, grants are an alternative to key policies.

To perform this operation on a CMK in a different AWS account, specify the key ARN in the value of the KeyId parameter. For more information about grants, see Grants in the AWS Key Management Service Developer Guide.
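
A minimal sketch of granting another principal permission to use a CMK, assuming the AWS SDK for Java 2.x KMS client; the key ARN and grantee role ARN are placeholders.

  import software.amazon.awssdk.services.kms.KmsClient;
  import software.amazon.awssdk.services.kms.model.CreateGrantRequest;
  import software.amazon.awssdk.services.kms.model.CreateGrantResponse;
  import software.amazon.awssdk.services.kms.model.GrantOperation;

  public class CreateGrantExample {
      public static void main(String[] args) {
          KmsClient kms = KmsClient.create();
          CreateGrantResponse grant = kms.createGrant(CreateGrantRequest.builder()
              .keyId("arn:aws:kms:us-east-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab")
              .granteePrincipal("arn:aws:iam::111122223333:role/example-worker-role")   // placeholder principal
              .operations(GrantOperation.GENERATE_DATA_KEY, GrantOperation.DECRYPT)     // allowed operations
              .build());
          System.out.println("GrantId: " + grant.grantId());
      }
  }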

" }, "CreateKey":{ "name":"CreateKey", @@ -84,7 +85,7 @@ {"shape":"LimitExceededException"}, {"shape":"TagException"} ], - "documentation":"

Creates a customer master key (CMK).

You can use a CMK to encrypt small amounts of data (4 KiB or less) directly, but CMKs are more commonly used to encrypt data encryption keys (DEKs), which are used to encrypt raw data. For more information about DEKs and the difference between CMKs and DEKs, see the following:

" + "documentation":"

Creates a customer master key (CMK) in the caller's AWS account.

You can use a CMK to encrypt small amounts of data (4 KiB or less) directly, but CMKs are more commonly used to encrypt data encryption keys (DEKs), which are used to encrypt raw data. For more information about DEKs and the difference between CMKs and DEKs, see the following:

You cannot use this operation to create a CMK in a different AWS account.
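
A minimal sketch, assuming the AWS SDK for Java 2.x KMS client; the description and tag values are placeholders, and the returned key ID is what subsequent operations such as CreateAlias and GenerateDataKey refer to.

  import software.amazon.awssdk.services.kms.KmsClient;
  import software.amazon.awssdk.services.kms.model.CreateKeyRequest;
  import software.amazon.awssdk.services.kms.model.CreateKeyResponse;
  import software.amazon.awssdk.services.kms.model.Tag;

  public class CreateKeyExample {
      public static void main(String[] args) {
          KmsClient kms = KmsClient.create();
          CreateKeyResponse response = kms.createKey(CreateKeyRequest.builder()
              .description("CMK for envelope encryption of application data")
              .tags(Tag.builder().tagKey("Project").tagValue("example").build())
              .build());
          System.out.println("KeyId: " + response.keyMetadata().keyId());   // use this ID in later calls
      }
  }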

" }, "Decrypt":{ "name":"Decrypt", @@ -104,7 +105,7 @@ {"shape":"KMSInternalException"}, {"shape":"KMSInvalidStateException"} ], - "documentation":"

Decrypts ciphertext. Ciphertext is plaintext that has been previously encrypted by using any of the following functions:

Note that if a caller has been granted access permissions to all keys (through, for example, IAM user policies that grant Decrypt permission on all resources), then ciphertext encrypted by using keys in other accounts where the key grants access to the caller can be decrypted. To remedy this, we recommend that you do not grant Decrypt access in an IAM user policy. Instead grant Decrypt access only in key policies. If you must grant Decrypt access in an IAM user policy, you should scope the resource to specific keys or to specific trusted accounts.

" + "documentation":"

Decrypts ciphertext. Ciphertext is plaintext that has been previously encrypted by using any of the following operations: GenerateDataKey, GenerateDataKeyWithoutPlaintext, or Encrypt.

Note that if a caller has been granted access permissions to all keys (through, for example, IAM user policies that grant Decrypt permission on all resources), then ciphertext encrypted by using keys in other accounts where the key grants access to the caller can be decrypted. To remedy this, we recommend that you do not grant Decrypt access in an IAM user policy. Instead grant Decrypt access only in key policies. If you must grant Decrypt access in an IAM user policy, you should scope the resource to specific keys or to specific trusted accounts.

" }, "DeleteAlias":{ "name":"DeleteAlias", @@ -119,7 +120,7 @@ {"shape":"KMSInternalException"}, {"shape":"KMSInvalidStateException"} ], - "documentation":"

Deletes the specified alias. To map an alias to a different key, call UpdateAlias.

" + "documentation":"

Deletes the specified alias. You cannot perform this operation on an alias in a different AWS account.

Because an alias is not a property of a CMK, you can delete and change the aliases of a CMK without affecting the CMK. Also, aliases do not appear in the response from the DescribeKey operation. To get the aliases of all CMKs, use the ListAliases operation.

Each CMK can have multiple aliases. To change the alias of a CMK, use DeleteAlias to delete the current alias and CreateAlias to create a new alias. To associate an existing alias with a different customer master key (CMK), call UpdateAlias.

" }, "DeleteImportedKeyMaterial":{ "name":"DeleteImportedKeyMaterial", @@ -136,7 +137,7 @@ {"shape":"KMSInternalException"}, {"shape":"KMSInvalidStateException"} ], - "documentation":"

Deletes key material that you previously imported and makes the specified customer master key (CMK) unusable. For more information about importing key material into AWS KMS, see Importing Key Material in the AWS Key Management Service Developer Guide.

When the specified CMK is in the PendingDeletion state, this operation does not change the CMK's state. Otherwise, it changes the CMK's state to PendingImport.

After you delete key material, you can use ImportKeyMaterial to reimport the same key material into the CMK.

" + "documentation":"

Deletes key material that you previously imported. This operation makes the specified customer master key (CMK) unusable. For more information about importing key material into AWS KMS, see Importing Key Material in the AWS Key Management Service Developer Guide. You cannot perform this operation on a CMK in a different AWS account.

When the specified CMK is in the PendingDeletion state, this operation does not change the CMK's state. Otherwise, it changes the CMK's state to PendingImport.

After you delete key material, you can use ImportKeyMaterial to reimport the same key material into the CMK.

" }, "DescribeKey":{ "name":"DescribeKey", @@ -152,7 +153,7 @@ {"shape":"DependencyTimeoutException"}, {"shape":"KMSInternalException"} ], - "documentation":"

Provides detailed information about the specified customer master key.

" + "documentation":"

Provides detailed information about the specified customer master key (CMK).

To perform this operation on a CMK in a different AWS account, specify the key ARN or alias ARN in the value of the KeyId parameter.

" }, "DisableKey":{ "name":"DisableKey", @@ -168,7 +169,7 @@ {"shape":"KMSInternalException"}, {"shape":"KMSInvalidStateException"} ], - "documentation":"

Sets the state of a customer master key (CMK) to disabled, thereby preventing its use for cryptographic operations. For more information about how key state affects the use of a CMK, see How Key State Affects the Use of a Customer Master Key in the AWS Key Management Service Developer Guide.

" + "documentation":"

Sets the state of a customer master key (CMK) to disabled, thereby preventing its use for cryptographic operations. You cannot perform this operation on a CMK in a different AWS account.

For more information about how key state affects the use of a CMK, see How Key State Affects the Use of a Customer Master Key in the AWS Key Management Service Developer Guide.

" }, "DisableKeyRotation":{ "name":"DisableKeyRotation", @@ -186,7 +187,7 @@ {"shape":"KMSInvalidStateException"}, {"shape":"UnsupportedOperationException"} ], - "documentation":"

Disables rotation of the specified key.

" + "documentation":"

Disables automatic rotation of the key material for the specified customer master key (CMK). You cannot perform this operation on a CMK in a different AWS account.

" }, "EnableKey":{ "name":"EnableKey", @@ -203,7 +204,7 @@ {"shape":"LimitExceededException"}, {"shape":"KMSInvalidStateException"} ], - "documentation":"

Marks a key as enabled, thereby permitting its use.

" + "documentation":"

Sets the state of a customer master key (CMK) to enabled, thereby permitting its use for cryptographic operations. You cannot perform this operation on a CMK in a different AWS account.

" }, "EnableKeyRotation":{ "name":"EnableKeyRotation", @@ -221,7 +222,7 @@ {"shape":"KMSInvalidStateException"}, {"shape":"UnsupportedOperationException"} ], - "documentation":"

Enables rotation of the specified customer master key.

" + "documentation":"

Enables automatic rotation of the key material for the specified customer master key (CMK). You cannot perform this operation on a CMK in a different AWS account.

" }, "Encrypt":{ "name":"Encrypt", @@ -241,7 +242,7 @@ {"shape":"KMSInternalException"}, {"shape":"KMSInvalidStateException"} ], - "documentation":"

Encrypts plaintext into ciphertext by using a customer master key. The Encrypt function has two primary use cases:

Unless you are moving encrypted data from one region to another, you don't use this function to encrypt a generated data key within a region. You retrieve data keys already encrypted by calling the GenerateDataKey or GenerateDataKeyWithoutPlaintext function. Data keys don't need to be encrypted again by calling Encrypt.

If you want to encrypt data locally in your application, you can use the GenerateDataKey function to return a plaintext data encryption key and a copy of the key encrypted under the customer master key (CMK) of your choosing.

" + "documentation":"

Encrypts plaintext into ciphertext by using a customer master key (CMK). The Encrypt operation has two primary use cases:

To perform this operation on a CMK in a different AWS account, specify the key ARN or alias ARN in the value of the KeyId parameter.

Unless you are moving encrypted data from one region to another, you don't use this operation to encrypt a generated data key within a region. To get data keys that are already encrypted, call the GenerateDataKey or GenerateDataKeyWithoutPlaintext operation. Data keys don't need to be encrypted again by calling Encrypt.

To encrypt data locally in your application, use the GenerateDataKey operation to return a plaintext data encryption key and a copy of the key encrypted under the CMK of your choosing.
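
A minimal sketch of encrypting a small secret (up to 4 KiB) directly under a CMK, assuming the AWS SDK for Java 2.x KMS client; the alias and plaintext are placeholders.

  import software.amazon.awssdk.core.SdkBytes;
  import software.amazon.awssdk.services.kms.KmsClient;
  import software.amazon.awssdk.services.kms.model.EncryptRequest;
  import software.amazon.awssdk.services.kms.model.EncryptResponse;

  public class EncryptExample {
      public static void main(String[] args) {
          KmsClient kms = KmsClient.create();
          // Suitable for small secrets (4 KiB or less), such as a database password.
          EncryptResponse response = kms.encrypt(EncryptRequest.builder()
              .keyId("alias/example-app")                               // placeholder alias
              .plaintext(SdkBytes.fromUtf8String("database-password"))
              .build());
          byte[] ciphertext = response.ciphertextBlob().asByteArray();  // store this; recover it later with Decrypt
          System.out.println("Ciphertext length: " + ciphertext.length);
      }
  }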

" }, "GenerateDataKey":{ "name":"GenerateDataKey", @@ -261,7 +262,7 @@ {"shape":"KMSInternalException"}, {"shape":"KMSInvalidStateException"} ], - "documentation":"

Returns a data encryption key that you can use in your application to encrypt data locally.

You must specify the customer master key (CMK) under which to generate the data key. You must also specify the length of the data key using either the KeySpec or NumberOfBytes field. You must specify one field or the other, but not both. For common key lengths (128-bit and 256-bit symmetric keys), we recommend that you use KeySpec.

This operation returns a plaintext copy of the data key in the Plaintext field of the response, and an encrypted copy of the data key in the CiphertextBlob field. The data key is encrypted under the CMK specified in the KeyId field of the request.

We recommend that you use the following pattern to encrypt data locally in your application:

  1. Use this operation (GenerateDataKey) to retrieve a data encryption key.

  2. Use the plaintext data encryption key (returned in the Plaintext field of the response) to encrypt data locally, then erase the plaintext data key from memory.

  3. Store the encrypted data key (returned in the CiphertextBlob field of the response) alongside the locally encrypted data.

To decrypt data locally:

  1. Use the Decrypt operation to decrypt the encrypted data key into a plaintext copy of the data key.

  2. Use the plaintext data key to decrypt data locally, then erase the plaintext data key from memory.

To return only an encrypted copy of the data key, use GenerateDataKeyWithoutPlaintext. To return a random byte string that is cryptographically secure, use GenerateRandom.

If you use the optional EncryptionContext field, you must store at least enough information to be able to reconstruct the full encryption context when you later send the ciphertext to the Decrypt operation. It is a good practice to choose an encryption context that you can reconstruct on the fly to better secure the ciphertext. For more information, see Encryption Context in the AWS Key Management Service Developer Guide.

" + "documentation":"

Returns a data encryption key that you can use in your application to encrypt data locally.

You must specify the customer master key (CMK) under which to generate the data key. You must also specify the length of the data key using either the KeySpec or NumberOfBytes field. You must specify one field or the other, but not both. For common key lengths (128-bit and 256-bit symmetric keys), we recommend that you use KeySpec. To perform this operation on a CMK in a different AWS account, specify the key ARN or alias ARN in the value of the KeyId parameter.

This operation returns a plaintext copy of the data key in the Plaintext field of the response, and an encrypted copy of the data key in the CiphertextBlob field. The data key is encrypted under the CMK specified in the KeyId field of the request.

We recommend that you use the following pattern to encrypt data locally in your application:

  1. Use this operation (GenerateDataKey) to get a data encryption key.

  2. Use the plaintext data encryption key (returned in the Plaintext field of the response) to encrypt data locally, then erase the plaintext data key from memory.

  3. Store the encrypted data key (returned in the CiphertextBlob field of the response) alongside the locally encrypted data.

To decrypt data locally:

  1. Use the Decrypt operation to decrypt the encrypted data key into a plaintext copy of the data key.

  2. Use the plaintext data key to decrypt data locally, then erase the plaintext data key from memory.

To return only an encrypted copy of the data key, use GenerateDataKeyWithoutPlaintext. To return a random byte string that is cryptographically secure, use GenerateRandom.

If you use the optional EncryptionContext field, you must store at least enough information to be able to reconstruct the full encryption context when you later send the ciphertext to the Decrypt operation. It is a good practice to choose an encryption context that you can reconstruct on the fly to better secure the ciphertext. For more information, see Encryption Context in the AWS Key Management Service Developer Guide.
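For illustration (not part of the service model): a minimal sketch of the envelope-encryption pattern described above. The KMS calls use names assumed from the AWS SDK for Java 2.x client generated from this model, local encryption uses the JDK's AES-GCM cipher, and the key ARN is a placeholder.

```java
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import java.util.Arrays;
import javax.crypto.Cipher;
import javax.crypto.spec.GCMParameterSpec;
import javax.crypto.spec.SecretKeySpec;

import software.amazon.awssdk.core.SdkBytes;
import software.amazon.awssdk.services.kms.KmsClient;
import software.amazon.awssdk.services.kms.model.DataKeySpec;
import software.amazon.awssdk.services.kms.model.DecryptRequest;
import software.amazon.awssdk.services.kms.model.GenerateDataKeyRequest;
import software.amazon.awssdk.services.kms.model.GenerateDataKeyResponse;

public class EnvelopeEncryptionSketch {
    private static final String KEY_ID =
            "arn:aws:kms:us-east-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab"; // placeholder

    public static void main(String[] args) throws Exception {
        try (KmsClient kms = KmsClient.create()) {
            // 1. Get a plaintext data key plus a copy encrypted under the CMK.
            GenerateDataKeyResponse dataKey = kms.generateDataKey(GenerateDataKeyRequest.builder()
                    .keyId(KEY_ID)
                    .keySpec(DataKeySpec.AES_256)
                    .build());

            // 2. Encrypt locally with the plaintext key, then erase it from memory.
            byte[] plaintextKey = dataKey.plaintext().asByteArray();
            byte[] iv = new byte[12];
            new SecureRandom().nextBytes(iv);
            Cipher encryptor = Cipher.getInstance("AES/GCM/NoPadding");
            encryptor.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(plaintextKey, "AES"),
                    new GCMParameterSpec(128, iv));
            byte[] ciphertext = encryptor.doFinal("hello".getBytes(StandardCharsets.UTF_8));
            Arrays.fill(plaintextKey, (byte) 0);

            // 3. Store the encrypted data key (and IV) alongside the locally encrypted data.
            byte[] encryptedDataKey = dataKey.ciphertextBlob().asByteArray();

            // To decrypt later: recover the plaintext data key with Decrypt, then decrypt locally.
            byte[] recoveredKey = kms.decrypt(DecryptRequest.builder()
                    .ciphertextBlob(SdkBytes.fromByteArray(encryptedDataKey))
                    .build()).plaintext().asByteArray();
            Cipher decryptor = Cipher.getInstance("AES/GCM/NoPadding");
            decryptor.init(Cipher.DECRYPT_MODE, new SecretKeySpec(recoveredKey, "AES"),
                    new GCMParameterSpec(128, iv));
            System.out.println(new String(decryptor.doFinal(ciphertext), StandardCharsets.UTF_8));
            Arrays.fill(recoveredKey, (byte) 0);
        }
    }
}
```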

" }, "GenerateDataKeyWithoutPlaintext":{ "name":"GenerateDataKeyWithoutPlaintext", @@ -281,7 +282,7 @@ {"shape":"KMSInternalException"}, {"shape":"KMSInvalidStateException"} ], - "documentation":"

Returns a data encryption key encrypted under a customer master key (CMK). This operation is identical to GenerateDataKey but returns only the encrypted copy of the data key.

This operation is useful in a system that has multiple components with different degrees of trust. For example, consider a system that stores encrypted data in containers. Each container stores the encrypted data and an encrypted copy of the data key. One component of the system, called the control plane, creates new containers. When it creates a new container, it uses this operation (GenerateDataKeyWithoutPlaintext) to get an encrypted data key and then stores it in the container. Later, a different component of the system, called the data plane, puts encrypted data into the containers. To do this, it passes the encrypted data key to the Decrypt operation, then uses the returned plaintext data key to encrypt data, and finally stores the encrypted data in the container. In this system, the control plane never sees the plaintext data key.

" + "documentation":"

Returns a data encryption key encrypted under a customer master key (CMK). This operation is identical to GenerateDataKey but returns only the encrypted copy of the data key.

To perform this operation on a CMK in a different AWS account, specify the key ARN or alias ARN in the value of the KeyId parameter.

This operation is useful in a system that has multiple components with different degrees of trust. For example, consider a system that stores encrypted data in containers. Each container stores the encrypted data and an encrypted copy of the data key. One component of the system, called the control plane, creates new containers. When it creates a new container, it uses this operation (GenerateDataKeyWithoutPlaintext) to get an encrypted data key and then stores it in the container. Later, a different component of the system, called the data plane, puts encrypted data into the containers. To do this, it passes the encrypted data key to the Decrypt operation, then uses the returned plaintext data key to encrypt data, and finally stores the encrypted data in the container. In this system, the control plane never sees the plaintext data key.
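For illustration (not part of the service model): a sketch of the control-plane step described above, which stores only an encrypted data key with each new container; names are assumed from the AWS SDK for Java 2.x and the key ARN is a placeholder.

```java
import software.amazon.awssdk.services.kms.KmsClient;
import software.amazon.awssdk.services.kms.model.DataKeySpec;
import software.amazon.awssdk.services.kms.model.GenerateDataKeyWithoutPlaintextRequest;

public class ControlPlaneSketch {
    public static void main(String[] args) {
        try (KmsClient kms = KmsClient.create()) {
            // The control plane never sees a plaintext key; it stores only this encrypted copy.
            byte[] encryptedDataKey = kms.generateDataKeyWithoutPlaintext(
                    GenerateDataKeyWithoutPlaintextRequest.builder()
                            .keyId("arn:aws:kms:us-east-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab")
                            .keySpec(DataKeySpec.AES_256)
                            .build())
                    .ciphertextBlob()
                    .asByteArray();
            // Later, the data plane passes encryptedDataKey to Decrypt and encrypts with the result.
        }
    }
}
```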

" }, "GenerateRandom":{ "name":"GenerateRandom", @@ -312,7 +313,7 @@ {"shape":"KMSInternalException"}, {"shape":"KMSInvalidStateException"} ], - "documentation":"

Retrieves a policy attached to the specified key.

" + "documentation":"

Gets a key policy attached to the specified customer master key (CMK). You cannot perform this operation on a CMK in a different AWS account.

" }, "GetKeyRotationStatus":{ "name":"GetKeyRotationStatus", @@ -330,7 +331,7 @@ {"shape":"KMSInvalidStateException"}, {"shape":"UnsupportedOperationException"} ], - "documentation":"

Retrieves a Boolean value that indicates whether key rotation is enabled for the specified key.

" + "documentation":"

Gets a Boolean value that indicates whether automatic rotation of the key material is enabled for the specified customer master key (CMK).

To perform this operation on a CMK in a different AWS account, specify the key ARN in the value of the KeyId parameter.

" }, "GetParametersForImport":{ "name":"GetParametersForImport", @@ -348,7 +349,7 @@ {"shape":"KMSInternalException"}, {"shape":"KMSInvalidStateException"} ], - "documentation":"

Returns the items you need in order to import key material into AWS KMS from your existing key management infrastructure. For more information about importing key material into AWS KMS, see Importing Key Material in the AWS Key Management Service Developer Guide.

You must specify the key ID of the customer master key (CMK) into which you will import key material. This CMK's Origin must be EXTERNAL. You must also specify the wrapping algorithm and type of wrapping key (public key) that you will use to encrypt the key material.

This operation returns a public key and an import token. Use the public key to encrypt the key material. Store the import token to send with a subsequent ImportKeyMaterial request. The public key and import token from the same response must be used together. These items are valid for 24 hours, after which they cannot be used for a subsequent ImportKeyMaterial request. To retrieve new ones, send another GetParametersForImport request.

" + "documentation":"

Returns the items you need in order to import key material into AWS KMS from your existing key management infrastructure. For more information about importing key material into AWS KMS, see Importing Key Material in the AWS Key Management Service Developer Guide.

You must specify the key ID of the customer master key (CMK) into which you will import key material. This CMK's Origin must be EXTERNAL. You must also specify the wrapping algorithm and type of wrapping key (public key) that you will use to encrypt the key material. You cannot perform this operation on a CMK in a different AWS account.

This operation returns a public key and an import token. Use the public key to encrypt the key material. Store the import token to send with a subsequent ImportKeyMaterial request. The public key and import token from the same response must be used together. These items are valid for 24 hours. When they expire, they cannot be used for a subsequent ImportKeyMaterial request. To get new ones, send another GetParametersForImport request.

" }, "ImportKeyMaterial":{ "name":"ImportKeyMaterial", @@ -370,7 +371,7 @@ {"shape":"ExpiredImportTokenException"}, {"shape":"InvalidImportTokenException"} ], - "documentation":"

Imports key material into an AWS KMS customer master key (CMK) from your existing key management infrastructure. For more information about importing key material into AWS KMS, see Importing Key Material in the AWS Key Management Service Developer Guide.

You must specify the key ID of the CMK to import the key material into. This CMK's Origin must be EXTERNAL. You must also send an import token and the encrypted key material. Send the import token that you received in the same GetParametersForImport response that contained the public key that you used to encrypt the key material. You must also specify whether the key material expires and if so, when. When the key material expires, AWS KMS deletes the key material and the CMK becomes unusable. To use the CMK again, you can reimport the same key material. If you set an expiration date, you can change it only by reimporting the same key material and specifying a new expiration date.

When this operation is successful, the specified CMK's key state changes to Enabled, and you can use the CMK.

After you successfully import key material into a CMK, you can reimport the same key material into that CMK, but you cannot import different key material.

" + "documentation":"

Imports key material into an existing AWS KMS customer master key (CMK) that was created without key material. You cannot perform this operation on a CMK in a different AWS account. For more information about creating CMKs with no key material and then importing key material, see Importing Key Material in the AWS Key Management Service Developer Guide.

Before using this operation, call GetParametersForImport. Its response includes a public key and an import token. Use the public key to encrypt the key material. Then, submit the import token from the same GetParametersForImport response.

When calling this operation, you must specify the following values:

When this operation is successful, the CMK's key state changes from PendingImport to Enabled, and you can use the CMK. After you successfully import key material into a CMK, you can reimport the same key material into that CMK, but you cannot import different key material.
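For illustration (not part of the service model): a sketch of the import flow (GetParametersForImport, wrap the key material, ImportKeyMaterial), using names assumed from the AWS SDK for Java 2.x. The key ID and key material are placeholders, RSAES_OAEP_SHA_1 with an RSA_2048 wrapping key is just one supported choice, and the returned public key is assumed to be DER-encoded X.509 so the JDK's KeyFactory can parse it.

```java
import java.security.KeyFactory;
import java.security.PublicKey;
import java.security.spec.X509EncodedKeySpec;
import javax.crypto.Cipher;

import software.amazon.awssdk.core.SdkBytes;
import software.amazon.awssdk.services.kms.KmsClient;
import software.amazon.awssdk.services.kms.model.AlgorithmSpec;
import software.amazon.awssdk.services.kms.model.ExpirationModelType;
import software.amazon.awssdk.services.kms.model.GetParametersForImportRequest;
import software.amazon.awssdk.services.kms.model.GetParametersForImportResponse;
import software.amazon.awssdk.services.kms.model.ImportKeyMaterialRequest;
import software.amazon.awssdk.services.kms.model.WrappingKeySpec;

public class ImportKeyMaterialSketch {
    public static void main(String[] args) throws Exception {
        String keyId = "1234abcd-12ab-34cd-56ef-1234567890ab"; // a CMK whose Origin is EXTERNAL (placeholder)
        byte[] keyMaterial = new byte[32];                     // your 256-bit key material (placeholder)

        try (KmsClient kms = KmsClient.create()) {
            // 1. Get a public key and import token; they are valid together for 24 hours.
            GetParametersForImportResponse params = kms.getParametersForImport(
                    GetParametersForImportRequest.builder()
                            .keyId(keyId)
                            .wrappingAlgorithm(AlgorithmSpec.RSAES_OAEP_SHA_1)
                            .wrappingKeySpec(WrappingKeySpec.RSA_2048)
                            .build());

            // 2. Encrypt (wrap) the key material with the returned public key.
            PublicKey wrappingKey = KeyFactory.getInstance("RSA")
                    .generatePublic(new X509EncodedKeySpec(params.publicKey().asByteArray()));
            Cipher rsa = Cipher.getInstance("RSA/ECB/OAEPWithSHA-1AndMGF1Padding");
            rsa.init(Cipher.ENCRYPT_MODE, wrappingKey);
            byte[] wrapped = rsa.doFinal(keyMaterial);

            // 3. Submit the wrapped key material with the import token from the same response.
            kms.importKeyMaterial(ImportKeyMaterialRequest.builder()
                    .keyId(keyId)
                    .importToken(params.importToken())
                    .encryptedKeyMaterial(SdkBytes.fromByteArray(wrapped))
                    .expirationModel(ExpirationModelType.KEY_MATERIAL_DOES_NOT_EXPIRE)
                    .build());
        }
    }
}
```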

" }, "ListAliases":{ "name":"ListAliases", @@ -385,7 +386,7 @@ {"shape":"InvalidMarkerException"}, {"shape":"KMSInternalException"} ], - "documentation":"

Lists all of the key aliases in the account.

" + "documentation":"

Gets a list of all aliases in the caller's AWS account and region. You cannot list aliases in other accounts. For more information about aliases, see CreateAlias.

The response might include several aliases that do not have a TargetKeyId field because they are not associated with a CMK. These are predefined aliases that are reserved for CMKs managed by AWS services. If an alias is not associated with a CMK, the alias does not count against the alias limit for your account.

" }, "ListGrants":{ "name":"ListGrants", @@ -403,7 +404,7 @@ {"shape":"KMSInternalException"}, {"shape":"KMSInvalidStateException"} ], - "documentation":"

List the grants for a specified key.

" + "documentation":"

Gets a list of all grants for the specified customer master key (CMK).

To perform this operation on a CMK in a different AWS account, specify the key ARN in the value of the KeyId parameter.

" }, "ListKeyPolicies":{ "name":"ListKeyPolicies", @@ -420,7 +421,7 @@ {"shape":"KMSInternalException"}, {"shape":"KMSInvalidStateException"} ], - "documentation":"

Retrieves a list of policies attached to a key.

" + "documentation":"

Gets the names of the key policies that are attached to a customer master key (CMK). This operation is designed to get policy names that you can use in a GetKeyPolicy operation. However, the only valid policy name is default. You cannot perform this operation on a CMK in a different AWS account.

" }, "ListKeys":{ "name":"ListKeys", @@ -435,7 +436,7 @@ {"shape":"KMSInternalException"}, {"shape":"InvalidMarkerException"} ], - "documentation":"

Lists the customer master keys.

" + "documentation":"

Gets a list of all customer master keys (CMKs) in the caller's AWS account and region.

" }, "ListResourceTags":{ "name":"ListResourceTags", @@ -451,7 +452,7 @@ {"shape":"InvalidArnException"}, {"shape":"InvalidMarkerException"} ], - "documentation":"

Returns a list of all tags for the specified customer master key (CMK).

" + "documentation":"

Returns a list of all tags for the specified customer master key (CMK).

You cannot perform this operation on a CMK in a different AWS account.

" }, "ListRetirableGrants":{ "name":"ListRetirableGrants", @@ -487,7 +488,7 @@ {"shape":"LimitExceededException"}, {"shape":"KMSInvalidStateException"} ], - "documentation":"

Attaches a key policy to the specified customer master key (CMK).

For more information about key policies, see Key Policies in the AWS Key Management Service Developer Guide.

" + "documentation":"

Attaches a key policy to the specified customer master key (CMK). You cannot perform this operation on a CMK in a different AWS account.

For more information about key policies, see Key Policies in the AWS Key Management Service Developer Guide.

" }, "ReEncrypt":{ "name":"ReEncrypt", @@ -508,7 +509,7 @@ {"shape":"KMSInternalException"}, {"shape":"KMSInvalidStateException"} ], - "documentation":"

Encrypts data on the server side with a new customer master key (CMK) without exposing the plaintext of the data on the client side. The data is first decrypted and then reencrypted. You can also use this operation to change the encryption context of a ciphertext.

Unlike other operations, ReEncrypt is authorized twice, once as ReEncryptFrom on the source CMK and once as ReEncryptTo on the destination CMK. We recommend that you include the \"kms:ReEncrypt*\" permission in your key policies to permit reencryption from or to the CMK. This permission is automatically included in the key policy when you create a CMK through the console, but you must include it manually when you create a CMK programmatically or when you set a key policy with the PutKeyPolicy operation.

" + "documentation":"

Encrypts data on the server side with a new customer master key (CMK) without exposing the plaintext of the data on the client side. The data is first decrypted and then reencrypted. You can also use this operation to change the encryption context of a ciphertext.

You can reencrypt data using CMKs in different AWS accounts.

Unlike other operations, ReEncrypt is authorized twice, once as ReEncryptFrom on the source CMK and once as ReEncryptTo on the destination CMK. We recommend that you include the \"kms:ReEncrypt*\" permission in your key policies to permit reencryption from or to the CMK. This permission is automatically included in the key policy when you create a CMK through the console, but you must include it manually when you create a CMK programmatically or when you set a key policy with the PutKeyPolicy operation.
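For illustration (not part of the service model): a sketch of moving an existing ciphertext to a different CMK without handling the plaintext locally, using names assumed from the AWS SDK for Java 2.x.

```java
import software.amazon.awssdk.core.SdkBytes;
import software.amazon.awssdk.services.kms.KmsClient;
import software.amazon.awssdk.services.kms.model.ReEncryptRequest;
import software.amazon.awssdk.services.kms.model.ReEncryptResponse;

public class ReEncryptSketch {
    // Re-encrypts a ciphertext under the destination CMK; the caller needs
    // ReEncryptFrom on the source CMK and ReEncryptTo on the destination CMK.
    public static byte[] moveToNewKey(KmsClient kms, byte[] ciphertext, String destinationKeyArn) {
        ReEncryptResponse response = kms.reEncrypt(ReEncryptRequest.builder()
                .ciphertextBlob(SdkBytes.fromByteArray(ciphertext))
                .destinationKeyId(destinationKeyArn)
                .build());
        return response.ciphertextBlob().asByteArray();
    }
}
```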

" }, "RetireGrant":{ "name":"RetireGrant", @@ -542,7 +543,7 @@ {"shape":"KMSInternalException"}, {"shape":"KMSInvalidStateException"} ], - "documentation":"

Revokes a grant. You can revoke a grant to actively deny operations that depend on it.

" + "documentation":"

Revokes the specified grant for the specified customer master key (CMK). You can revoke a grant to actively deny operations that depend on it.

To perform this operation on a CMK in a different AWS account, specify the key ARN in the value of the KeyId parameter.

" }, "ScheduleKeyDeletion":{ "name":"ScheduleKeyDeletion", @@ -559,7 +560,7 @@ {"shape":"KMSInternalException"}, {"shape":"KMSInvalidStateException"} ], - "documentation":"

Schedules the deletion of a customer master key (CMK). You may provide a waiting period, specified in days, before deletion occurs. If you do not provide a waiting period, the default period of 30 days is used. When this operation is successful, the state of the CMK changes to PendingDeletion. Before the waiting period ends, you can use CancelKeyDeletion to cancel the deletion of the CMK. After the waiting period ends, AWS KMS deletes the CMK and all AWS KMS data associated with it, including all aliases that refer to it.

Deleting a CMK is a destructive and potentially dangerous operation. When a CMK is deleted, all data that was encrypted under the CMK is rendered unrecoverable. To restrict the use of a CMK without deleting it, use DisableKey.

For more information about scheduling a CMK for deletion, see Deleting Customer Master Keys in the AWS Key Management Service Developer Guide.

" + "documentation":"

Schedules the deletion of a customer master key (CMK). You may provide a waiting period, specified in days, before deletion occurs. If you do not provide a waiting period, the default period of 30 days is used. When this operation is successful, the state of the CMK changes to PendingDeletion. Before the waiting period ends, you can use CancelKeyDeletion to cancel the deletion of the CMK. After the waiting period ends, AWS KMS deletes the CMK and all AWS KMS data associated with it, including all aliases that refer to it.

You cannot perform this operation on a CMK in a different AWS account.

Deleting a CMK is a destructive and potentially dangerous operation. When a CMK is deleted, all data that was encrypted under the CMK is rendered unrecoverable. To restrict the use of a CMK without deleting it, use DisableKey.

For more information about scheduling a CMK for deletion, see Deleting Customer Master Keys in the AWS Key Management Service Developer Guide.
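For illustration (not part of the service model): a sketch of scheduling deletion with a shortened waiting period and then cancelling it before the period ends, using names assumed from the AWS SDK for Java 2.x; the key ID is a placeholder.

```java
import software.amazon.awssdk.services.kms.KmsClient;
import software.amazon.awssdk.services.kms.model.CancelKeyDeletionRequest;
import software.amazon.awssdk.services.kms.model.ScheduleKeyDeletionRequest;

public class KeyDeletionSketch {
    public static void main(String[] args) {
        String keyId = "1234abcd-12ab-34cd-56ef-1234567890ab"; // placeholder
        try (KmsClient kms = KmsClient.create()) {
            // Schedule deletion with a 7-day waiting period (the default is 30 days if omitted).
            kms.scheduleKeyDeletion(ScheduleKeyDeletionRequest.builder()
                    .keyId(keyId)
                    .pendingWindowInDays(7)
                    .build());
            // The CMK is now in PendingDeletion; before the period ends it can still be rescued.
            kms.cancelKeyDeletion(CancelKeyDeletionRequest.builder().keyId(keyId).build());
        }
    }
}
```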

" }, "TagResource":{ "name":"TagResource", @@ -576,7 +577,7 @@ {"shape":"LimitExceededException"}, {"shape":"TagException"} ], - "documentation":"

Adds or overwrites one or more tags for the specified customer master key (CMK).

Each tag consists of a tag key and a tag value. Tag keys and tag values are both required, but tag values can be empty (null) strings.

You cannot use the same tag key more than once per CMK. For example, consider a CMK with one tag whose tag key is Purpose and tag value is Test. If you send a TagResource request for this CMK with a tag key of Purpose and a tag value of Prod, it does not create a second tag. Instead, the original tag is overwritten with the new tag value.

" + "documentation":"

Adds or overwrites one or more tags for the specified customer master key (CMK). You cannot perform this operation on a CMK in a different AWS account.

Each tag consists of a tag key and a tag value. Tag keys and tag values are both required, but tag values can be empty (null) strings.

You cannot use the same tag key more than once per CMK. For example, consider a CMK with one tag whose tag key is Purpose and tag value is Test. If you send a TagResource request for this CMK with a tag key of Purpose and a tag value of Prod, it does not create a second tag. Instead, the original tag is overwritten with the new tag value.

For information about the rules that apply to tag keys and tag values, see User-Defined Tag Restrictions in the AWS Billing and Cost Management User Guide.
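For illustration (not part of the service model): a sketch of adding or overwriting a tag, using names assumed from the AWS SDK for Java 2.x; the key ID is a placeholder.

```java
import software.amazon.awssdk.services.kms.KmsClient;
import software.amazon.awssdk.services.kms.model.Tag;
import software.amazon.awssdk.services.kms.model.TagResourceRequest;

public class TagResourceSketch {
    public static void main(String[] args) {
        try (KmsClient kms = KmsClient.create()) {
            // Sending the same tag key again overwrites its value rather than adding a second tag.
            kms.tagResource(TagResourceRequest.builder()
                    .keyId("1234abcd-12ab-34cd-56ef-1234567890ab") // placeholder
                    .tags(Tag.builder().tagKey("Purpose").tagValue("Test").build())
                    .build());
        }
    }
}
```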

" }, "UntagResource":{ "name":"UntagResource", @@ -592,7 +593,7 @@ {"shape":"KMSInvalidStateException"}, {"shape":"TagException"} ], - "documentation":"

Removes the specified tag or tags from the specified customer master key (CMK).

To remove a tag, you specify the tag key for each tag to remove. You do not specify the tag value. To overwrite the tag value for an existing tag, use TagResource.

" + "documentation":"

Removes the specified tag or tags from the specified customer master key (CMK). You cannot perform this operation on a CMK in a different AWS account.

To remove a tag, you specify the tag key for each tag to remove. You do not specify the tag value. To overwrite the tag value for an existing tag, use TagResource.

" }, "UpdateAlias":{ "name":"UpdateAlias", @@ -607,7 +608,7 @@ {"shape":"KMSInternalException"}, {"shape":"KMSInvalidStateException"} ], - "documentation":"

Updates an alias to map it to a different key.

An alias is not a property of a key. Therefore, an alias can be mapped to and unmapped from an existing key without changing the properties of the key.

An alias name can contain only alphanumeric characters, forward slashes (/), underscores (_), and dashes (-). An alias must start with the word \"alias\" followed by a forward slash (alias/). An alias that begins with \"aws\" after the forward slash (alias/aws...) is reserved by Amazon Web Services (AWS).

The alias and the key it is mapped to must be in the same AWS account and the same region.

" + "documentation":"

Associates an existing alias with a different customer master key (CMK). Each CMK can have multiple aliases, but the aliases must be unique within the account and region. You cannot perform this operation on an alias in a different AWS account.

This operation works only on existing aliases. To change the alias of a CMK to a new value, use CreateAlias to create a new alias and DeleteAlias to delete the old alias.

Because an alias is not a property of a CMK, you can create, update, and delete the aliases of a CMK without affecting the CMK. Also, aliases do not appear in the response from the DescribeKey operation. To get the aliases of all CMKs in the account, use the ListAliases operation.

An alias name can contain only alphanumeric characters, forward slashes (/), underscores (_), and dashes (-). An alias must start with the word alias followed by a forward slash (alias/). Alias names cannot begin with aws; that alias name prefix is reserved by Amazon Web Services (AWS).
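For illustration (not part of the service model): a sketch of repointing an existing alias at a different CMK, using names assumed from the AWS SDK for Java 2.x; the alias and key ID are placeholders.

```java
import software.amazon.awssdk.services.kms.KmsClient;
import software.amazon.awssdk.services.kms.model.UpdateAliasRequest;

public class UpdateAliasSketch {
    public static void main(String[] args) {
        try (KmsClient kms = KmsClient.create()) {
            // The alias must already exist; only the CMK it points to changes.
            kms.updateAlias(UpdateAliasRequest.builder()
                    .aliasName("alias/ExampleAlias")
                    .targetKeyId("1234abcd-12ab-34cd-56ef-1234567890ab")
                    .build());
        }
    }
}
```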

" }, "UpdateKeyDescription":{ "name":"UpdateKeyDescription", @@ -623,7 +624,7 @@ {"shape":"KMSInternalException"}, {"shape":"KMSInvalidStateException"} ], - "documentation":"

Updates the description of a customer master key (CMK).

" + "documentation":"

Updates the description of a customer master key (CMK). To see the description of a CMK, use DescribeKey.

You cannot perform this operation on a CMK in a different AWS account.

" } }, "shapes":{ @@ -684,7 +685,7 @@ "members":{ "KeyId":{ "shape":"KeyIdType", - "documentation":"

The unique identifier for the customer master key (CMK) for which to cancel deletion.

To specify this value, use the unique key ID or the Amazon Resource Name (ARN) of the CMK. Examples:

To obtain the unique key ID and key ARN for a given CMK, use ListKeys or DescribeKey.

" + "documentation":"

The unique identifier for the customer master key (CMK) for which to cancel deletion.

Specify the key ID or the Amazon Resource Name (ARN) of the CMK.

For example:

To get the key ID and key ARN for a CMK, use ListKeys or DescribeKey.

" } } }, @@ -715,7 +716,7 @@ }, "TargetKeyId":{ "shape":"KeyIdType", - "documentation":"

An identifier of the key for which you are creating the alias. This value cannot be another alias but can be a globally unique identifier or a fully specified ARN to a key.

" + "documentation":"

Identifies the CMK for which you are creating the alias. This value cannot be an alias.

Specify the key ID or the Amazon Resource Name (ARN) of the CMK.

For example:

To get the key ID and key ARN for a CMK, use ListKeys or DescribeKey.

" } } }, @@ -723,12 +724,13 @@ "type":"structure", "required":[ "KeyId", - "GranteePrincipal" + "GranteePrincipal", + "Operations" ], "members":{ "KeyId":{ "shape":"KeyIdType", - "documentation":"

The unique identifier for the customer master key (CMK) that the grant applies to.

To specify this value, use the globally unique key ID or the Amazon Resource Name (ARN) of the key. Examples:

" + "documentation":"

The unique identifier for the customer master key (CMK) that the grant applies to.

Specify the key ID or the Amazon Resource Name (ARN) of the CMK. To specify a CMK in a different AWS account, you must use the key ARN.

For example:

To get the key ID and key ARN for a CMK, use ListKeys or DescribeKey.

" }, "GranteePrincipal":{ "shape":"PrincipalIdType", @@ -774,7 +776,7 @@ "members":{ "Policy":{ "shape":"PolicyType", - "documentation":"

The key policy to attach to the CMK.

If you specify a policy and do not set BypassPolicyLockoutSafetyCheck to true, the policy must meet the following criteria:

If you do not specify a policy, AWS KMS attaches a default key policy to the CMK. For more information, see Default Key Policy in the AWS Key Management Service Developer Guide.

The policy size limit is 32 KiB (32768 bytes).

" + "documentation":"

The key policy to attach to the CMK.

If you specify a policy and do not set BypassPolicyLockoutSafetyCheck to true, the policy must meet the following criteria:

If you do not specify a policy, AWS KMS attaches a default key policy to the CMK. For more information, see Default Key Policy in the AWS Key Management Service Developer Guide.

The policy size limit is 32 kilobytes (32768 bytes).

" }, "Description":{ "shape":"DescriptionType", @@ -842,7 +844,7 @@ }, "Plaintext":{ "shape":"PlaintextType", - "documentation":"

Decrypted plaintext data. This value may not be returned if the customer master key is not available or if you didn't have permission to use it.

" + "documentation":"

Decrypted plaintext data. When you use the HTTP API or the AWS CLI, the value is Base64-encoded. Otherwise, it is not encoded.

" } } }, @@ -852,7 +854,7 @@ "members":{ "AliasName":{ "shape":"AliasNameType", - "documentation":"

The alias to be deleted. The name must start with the word \"alias\" followed by a forward slash (alias/). Aliases that begin with \"alias/AWS\" are reserved.

" + "documentation":"

The alias to be deleted. The name must start with the word \"alias\" followed by a forward slash (alias/). Aliases that begin with \"alias/aws\" are reserved.

" } } }, @@ -862,7 +864,7 @@ "members":{ "KeyId":{ "shape":"KeyIdType", - "documentation":"

The identifier of the CMK whose key material to delete. The CMK's Origin must be EXTERNAL.

A valid identifier is the unique key ID or the Amazon Resource Name (ARN) of the CMK. Examples:

" + "documentation":"

The identifier of the CMK whose key material to delete. The CMK's Origin must be EXTERNAL.

Specify the key ID or the Amazon Resource Name (ARN) of the CMK.

For example:

To get the key ID and key ARN for a CMK, use ListKeys or DescribeKey.

" } } }, @@ -881,7 +883,7 @@ "members":{ "KeyId":{ "shape":"KeyIdType", - "documentation":"

A unique identifier for the customer master key. This value can be a globally unique identifier, a fully specified ARN to either an alias or a key, or an alias name prefixed by \"alias/\".

" + "documentation":"

A unique identifier for the customer master key (CMK).

To specify a CMK, use its key ID, Amazon Resource Name (ARN), alias name, or alias ARN. When using an alias name, prefix it with \"alias/\". To specify a CMK in a different AWS account, you must use the key ARN or alias ARN.

For example:

To get the key ID and key ARN for a CMK, use ListKeys or DescribeKey. To get the alias name and alias ARN, use ListAliases.

" }, "GrantTokens":{ "shape":"GrantTokenList", @@ -909,7 +911,7 @@ "members":{ "KeyId":{ "shape":"KeyIdType", - "documentation":"

A unique identifier for the CMK.

Use the CMK's unique identifier or its Amazon Resource Name (ARN). For example:

" + "documentation":"

A unique identifier for the customer master key (CMK).

Specify the key ID or the Amazon Resource Name (ARN) of the CMK.

For example:

To get the key ID and key ARN for a CMK, use ListKeys or DescribeKey.

" } } }, @@ -919,7 +921,7 @@ "members":{ "KeyId":{ "shape":"KeyIdType", - "documentation":"

A unique identifier for the customer master key. This value can be a globally unique identifier or the fully specified ARN to a key.

" + "documentation":"

A unique identifier for the customer master key (CMK).

Specify the key ID or the Amazon Resource Name (ARN) of the CMK.

For example:

To get the key ID and key ARN for a CMK, use ListKeys or DescribeKey.

" } } }, @@ -937,7 +939,7 @@ "members":{ "KeyId":{ "shape":"KeyIdType", - "documentation":"

A unique identifier for the customer master key. This value can be a globally unique identifier or the fully specified ARN to a key.

" + "documentation":"

A unique identifier for the customer master key (CMK).

Specify the key ID or the Amazon Resource Name (ARN) of the CMK.

For example:

To get the key ID and key ARN for a CMK, use ListKeys or DescribeKey.

" } } }, @@ -947,7 +949,7 @@ "members":{ "KeyId":{ "shape":"KeyIdType", - "documentation":"

A unique identifier for the customer master key. This value can be a globally unique identifier or the fully specified ARN to a key.

" + "documentation":"

A unique identifier for the customer master key (CMK).

Specify the key ID or the Amazon Resource Name (ARN) of the CMK.

For example:

To get the key ID and key ARN for a CMK, use ListKeys or DescribeKey.

" } } }, @@ -960,7 +962,7 @@ "members":{ "KeyId":{ "shape":"KeyIdType", - "documentation":"

A unique identifier for the customer master key. This value can be a globally unique identifier, a fully specified ARN to either an alias or a key, or an alias name prefixed by \"alias/\".

" + "documentation":"

A unique identifier for the customer master key (CMK).

To specify a CMK, use its key ID, Amazon Resource Name (ARN), alias name, or alias ARN. When using an alias name, prefix it with \"alias/\". To specify a CMK in a different AWS account, you must use the key ARN or alias ARN.

For example:

To get the key ID and key ARN for a CMK, use ListKeys or DescribeKey. To get the alias name and alias ARN, use ListAliases.

" }, "Plaintext":{ "shape":"PlaintextType", @@ -981,7 +983,7 @@ "members":{ "CiphertextBlob":{ "shape":"CiphertextType", - "documentation":"

The encrypted plaintext. If you are using the CLI, the value is Base64 encoded. Otherwise, it is not encoded.

" + "documentation":"

The encrypted plaintext. When you use the HTTP API or the AWS CLI, the value is Base64-encoded. Otherwise, it is not encoded.

" }, "KeyId":{ "shape":"KeyIdType", @@ -1009,7 +1011,7 @@ "members":{ "message":{"shape":"ErrorMessageType"} }, - "documentation":"

The request was rejected because the provided import token is expired. Use GetParametersForImport to retrieve a new import token and public key, use the new public key to encrypt the key material, and then try the request again.

", + "documentation":"

The request was rejected because the provided import token is expired. Use GetParametersForImport to get a new import token and public key, use the new public key to encrypt the key material, and then try the request again.

", "exception":true }, "GenerateDataKeyRequest":{ @@ -1018,7 +1020,7 @@ "members":{ "KeyId":{ "shape":"KeyIdType", - "documentation":"

The identifier of the CMK under which to generate and encrypt the data encryption key.

A valid identifier is the unique key ID or the Amazon Resource Name (ARN) of the CMK, or the alias name or ARN of an alias that refers to the CMK. Examples:

" + "documentation":"

The identifier of the CMK under which to generate and encrypt the data encryption key.

To specify a CMK, use its key ID, Amazon Resource Name (ARN), alias name, or alias ARN. When using an alias name, prefix it with \"alias/\". To specify a CMK in a different AWS account, you must use the key ARN or alias ARN.

For example:

To get the key ID and key ARN for a CMK, use ListKeys or DescribeKey. To get the alias name and alias ARN, use ListAliases.

" }, "EncryptionContext":{ "shape":"EncryptionContextType", @@ -1043,11 +1045,11 @@ "members":{ "CiphertextBlob":{ "shape":"CiphertextType", - "documentation":"

The encrypted data encryption key.

" + "documentation":"

The encrypted data encryption key. When you use the HTTP API or the AWS CLI, the value is Base64-encoded. Otherwise, it is not encoded.

" }, "Plaintext":{ "shape":"PlaintextType", - "documentation":"

The data encryption key. Use this data key for local encryption and decryption, then remove it from memory as soon as possible.

" + "documentation":"

The data encryption key. When you use the HTTP API or the AWS CLI, the value is Base64-encoded. Otherwise, it is not encoded. Use this data key for local encryption and decryption, then remove it from memory as soon as possible.

" }, "KeyId":{ "shape":"KeyIdType", @@ -1061,7 +1063,7 @@ "members":{ "KeyId":{ "shape":"KeyIdType", - "documentation":"

The identifier of the CMK under which to generate and encrypt the data encryption key.

A valid identifier is the unique key ID or the Amazon Resource Name (ARN) of the CMK, or the alias name or ARN of an alias that refers to the CMK. Examples:

" + "documentation":"

The identifier of the customer master key (CMK) under which to generate and encrypt the data encryption key.

To specify a CMK, use its key ID, Amazon Resource Name (ARN), alias name, or alias ARN. When using an alias name, prefix it with \"alias/\". To specify a CMK in a different AWS account, you must use the key ARN or alias ARN.

For example:

To get the key ID and key ARN for a CMK, use ListKeys or DescribeKey. To get the alias name and alias ARN, use ListAliases.

" }, "EncryptionContext":{ "shape":"EncryptionContextType", @@ -1086,7 +1088,7 @@ "members":{ "CiphertextBlob":{ "shape":"CiphertextType", - "documentation":"

The encrypted data encryption key.

" + "documentation":"

The encrypted data encryption key. When you use the HTTP API or the AWS CLI, the value is Base64-encoded. Otherwise, it is not encoded.

" }, "KeyId":{ "shape":"KeyIdType", @@ -1108,7 +1110,7 @@ "members":{ "Plaintext":{ "shape":"PlaintextType", - "documentation":"

The random byte string.

" + "documentation":"

The random byte string. When you use the HTTP API or the AWS CLI, the value is Base64-encoded. Otherwise, it is not encoded.

" } } }, @@ -1121,11 +1123,11 @@ "members":{ "KeyId":{ "shape":"KeyIdType", - "documentation":"

A unique identifier for the customer master key. This value can be a globally unique identifier or the fully specified ARN to a key.

" + "documentation":"

A unique identifier for the customer master key (CMK).

Specify the key ID or the Amazon Resource Name (ARN) of the CMK.

For example:

To get the key ID and key ARN for a CMK, use ListKeys or DescribeKey.

" }, "PolicyName":{ "shape":"PolicyNameType", - "documentation":"

String that contains the name of the policy. Currently, this must be \"default\". Policy names can be discovered by calling ListKeyPolicies.

" + "documentation":"

Specifies the name of the policy. The only valid name is default. To get the names of key policies, use ListKeyPolicies.

" } } }, @@ -1144,7 +1146,7 @@ "members":{ "KeyId":{ "shape":"KeyIdType", - "documentation":"

A unique identifier for the customer master key. This value can be a globally unique identifier or the fully specified ARN to a key.

" + "documentation":"

A unique identifier for the customer master key (CMK).

Specify the key ID or the Amazon Resource Name (ARN) of the CMK. To specify a CMK in a different AWS account, you must use the key ARN.

For example:

To get the key ID and key ARN for a CMK, use ListKeys or DescribeKey.

" } } }, @@ -1167,7 +1169,7 @@ "members":{ "KeyId":{ "shape":"KeyIdType", - "documentation":"

The identifier of the CMK into which you will import key material. The CMK's Origin must be EXTERNAL.

A valid identifier is the unique key ID or the Amazon Resource Name (ARN) of the CMK. Examples:

" + "documentation":"

The identifier of the CMK into which you will import key material. The CMK's Origin must be EXTERNAL.

Specify the key ID or the Amazon Resource Name (ARN) of the CMK.

For example:

To get the key ID and key ARN for a CMK, use ListKeys or DescribeKey.

" }, "WrappingAlgorithm":{ "shape":"AlgorithmSpec", @@ -1196,7 +1198,7 @@ }, "ParametersValidTo":{ "shape":"DateType", - "documentation":"

The time at which the import token and public key are no longer valid. After this time, you cannot use them to make an ImportKeyMaterial request and you must send another GetParametersForImport request to retrieve new ones.

" + "documentation":"

The time at which the import token and public key are no longer valid. After this time, you cannot use them to make an ImportKeyMaterial request and you must send another GetParametersForImport request to get new ones.

" } } }, @@ -1310,7 +1312,7 @@ "members":{ "KeyId":{ "shape":"KeyIdType", - "documentation":"

The identifier of the CMK to import the key material into. The CMK's Origin must be EXTERNAL.

A valid identifier is the unique key ID or the Amazon Resource Name (ARN) of the CMK. Examples:

" + "documentation":"

The identifier of the CMK to import the key material into. The CMK's Origin must be EXTERNAL.

Specify the key ID or the Amazon Resource Name (ARN) of the CMK.

For example:

To get the key ID and key ARN for a CMK, use ListKeys or DescribeKey.

" }, "ImportToken":{ "shape":"CiphertextType", @@ -1364,7 +1366,7 @@ "members":{ "message":{"shape":"ErrorMessageType"} }, - "documentation":"

The request was rejected because the specified ciphertext has been corrupted or is otherwise invalid.

", + "documentation":"

The request was rejected because the specified ciphertext, or additional authenticated data incorporated into the ciphertext, such as the encryption context, is corrupted, missing, or otherwise invalid.

", "exception":true }, "InvalidGrantIdException":{ @@ -1446,6 +1448,13 @@ }, "documentation":"

Contains information about each entry in the key list.

" }, + "KeyManagerType":{ + "type":"string", + "enum":[ + "AWS", + "CUSTOMER" + ] + }, "KeyMetadata":{ "type":"structure", "required":["KeyId"], @@ -1497,6 +1506,10 @@ "ExpirationModel":{ "shape":"ExpirationModelType", "documentation":"

Specifies whether the CMK's key material expires. This value is present only when Origin is EXTERNAL, otherwise this value is omitted.

" + }, + "KeyManager":{ + "shape":"KeyManagerType", + "documentation":"

The CMK's manager. CMKs are either customer-managed or AWS-managed. For more information about the difference, see Customer Master Keys in the AWS Key Management Service Developer Guide.

" } }, "documentation":"

Contains metadata about a customer master key (CMK).

This data type is used as a response element for the CreateKey and DescribeKey operations.

" @@ -1554,7 +1567,7 @@ "members":{ "Aliases":{ "shape":"AliasList", - "documentation":"

A list of key aliases in the user's account.

" + "documentation":"

A list of aliases.

" }, "NextMarker":{ "shape":"MarkerType", @@ -1562,7 +1575,7 @@ }, "Truncated":{ "shape":"BooleanType", - "documentation":"

A flag that indicates whether there are more items in the list. When this value is true, the list in this response is truncated. To retrieve more items, pass the value of the NextMarker element in this response to the Marker parameter in a subsequent request.

" + "documentation":"

A flag that indicates whether there are more items in the list. When this value is true, the list in this response is truncated. To get more items, pass the value of the NextMarker element in this response to the Marker parameter in a subsequent request.

" } } }, @@ -1580,7 +1593,7 @@ }, "KeyId":{ "shape":"KeyIdType", - "documentation":"

A unique identifier for the customer master key. This value can be a globally unique identifier or the fully specified ARN to a key.

" + "documentation":"

A unique identifier for the customer master key (CMK).

Specify the key ID or the Amazon Resource Name (ARN) of the CMK. To specify a CMK in a different AWS account, you must use the key ARN.

For example:

To get the key ID and key ARN for a CMK, use ListKeys or DescribeKey.

" } } }, @@ -1597,7 +1610,7 @@ }, "Truncated":{ "shape":"BooleanType", - "documentation":"

A flag that indicates whether there are more items in the list. When this value is true, the list in this response is truncated. To retrieve more items, pass the value of the NextMarker element in this response to the Marker parameter in a subsequent request.

" + "documentation":"

A flag that indicates whether there are more items in the list. When this value is true, the list in this response is truncated. To get more items, pass the value of the NextMarker element in this response to the Marker parameter in a subsequent request.

" } } }, @@ -1607,7 +1620,7 @@ "members":{ "KeyId":{ "shape":"KeyIdType", - "documentation":"

A unique identifier for the customer master key (CMK). You can use the unique key ID or the Amazon Resource Name (ARN) of the CMK. Examples:

" + "documentation":"

A unique identifier for the customer master key (CMK).

Specify the key ID or the Amazon Resource Name (ARN) of the CMK.

For example:

To get the key ID and key ARN for a CMK, use ListKeys or DescribeKey.

" }, "Limit":{ "shape":"LimitType", @@ -1632,7 +1645,7 @@ }, "Truncated":{ "shape":"BooleanType", - "documentation":"

A flag that indicates whether there are more items in the list. When this value is true, the list in this response is truncated. To retrieve more items, pass the value of the NextMarker element in this response to the Marker parameter in a subsequent request.

" + "documentation":"

A flag that indicates whether there are more items in the list. When this value is true, the list in this response is truncated. To get more items, pass the value of the NextMarker element in this response to the Marker parameter in a subsequent request.

" } } }, @@ -1654,7 +1667,7 @@ "members":{ "Keys":{ "shape":"KeyList", - "documentation":"

A list of keys.

" + "documentation":"

A list of customer master keys (CMKs).

" }, "NextMarker":{ "shape":"MarkerType", @@ -1662,7 +1675,7 @@ }, "Truncated":{ "shape":"BooleanType", - "documentation":"

A flag that indicates whether there are more items in the list. When this value is true, the list in this response is truncated. To retrieve more items, pass the value of the NextMarker element in this response to the Marker parameter in a subsequent request.

" + "documentation":"

A flag that indicates whether there are more items in the list. When this value is true, the list in this response is truncated. To get more items, pass the value of the NextMarker element in this response to the Marker parameter in a subsequent request.

" } } }, @@ -1672,7 +1685,7 @@ "members":{ "KeyId":{ "shape":"KeyIdType", - "documentation":"

A unique identifier for the CMK whose tags you are listing. You can use the unique key ID or the Amazon Resource Name (ARN) of the CMK. Examples:

" + "documentation":"

A unique identifier for the customer master key (CMK).

Specify the key ID or the Amazon Resource Name (ARN) of the CMK.

For example:

To get the key ID and key ARN for a CMK, use ListKeys or DescribeKey.

" }, "Limit":{ "shape":"LimitType", @@ -1697,7 +1710,7 @@ }, "Truncated":{ "shape":"BooleanType", - "documentation":"

A flag that indicates whether there are more items in the list. When this value is true, the list in this response is truncated. To retrieve more items, pass the value of the NextMarker element in this response to the Marker parameter in a subsequent request.

" + "documentation":"

A flag that indicates whether there are more items in the list. When this value is true, the list in this response is truncated. To get more items, pass the value of the NextMarker element in this response to the Marker parameter in a subsequent request.

" } } }, @@ -1729,7 +1742,7 @@ }, "MarkerType":{ "type":"string", - "max":320, + "max":1024, "min":1, "pattern":"[\\u0020-\\u00FF]*" }, @@ -1795,15 +1808,15 @@ "members":{ "KeyId":{ "shape":"KeyIdType", - "documentation":"

A unique identifier for the CMK.

Use the CMK's unique identifier or its Amazon Resource Name (ARN). For example:

" + "documentation":"

A unique identifier for the customer master key (CMK).

Specify the key ID or the Amazon Resource Name (ARN) of the CMK.

For example:

To get the key ID and key ARN for a CMK, use ListKeys or DescribeKey.

" }, "PolicyName":{ "shape":"PolicyNameType", - "documentation":"

The name of the key policy.

This value must be default.

" + "documentation":"

The name of the key policy. The only valid value is default.

" }, "Policy":{ "shape":"PolicyType", - "documentation":"

The key policy to attach to the CMK.

If you do not set BypassPolicyLockoutSafetyCheck to true, the policy must meet the following criteria:

The policy size limit is 32 KiB (32768 bytes).

" + "documentation":"

The key policy to attach to the CMK.

If you do not set BypassPolicyLockoutSafetyCheck to true, the policy must meet the following criteria:

The policy size limit is 32 kilobytes (32768 bytes).

" }, "BypassPolicyLockoutSafetyCheck":{ "shape":"BooleanType", @@ -1828,7 +1841,7 @@ }, "DestinationKeyId":{ "shape":"KeyIdType", - "documentation":"

A unique identifier for the CMK to use to reencrypt the data. This value can be a globally unique identifier, a fully specified ARN to either an alias or a key, or an alias name prefixed by \"alias/\".

" + "documentation":"

A unique identifier for the CMK that is used to reencrypt the data.

To specify a CMK, use its key ID, Amazon Resource Name (ARN), alias name, or alias ARN. When using an alias name, prefix it with \"alias/\". To specify a CMK in a different AWS account, you must use the key ARN or alias ARN.

For example:

To get the key ID and key ARN for a CMK, use ListKeys or DescribeKey. To get the alias name and alias ARN, use ListAliases.

" }, "DestinationEncryptionContext":{ "shape":"EncryptionContextType", @@ -1845,7 +1858,7 @@ "members":{ "CiphertextBlob":{ "shape":"CiphertextType", - "documentation":"

The reencrypted data.

" + "documentation":"

The reencrypted data. When you use the HTTP API or the AWS CLI, the value is Base64-encoded. Otherwise, it is not encoded.

" }, "SourceKeyId":{ "shape":"KeyIdType", @@ -1866,7 +1879,7 @@ }, "KeyId":{ "shape":"KeyIdType", - "documentation":"

The Amazon Resource Name of the CMK associated with the grant. Example:

" + "documentation":"

The Amazon Resource Name (ARN) of the CMK associated with the grant.

For example: arn:aws:kms:us-east-2:444455556666:key/1234abcd-12ab-34cd-56ef-1234567890ab

" }, "GrantId":{ "shape":"GrantIdType", @@ -1883,7 +1896,7 @@ "members":{ "KeyId":{ "shape":"KeyIdType", - "documentation":"

A unique identifier for the customer master key associated with the grant. This value can be a globally unique identifier or the fully specified ARN to a key.

" + "documentation":"

A unique identifier for the customer master key associated with the grant.

Specify the key ID or the Amazon Resource Name (ARN) of the CMK. To specify a CMK in a different AWS account, you must use the key ARN.

For example:

To get the key ID and key ARN for a CMK, use ListKeys or DescribeKey.

" }, "GrantId":{ "shape":"GrantIdType", @@ -1897,7 +1910,7 @@ "members":{ "KeyId":{ "shape":"KeyIdType", - "documentation":"

The unique identifier for the customer master key (CMK) to delete.

To specify this value, use the unique key ID or the Amazon Resource Name (ARN) of the CMK. Examples:

To obtain the unique key ID and key ARN for a given CMK, use ListKeys or DescribeKey.

" + "documentation":"

The unique identifier of the customer master key (CMK) to delete.

Specify the key ID or the Amazon Resource Name (ARN) of the CMK.

For example:

To get the key ID and key ARN for a CMK, use ListKeys or DescribeKey.

" }, "PendingWindowInDays":{ "shape":"PendingWindowInDaysType", @@ -1934,7 +1947,7 @@ "documentation":"

The value of the tag.

" } }, - "documentation":"

A key-value pair. A tag consists of a tag key and a tag value. Tag keys and tag values are both required, but tag values can be empty (null) strings.

" + "documentation":"

A key-value pair. A tag consists of a tag key and a tag value. Tag keys and tag values are both required, but tag values can be empty (null) strings.

For information about the rules that apply to tag keys and tag values, see User-Defined Tag Restrictions in the AWS Billing and Cost Management User Guide.

" }, "TagException":{ "type":"structure", @@ -1966,7 +1979,7 @@ "members":{ "KeyId":{ "shape":"KeyIdType", - "documentation":"

A unique identifier for the CMK you are tagging. You can use the unique key ID or the Amazon Resource Name (ARN) of the CMK. Examples:

" + "documentation":"

A unique identifier for the CMK you are tagging.

Specify the key ID or the Amazon Resource Name (ARN) of the CMK.

For example:

To get the key ID and key ARN for a CMK, use ListKeys or DescribeKey.

" }, "Tags":{ "shape":"TagList", @@ -1996,7 +2009,7 @@ "members":{ "KeyId":{ "shape":"KeyIdType", - "documentation":"

A unique identifier for the CMK from which you are removing tags. You can use the unique key ID or the Amazon Resource Name (ARN) of the CMK. Examples:

" + "documentation":"

A unique identifier for the CMK from which you are removing tags.

Specify the key ID or the Amazon Resource Name (ARN) of the CMK.

For example:

To get the key ID and key ARN for a CMK, use ListKeys or DescribeKey.

" }, "TagKeys":{ "shape":"TagKeyList", @@ -2017,7 +2030,7 @@ }, "TargetKeyId":{ "shape":"KeyIdType", - "documentation":"

Unique identifier of the customer master key to be mapped to the alias. This value can be a globally unique identifier or the fully specified ARN of a key.

You can call ListAliases to verify that the alias is mapped to the correct TargetKeyId.

" + "documentation":"

Unique identifier of the customer master key to be mapped to the alias.

Specify the key ID or the Amazon Resource Name (ARN) of the CMK.

For example:

To get the key ID and key ARN for a CMK, use ListKeys or DescribeKey.

To verify that the alias is mapped to the correct CMK, use ListAliases.

" } } }, @@ -2030,7 +2043,7 @@ "members":{ "KeyId":{ "shape":"KeyIdType", - "documentation":"

A unique identifier for the CMK. This value can be a globally unique identifier or the fully specified ARN to a key.

" + "documentation":"

A unique identifier for the customer master key (CMK).

Specify the key ID or the Amazon Resource Name (ARN) of the CMK.

For example:

To get the key ID and key ARN for a CMK, use ListKeys or DescribeKey.

" }, "Description":{ "shape":"DescriptionType", diff --git a/services/lambda/src/main/resources/codegen-resources/service-2.json b/services/lambda/src/main/resources/codegen-resources/service-2.json index ba7793066617..1ecf85c5f89b 100644 --- a/services/lambda/src/main/resources/codegen-resources/service-2.json +++ b/services/lambda/src/main/resources/codegen-resources/service-2.json @@ -327,9 +327,10 @@ "output":{"shape":"ListFunctionsResponse"}, "errors":[ {"shape":"ServiceException"}, - {"shape":"TooManyRequestsException"} + {"shape":"TooManyRequestsException"}, + {"shape":"InvalidParameterValueException"} ], - "documentation":"

Returns a list of your Lambda functions. For each function, the response includes the function configuration information. You must use GetFunction to retrieve the code for your function.

This operation requires permission for the lambda:ListFunctions action.

If you are using versioning feature, the response returns list of $LATEST versions of your functions. For information about the versioning feature, see AWS Lambda Function Versioning and Aliases.

" + "documentation":"

Returns a list of your Lambda functions. For each function, the response includes the function configuration information. You must use GetFunction to retrieve the code for your function.

This operation requires permission for the lambda:ListFunctions action.

If you are using the versioning feature, you can list all of your functions or only $LATEST versions. For information about the versioning feature, see AWS Lambda Function Versioning and Aliases.

" }, "ListTags":{ "name":"ListTags", @@ -1057,11 +1058,11 @@ "type":"structure", "members":{ "FunctionName":{ - "shape":"FunctionName", + "shape":"NamespacedFunctionName", "documentation":"

The name of the function. Note that the length constraint applies only to the ARN. If you specify only the function name, it is limited to 64 characters in length.

" }, "FunctionArn":{ - "shape":"FunctionArn", + "shape":"NameSpacedFunctionArn", "documentation":"

The Amazon Resource Name (ARN) assigned to the function.

" }, "Runtime":{ @@ -1123,6 +1124,10 @@ "TracingConfig":{ "shape":"TracingConfigResponse", "documentation":"

The parent object that contains your function's tracing settings.

" + }, + "MasterArn":{ + "shape":"FunctionArn", + "documentation":"

Returns the ARN (Amazon Resource Name) of the master function.

" } }, "documentation":"

A complex type that describes function metadata.

" @@ -1137,6 +1142,10 @@ "min":1, "pattern":"(arn:aws:lambda:)?([a-z]{2}-[a-z]+-\\d{1}:)?(\\d{12}:)?(function:)?([a-zA-Z0-9-_]+)(:(\\$LATEST|[a-zA-Z0-9-_]+))?" }, + "FunctionVersion":{ + "type":"string", + "enum":["ALL"] + }, "GetAccountSettingsRequest":{ "type":"structure", "members":{ @@ -1188,7 +1197,7 @@ "required":["FunctionName"], "members":{ "FunctionName":{ - "shape":"FunctionName", + "shape":"NamespacedFunctionName", "documentation":"

The name of the Lambda function for which you want to retrieve the configuration information.

You can specify a function name (for example, Thumbnail) or you can specify Amazon Resource Name (ARN) of the function (for example, arn:aws:lambda:us-west-2:account-id:function:ThumbNail). AWS Lambda also allows you to specify a partial ARN (for example, account-id:Thumbnail). Note that the length constraint applies only to the ARN. If you specify only the function name, it is limited to 64 characters in length.

", "location":"uri", "locationName":"FunctionName" @@ -1207,7 +1216,7 @@ "required":["FunctionName"], "members":{ "FunctionName":{ - "shape":"FunctionName", + "shape":"NamespacedFunctionName", "documentation":"

The Lambda function name.

You can specify a function name (for example, Thumbnail) or you can specify Amazon Resource Name (ARN) of the function (for example, arn:aws:lambda:us-west-2:account-id:function:ThumbNail). AWS Lambda also allows you to specify a partial ARN (for example, account-id:Thumbnail). Note that the length constraint applies only to the ARN. If you specify only the function name, it is limited to 64 characters in length.

", "location":"uri", "locationName":"FunctionName" @@ -1238,7 +1247,7 @@ "required":["FunctionName"], "members":{ "FunctionName":{ - "shape":"FunctionName", + "shape":"NamespacedFunctionName", "documentation":"

Function name whose resource policy you want to retrieve.

You can specify the function name (for example, Thumbnail) or you can specify Amazon Resource Name (ARN) of the function (for example, arn:aws:lambda:us-west-2:account-id:function:ThumbNail). If you are using versioning, you can also provide a qualified function ARN (ARN that is qualified with function version or alias name as suffix). AWS Lambda also allows you to specify only the function name with the account ID qualifier (for example, account-id:Thumbnail). Note that the length constraint applies only to the ARN. If you specify only the function name, it is limited to 64 characters in length.

", "location":"uri", "locationName":"FunctionName" @@ -1346,7 +1355,7 @@ "required":["FunctionName"], "members":{ "FunctionName":{ - "shape":"FunctionName", + "shape":"NamespacedFunctionName", "documentation":"

The Lambda function name.

You can specify a function name (for example, Thumbnail) or you can specify Amazon Resource Name (ARN) of the function (for example, arn:aws:lambda:us-west-2:account-id:function:ThumbNail). AWS Lambda also allows you to specify a partial ARN (for example, account-id:Thumbnail). Note that the length constraint applies only to the ARN. If you specify only the function name, it is limited to 64 characters in length.

", "location":"uri", "locationName":"FunctionName" @@ -1427,7 +1436,7 @@ ], "members":{ "FunctionName":{ - "shape":"FunctionName", + "shape":"NamespacedFunctionName", "documentation":"

The Lambda function name. Note that the length constraint applies only to the ARN. If you specify only the function name, it is limited to 64 characters in length.

", "location":"uri", "locationName":"FunctionName" @@ -1587,6 +1596,18 @@ "ListFunctionsRequest":{ "type":"structure", "members":{ + "MasterRegion":{ + "shape":"MasterRegion", + "documentation":"

Optional string. If not specified, only regular (non-replicated) function versions are returned.

Valid values are:

The region from which the functions are replicated. For example, if you specify us-east-1, only functions replicated from that region will be returned.

ALL _ Returns all functions from any region. If specified, you must also specify a valid FunctionVersion parameter.

", + "location":"querystring", + "locationName":"MasterRegion" + }, + "FunctionVersion":{ + "shape":"FunctionVersion", + "documentation":"

Optional string. If not specified, only the unqualified function ARNs (Amazon Resource Names) will be returned.

Valid value:

ALL _ Returns all versions, including $LATEST, which will have fully qualified ARNs (Amazon Resource Names).
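For illustration only, a minimal sketch of calling ListFunctions with the new MasterRegion and FunctionVersion parameters, assuming the AWS SDK for Java 2.x client generated from this model; the region value and output handling are placeholders, not part of the service model.

    import software.amazon.awssdk.services.lambda.LambdaClient;
    import software.amazon.awssdk.services.lambda.model.ListFunctionsRequest;
    import software.amazon.awssdk.services.lambda.model.ListFunctionsResponse;

    public class ListReplicatedFunctions {
        public static void main(String[] args) {
            try (LambdaClient lambda = LambdaClient.create()) {
                // List versions of functions replicated from us-east-1; FunctionVersion ALL
                // also returns $LATEST and other versions with fully qualified ARNs.
                ListFunctionsRequest request = ListFunctionsRequest.builder()
                        .masterRegion("us-east-1")
                        .functionVersion("ALL")
                        .build();
                ListFunctionsResponse response = lambda.listFunctions(request);
                response.functions().forEach(fc -> System.out.println(fc.functionArn()));
            }
        }
    }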

", + "location":"querystring", + "locationName":"FunctionVersion" + }, "Marker":{ "shape":"String", "documentation":"

Optional string. An opaque pagination token returned from a previous ListFunctions operation. If present, indicates where to continue the listing.

", @@ -1642,7 +1663,7 @@ "required":["FunctionName"], "members":{ "FunctionName":{ - "shape":"FunctionName", + "shape":"NamespacedFunctionName", "documentation":"

The name of the function whose versions you want to list. You can specify a function name (for example, Thumbnail) or you can specify the Amazon Resource Name (ARN) of the function (for example, arn:aws:lambda:us-west-2:account-id:function:ThumbNail). AWS Lambda also allows you to specify a partial ARN (for example, account-id:Thumbnail). Note that the length constraint applies only to the ARN. If you specify only the function name, it is limited to 64 characters in length.

", "location":"uri", "locationName":"FunctionName" @@ -1684,6 +1705,10 @@ ] }, "Long":{"type":"long"}, + "MasterRegion":{ + "type":"string", + "pattern":"ALL|[a-z]{2}(-gov)?-[a-z]+-\\d{1}" + }, "MaxListItems":{ "type":"integer", "max":10000, @@ -1694,6 +1719,22 @@ "max":1536, "min":128 }, + "NameSpacedFunctionArn":{ + "type":"string", + "pattern":"arn:aws:lambda:[a-z]{2}-[a-z]+-\\d{1}:\\d{12}:function:[a-zA-Z0-9-_\\.]+(:(\\$LATEST|[a-zA-Z0-9-_]+))?" + }, + "NamespacedFunctionName":{ + "type":"string", + "max":170, + "min":1, + "pattern":"(arn:aws:lambda:)?([a-z]{2}-[a-z]+-\\d{1}:)?(\\d{12}:)?(function:)?([a-zA-Z0-9-_\\.]+)(:(\\$LATEST|[a-zA-Z0-9-_]+))?" + }, + "NamespacedStatementId":{ + "type":"string", + "max":100, + "min":1, + "pattern":"([a-zA-Z0-9-_.]+)" + }, "PolicyLengthExceededException":{ "type":"structure", "members":{ @@ -1749,7 +1790,7 @@ "locationName":"FunctionName" }, "StatementId":{ - "shape":"StatementId", + "shape":"NamespacedStatementId", "documentation":"

Statement ID of the permission to remove.

", "location":"uri", "locationName":"StatementId" @@ -2129,7 +2170,7 @@ }, "Runtime":{ "shape":"Runtime", - "documentation":"

The runtime environment for the Lambda function.

To use the Python runtime v3.6, set the value to \"python3.6\". To use the Python runtime v2.7, set the value to \"python2.7\". To use the Node.js runtime v6.10, set the value to \"nodejs6.10\". To use the Node.js runtime v4.3, set the value to \"nodejs4.3\". To use the Python runtime v3.6, set the value to \"python3.6\". To use the Python runtime v2.7, set the value to \"python2.7\".

Node v0.10.42 is currently marked as deprecated. You must migrate existing functions to the newer Node.js runtime versions available on AWS Lambda (nodejs4.3 or nodejs6.10) as soon as possible. You can request a one-time extension until June 30, 2017 by going to the Lambda console and following the instructions provided. Failure to do so will result in an invalid parameter value error being returned. Note that you will have to follow this procedure for each region that contains functions written in the Node v0.10.42 runtime.

" + "documentation":"

The runtime environment for the Lambda function.

To use the Python runtime v3.6, set the value to \"python3.6\". To use the Python runtime v2.7, set the value to \"python2.7\". To use the Node.js runtime v6.10, set the value to \"nodejs6.10\". To use the Node.js runtime v4.3, set the value to \"nodejs4.3\". To use the Python runtime v3.6, set the value to \"python3.6\".

Node v0.10.42 is currently marked as deprecated. You must migrate existing functions to the newer Node.js runtime versions available on AWS Lambda (nodejs4.3 or nodejs6.10) as soon as possible. You can request a one-time extension until June 30, 2017 by going to the Lambda console and following the instructions provided. Failure to do so will result in an invalid parameter error being returned. Note that you will have to follow this procedure for each region that contains functions written in the Node v0.10.42 runtime.
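As a hedged sketch (not part of the model), the Runtime value described above could be set when updating a function's configuration through the generated AWS SDK for Java 2.x client; the function name is a placeholder.

    import software.amazon.awssdk.services.lambda.LambdaClient;
    import software.amazon.awssdk.services.lambda.model.UpdateFunctionConfigurationRequest;

    public class MigrateRuntime {
        public static void main(String[] args) {
            try (LambdaClient lambda = LambdaClient.create()) {
                // Move a function to the nodejs6.10 runtime, using the string form
                // of the Runtime value described above.
                lambda.updateFunctionConfiguration(UpdateFunctionConfigurationRequest.builder()
                        .functionName("my-function")   // placeholder function name
                        .runtime("nodejs6.10")
                        .build());
            }
        }
    }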

" }, "DeadLetterConfig":{ "shape":"DeadLetterConfig", diff --git a/services/lex/src/main/resources/codegen-resources/runtime/service-2.json b/services/lex/src/main/resources/codegen-resources/runtime/service-2.json index d211b9f8d9f1..ef4884e71975 100644 --- a/services/lex/src/main/resources/codegen-resources/runtime/service-2.json +++ b/services/lex/src/main/resources/codegen-resources/runtime/service-2.json @@ -32,7 +32,7 @@ {"shape":"BadGatewayException"}, {"shape":"LoopDetectedException"} ], - "documentation":"

Sends user input (text or speech) to Amazon Lex. Clients use this API to send requests to Amazon Lex at runtime. Amazon Lex interprets the user input using the machine learning model that it built for the bot.

In response, Amazon Lex returns the next message to convey to the user. Consider the following example messages:

Not all Amazon Lex messages require a response from the user. For example, conclusion statements do not require a response. Some messages require only a yes or no response. In addition to the message, Amazon Lex provides additional context about the message in the response that you can use to enhance client behavior, such as displaying the appropriate client user interface. Consider the following examples:

In addition, Amazon Lex also returns your application-specific sessionAttributes. For more information, see Managing Conversation Context.

", + "documentation":"

Sends user input (text or speech) to Amazon Lex. Clients use this API to send text and audio requests to Amazon Lex at runtime. Amazon Lex interprets the user input using the machine learning model that it built for the bot.

The PostContent operation supports audio input at 8kHz and 16kHz. You can use 8kHz audio to achieve higher speech recognition accuracy in telephone audio applications.

In response, Amazon Lex returns the next message to convey to the user. Consider the following example messages:

Not all Amazon Lex messages require a response from the user. For example, conclusion statements do not require a response. Some messages require only a yes or no response. In addition to the message, Amazon Lex provides additional context about the message in the response that you can use to enhance client behavior, such as displaying the appropriate client user interface. Consider the following examples:

In addition, Amazon Lex also returns your application-specific sessionAttributes. For more information, see Managing Conversation Context.

", "authtype":"v4-unsigned-body" }, "PostText":{ @@ -58,6 +58,10 @@ }, "shapes":{ "Accept":{"type":"string"}, + "AttributesString":{ + "type":"string", + "sensitive":true + }, "BadGatewayException":{ "type":"structure", "members":{ @@ -72,7 +76,7 @@ "members":{ "message":{"shape":"String"} }, - "documentation":"

Request validation failed, there is no usable message in the context, or the bot build failed.

", + "documentation":"

Request validation failed, there is no usable message in the context, or the bot build failed, is still in progress, or contains unbuilt changes.

", "error":{"httpStatusCode":400}, "exception":true }, @@ -128,7 +132,7 @@ "members":{ "Message":{"shape":"ErrorMessage"} }, - "documentation":"

One of the downstream dependencies, such as AWS Lambda or Amazon Polly, threw an exception. For example, if Amazon Lex does not have sufficient permissions to call a Lambda function, it results in Lambda throwing an exception.

", + "documentation":"

One of the dependencies, such as AWS Lambda or Amazon Polly, threw an exception. For example,

", "error":{"httpStatusCode":424}, "exception":true }, @@ -201,7 +205,7 @@ "members":{ "Message":{"shape":"ErrorMessage"} }, - "documentation":"

Lambda fulfilment function returned DelegateDialogAction to Amazon Lex without changing any slot values.

", + "documentation":"

This exception is not used.

", "error":{"httpStatusCode":508}, "exception":true }, @@ -247,20 +251,27 @@ }, "userId":{ "shape":"UserId", - "documentation":"

ID of the client application user. Typically, each of your application users should have a unique ID. The application developer decides the user IDs. At runtime, each request must include the user ID. Note the following considerations:

", + "documentation":"

The ID of the client application user. Amazon Lex uses this to identify a user's conversation with your bot. At runtime, each request must contain the userId field.

To decide the user ID to use for your application, consider the following factors.

", "location":"uri", "locationName":"userId" }, "sessionAttributes":{ - "shape":"String", - "documentation":"

You pass this value in the x-amz-lex-session-attributes HTTP header. The value must be map (keys and values must be strings) that is JSON serialized and then base64 encoded.

A session represents dialog between a user and Amazon Lex. At runtime, a client application can pass contextual information, in the request to Amazon Lex. For example,

Amazon Lex passes these session attributes to the Lambda functions configured for the intent In the your Lambda function, you can use the session attributes for initialization and customization (prompts). Some examples are:

Amazon Lex does not persist session attributes.

If you configured a code hook for the intent, Amazon Lex passes the incoming session attributes to the Lambda function. The Lambda function must return these session attributes if you want Amazon Lex to return them to the client.

If there is no code hook configured for the intent Amazon Lex simply returns the session attributes to the client application.

", + "shape":"AttributesString", + "documentation":"

You pass this value as the x-amz-lex-session-attributes HTTP header.

Application-specific information passed between Amazon Lex and a client application. The value must be a JSON serialized and base64 encoded map with string keys and values. The total size of the sessionAttributes and requestAttributes headers is limited to 12 KB.

For more information, see Setting Session Attributes.

", "jsonvalue":true, "location":"header", "locationName":"x-amz-lex-session-attributes" }, + "requestAttributes":{ + "shape":"AttributesString", + "documentation":"

You pass this value as the x-amz-lex-request-attributes HTTP header.

Request-specific information passed between Amazon Lex and a client application. The value must be a JSON serialized and base64 encoded map with string keys and values. The total size of the requestAttributes and sessionAttributes headers is limited to 12 KB.

The namespace x-amz-lex: is reserved for special attributes. Don't create any request attributes with the prefix x-amz-lex:.

For more information, see Setting Request Attributes.
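A minimal, SDK-independent sketch of how a client might prepare the x-amz-lex-session-attributes or x-amz-lex-request-attributes header value described above: serialize the map to JSON, then base64 encode it. The attribute names and the hand-written JSON literal are illustrative; a real client would use a JSON library.

    import java.nio.charset.StandardCharsets;
    import java.util.Base64;

    public class LexAttributeHeaders {
        public static void main(String[] args) {
            // Illustrative attribute map, already serialized to JSON.
            String attributesJson = "{\"userType\":\"returning\",\"city\":\"Seattle\"}";
            String headerValue = Base64.getEncoder()
                    .encodeToString(attributesJson.getBytes(StandardCharsets.UTF_8));
            System.out.println("x-amz-lex-session-attributes: " + headerValue);
        }
    }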

", + "jsonvalue":true, + "location":"header", + "locationName":"x-amz-lex-request-attributes" + }, "contentType":{ "shape":"HttpContentType", - "documentation":"

You pass this values as the Content-Type HTTP header.

Indicates the audio format or text. The header value must start with one of the following prefixes:

", + "documentation":"

You pass this value as the Content-Type HTTP header.

Indicates the audio format or text. The header value must start with one of the following prefixes:

", "location":"header", "locationName":"Content-Type" }, @@ -272,7 +283,7 @@ }, "inputStream":{ "shape":"BlobStream", - "documentation":"

User input in PCM or Opus audio format or text format as described in the Content-Type HTTP header.

" + "documentation":"

User input in PCM or Opus audio format or text format as described in the Content-Type HTTP header.

You can stream audio data to Amazon Lex or you can create a local buffer that captures all of the audio data before sending. In general, you get better performance if you stream audio data rather than buffering the data locally.

" } }, "payload":"inputStream" @@ -294,7 +305,7 @@ }, "slots":{ "shape":"String", - "documentation":"

Map of zero or more intent slots (name/value pairs) Amazon Lex detected from the user input during the conversation.

", + "documentation":"

Map of zero or more intent slots (name/value pairs) Amazon Lex detected from the user input during the conversation.

Amazon Lex creates a resolution list containing likely values for a slot. The value that it returns is determined by the valueSelectionStrategy selected when the slot type was created or updated. If valueSelectionStrategy is set to ORIGINAL_VALUE, the value provided by the user is returned if the user value is similar to the slot values. If valueSelectionStrategy is set to TOP_RESOLUTION, Amazon Lex returns the first value in the resolution list or, if there is no resolution list, null. If you don't specify a valueSelectionStrategy, the default is ORIGINAL_VALUE.

", "jsonvalue":true, "location":"header", "locationName":"x-amz-lex-slots" @@ -314,7 +325,7 @@ }, "dialogState":{ "shape":"DialogState", - "documentation":"

Identifies the current state of the user interaction. Amazon Lex returns one of the following values as dialogState. The client can optionally use this information to customize the user interface.

", + "documentation":"

Identifies the current state of the user interaction. Amazon Lex returns one of the following values as dialogState. The client can optionally use this information to customize the user interface.

", "location":"header", "locationName":"x-amz-lex-dialog-state" }, @@ -326,7 +337,7 @@ }, "inputTranscript":{ "shape":"String", - "documentation":"

Transcript of the voice input to the operation.

", + "documentation":"

The text used to process the request.

If the input was an audio stream, the inputTranscript field contains the text extracted from the audio stream. This is the text that is actually processed to recognize intents and slot values. You can use this information to determine if Amazon Lex is correctly processing the audio that you send.

", "location":"header", "locationName":"x-amz-lex-input-transcript" }, @@ -360,13 +371,17 @@ }, "userId":{ "shape":"UserId", - "documentation":"

The ID of the client application user. The application developer decides the user IDs. At runtime, each request must include the user ID. Typically, each of your application users should have a unique ID. Note the following considerations:

", + "documentation":"

The ID of the client application user. Amazon Lex uses this to identify a user's conversation with your bot. At runtime, each request must contain the userId field.

To decide the user ID to use for your application, consider the following factors.

", "location":"uri", "locationName":"userId" }, "sessionAttributes":{ "shape":"StringMap", - "documentation":"

By using session attributes, a client application can pass contextual information in the request to Amazon Lex For example,

Amazon Lex simply passes these session attributes to the Lambda functions configured for the intent.

In your Lambda function, you can also use the session attributes for initialization and customization (prompts and response cards). Some examples are:

Amazon Lex does not persist session attributes.

If you configure a code hook for the intent, Amazon Lex passes the incoming session attributes to the Lambda function. If you want Amazon Lex to return these session attributes back to the client, the Lambda function must return them.

If there is no code hook configured for the intent, Amazon Lex simply returns the session attributes back to the client application.

" + "documentation":"

Application-specific information passed between Amazon Lex and a client application.

For more information, see Setting Session Attributes.

" + }, + "requestAttributes":{ + "shape":"StringMap", + "documentation":"

Request-specific information passed between Amazon Lex and a client application.

The namespace x-amz-lex: is reserved for special attributes. Don't create any request attributes with the prefix x-amz-lex:.

For more information, see Setting Request Attributes.
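For PostText, the sessionAttributes and requestAttributes members are plain maps in the request body rather than headers. A hedged sketch with the AWS SDK for Java 2.x client generated from this model; the bot name, alias, user ID, and attribute values are placeholders.

    import java.util.Map;
    import software.amazon.awssdk.services.lexruntime.LexRuntimeClient;
    import software.amazon.awssdk.services.lexruntime.model.PostTextRequest;
    import software.amazon.awssdk.services.lexruntime.model.PostTextResponse;

    public class SendTextToBot {
        public static void main(String[] args) {
            try (LexRuntimeClient lex = LexRuntimeClient.create()) {
                PostTextResponse response = lex.postText(PostTextRequest.builder()
                        .botName("OrderFlowers")      // placeholder bot
                        .botAlias("prod")             // placeholder alias
                        .userId("user-1234")          // stable per-user identifier
                        .inputText("I would like to order flowers")
                        .sessionAttributes(Map.of("city", "Seattle"))
                        .requestAttributes(Map.of("channel", "web"))
                        .build());
                System.out.println(response.dialogState() + ": " + response.message());
            }
        }
    }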

" }, "inputText":{ "shape":"Text", @@ -383,7 +398,7 @@ }, "slots":{ "shape":"StringMap", - "documentation":"

The intent slots (name/value pairs) that Amazon Lex detected so far from the user input in the conversation.

" + "documentation":"

The intent slots that Amazon Lex detected from the user input in the conversation.

Amazon Lex creates a resolution list containing likely values for a slot. The value that it returns is determined by the valueSelectionStrategy selected when the slot type was created or updated. If valueSelectionStrategy is set to ORIGINAL_VALUE, the value provided by the user is returned, if the user value is similar to the slot values. If valueSelectionStrategy is set to TOP_RESOLUTION Amazon Lex returns the first value in the resolution list or, if there is no resolution list, null. If you don't specify a valueSelectionStrategy, the default is ORIGINAL_VALUE.

" }, "sessionAttributes":{ "shape":"StringMap", @@ -395,7 +410,7 @@ }, "dialogState":{ "shape":"DialogState", - "documentation":"

Identifies the current state of the user interaction. Amazon Lex returns one of the following values as dialogState. The client can optionally use this information to customize the user interface.

" + "documentation":"

Identifies the current state of the user interaction. Amazon Lex returns one of the following values as dialogState. The client can optionally use this information to customize the user interface.

" }, "slotToElicit":{ "shape":"String", @@ -438,7 +453,8 @@ "StringMap":{ "type":"map", "key":{"shape":"String"}, - "value":{"shape":"String"} + "value":{"shape":"String"}, + "sensitive":true }, "StringUrlWithLength":{ "type":"string", @@ -453,7 +469,8 @@ "Text":{ "type":"string", "max":1024, - "min":1 + "min":1, + "sensitive":true }, "UnsupportedMediaTypeException":{ "type":"structure", diff --git a/services/lexmodelbuilding/src/main/resources/codegen-resources/service-2.json b/services/lexmodelbuilding/src/main/resources/codegen-resources/service-2.json index d389475f58e1..b82cf40a4279 100644 --- a/services/lexmodelbuilding/src/main/resources/codegen-resources/service-2.json +++ b/services/lexmodelbuilding/src/main/resources/codegen-resources/service-2.json @@ -84,7 +84,7 @@ {"shape":"BadRequestException"}, {"shape":"ResourceInUseException"} ], - "documentation":"

Deletes all versions of the bot, including the $LATEST version. To delete a specific version of the bot, use the operation.

If a bot has an alias, you can't delete it. Instead, the DeleteBot operation returns a ResourceInUseException exception that includes a reference to the alias that refers to the bot. To remove the reference to the bot, delete the alias. If you get the same exception again, delete the referring alias until the DeleteBot operation is successful.

This operation requires permissions for the lex:DeleteBot action.

" + "documentation":"

Deletes all versions of the bot, including the $LATEST version. To delete a specific version of the bot, use the DeleteBotVersion operation.

If a bot has an alias, you can't delete it. Instead, the DeleteBot operation returns a ResourceInUseException exception that includes a reference to the alias that refers to the bot. To remove the reference to the bot, delete the alias. If you get the same exception again, delete the referring alias until the DeleteBot operation is successful.

This operation requires permissions for the lex:DeleteBot action.

" }, "DeleteBotAlias":{ "name":"DeleteBotAlias", @@ -137,7 +137,7 @@ {"shape":"BadRequestException"}, {"shape":"ResourceInUseException"} ], - "documentation":"

Deletes a specific version of a bot. To delete all versions of a bot, use the operation.

This operation requires permissions for the lex:DeleteBotVersion action.

" + "documentation":"

Deletes a specific version of a bot. To delete all versions of a bot, use the DeleteBot operation.

This operation requires permissions for the lex:DeleteBotVersion action.

" }, "DeleteIntent":{ "name":"DeleteIntent", @@ -155,7 +155,7 @@ {"shape":"BadRequestException"}, {"shape":"ResourceInUseException"} ], - "documentation":"

Deletes all versions of the intent, including the $LATEST version. To delete a specific version of the intent, use the operation.

You can delete a version of an intent only if it is not referenced. To delete an intent that is referred to in one or more bots (see how-it-works), you must remove those references first.

If you get the ResourceInUseException exception, it provides an example reference that shows where the intent is referenced. To remove the reference to the intent, either update the bot or delete it. If you get the same exception when you attempt to delete the intent again, repeat until the intent has no references and the call to DeleteIntent is successful.

This operation requires permission for the lex:DeleteIntent action.

" + "documentation":"

Deletes all versions of the intent, including the $LATEST version. To delete a specific version of the intent, use the DeleteIntentVersion operation.

You can delete a version of an intent only if it is not referenced. To delete an intent that is referred to in one or more bots (see how-it-works), you must remove those references first.

If you get the ResourceInUseException exception, it provides an example reference that shows where the intent is referenced. To remove the reference to the intent, either update the bot or delete it. If you get the same exception when you attempt to delete the intent again, repeat until the intent has no references and the call to DeleteIntent is successful.

This operation requires permission for the lex:DeleteIntent action.

" }, "DeleteIntentVersion":{ "name":"DeleteIntentVersion", @@ -173,7 +173,7 @@ {"shape":"BadRequestException"}, {"shape":"ResourceInUseException"} ], - "documentation":"

Deletes a specific version of an intent. To delete all versions of a intent, use the operation.

This operation requires permissions for the lex:DeleteIntentVersion action.

" + "documentation":"

Deletes a specific version of an intent. To delete all versions of an intent, use the DeleteIntent operation.

This operation requires permissions for the lex:DeleteIntentVersion action.

" }, "DeleteSlotType":{ "name":"DeleteSlotType", @@ -191,7 +191,7 @@ {"shape":"BadRequestException"}, {"shape":"ResourceInUseException"} ], - "documentation":"

Deletes all versions of the slot type, including the $LATEST version. To delete a specific version of the slot type, use the operation.

You can delete a version of a slot type only if it is not referenced. To delete a slot type that is referred to in one or more intents, you must remove those references first.

If you get the ResourceInUseException exception, the exception provides an example reference that shows the intent where the slot type is referenced. To remove the reference to the slot type, either update the intent or delete it. If you get the same exception when you attempt to delete the slot type again, repeat until the slot type has no references and the DeleteSlotType call is successful.

This operation requires permission for the lex:DeleteSlotType action.

" + "documentation":"

Deletes all versions of the slot type, including the $LATEST version. To delete a specific version of the slot type, use the DeleteSlotTypeVersion operation.

You can delete a version of a slot type only if it is not referenced. To delete a slot type that is referred to in one or more intents, you must remove those references first.

If you get the ResourceInUseException exception, the exception provides an example reference that shows the intent where the slot type is referenced. To remove the reference to the slot type, either update the intent or delete it. If you get the same exception when you attempt to delete the slot type again, repeat until the slot type has no references and the DeleteSlotType call is successful.

This operation requires permission for the lex:DeleteSlotType action.

" }, "DeleteSlotTypeVersion":{ "name":"DeleteSlotTypeVersion", @@ -209,7 +209,7 @@ {"shape":"BadRequestException"}, {"shape":"ResourceInUseException"} ], - "documentation":"

Deletes a specific version of a slot type. To delete all versions of a slot type, use the operation.

This operation requires permissions for the lex:DeleteSlotTypeVersion action.

" + "documentation":"

Deletes a specific version of a slot type. To delete all versions of a slot type, use the DeleteSlotType operation.

This operation requires permissions for the lex:DeleteSlotTypeVersion action.

" }, "DeleteUtterances":{ "name":"DeleteUtterances", @@ -225,7 +225,7 @@ {"shape":"InternalFailureException"}, {"shape":"BadRequestException"} ], - "documentation":"

Deletes stored utterances.

Amazon Lex stores the utterances that users send to your bot unless the childDirected field in the bot is set to true. Utterances are stored for 15 days for use with the operation, and then stored indefinately for use in improving the ability of your bot to respond to user input.

Use the DeleteStoredUtterances operation to manually delete stored utterances for a specific user.

This operation requires permissions for the lex:DeleteUtterances action.

" + "documentation":"

Deletes stored utterances.

Amazon Lex stores the utterances that users send to your bot unless the childDirected field in the bot is set to true. Utterances are stored for 15 days for use with the GetUtterancesView operation, and then stored indefinitely for use in improving the ability of your bot to respond to user input.

Use the DeleteUtterances operation to manually delete stored utterances for a specific user.

This operation requires permissions for the lex:DeleteUtterances action.

" }, "GetBot":{ "name":"GetBot", @@ -242,7 +242,7 @@ {"shape":"InternalFailureException"}, {"shape":"BadRequestException"} ], - "documentation":"

Returns metadata information for a specific bot. You must provide the bot name and the bot version or alias.

The GetBot operation requires permissions for the lex:GetBot action.

" + "documentation":"

Returns metadata information for a specific bot. You must provide the bot name and the bot version or alias.

This operation requires permissions for the lex:GetBot action.

" }, "GetBotAlias":{ "name":"GetBotAlias", @@ -393,6 +393,23 @@ ], "documentation":"

Gets a list of built-in slot types that meet the specified criteria.

For a list of built-in slot types, see Slot Type Reference in the Alexa Skills Kit.

This operation requires permission for the lex:GetBuiltInSlotTypes action.

" }, + "GetExport":{ + "name":"GetExport", + "http":{ + "method":"GET", + "requestUri":"/exports/", + "responseCode":200 + }, + "input":{"shape":"GetExportRequest"}, + "output":{"shape":"GetExportResponse"}, + "errors":[ + {"shape":"NotFoundException"}, + {"shape":"LimitExceededException"}, + {"shape":"InternalFailureException"}, + {"shape":"BadRequestException"} + ], + "documentation":"

Exports the contents of an Amazon Lex resource in a specified format.

" + }, "GetIntent":{ "name":"GetIntent", "http":{ @@ -509,7 +526,7 @@ {"shape":"InternalFailureException"}, {"shape":"BadRequestException"} ], - "documentation":"

Use the GetUtterancesView operation to get information about the utterances that your users have made to your bot. You can use this list to tune the utterances that your bot responds to.

For example, say that you have created a bot to order flowers. After your users have used your bot for a while, use the GetUtterancesView operation to see the requests that they have made and whether they have been successful. You might find that the utterance \"I want flowers\" is not being recognized. You could add this utterance to the OrderFlowers intent so that your bot recognizes that utterance.

After you publish a new version of a bot, you can get information about the old version and the new so that you can compare the performance across the two versions.

Data is available for the last 15 days. You can request information for up to 5 versions in each request. The response contains information about a maximum of 100 utterances for each version.

If the bot's childDirected field is set to true, utterances for the bot are not stored and cannot be retrieved with the GetUtterancesView operation. For more information, see .

This operation requires permissions for the lex:GetUtterancesView action.

" + "documentation":"

Use the GetUtterancesView operation to get information about the utterances that your users have made to your bot. You can use this list to tune the utterances that your bot responds to.

For example, say that you have created a bot to order flowers. After your users have used your bot for a while, use the GetUtterancesView operation to see the requests that they have made and whether they have been successful. You might find that the utterance \"I want flowers\" is not being recognized. You could add this utterance to the OrderFlowers intent so that your bot recognizes that utterance.

After you publish a new version of a bot, you can get information about the old version and the new so that you can compare the performance across the two versions.

Data is available for the last 15 days. You can request information for up to 5 versions in each request. The response contains information about a maximum of 100 utterances for each version.

If the bot's childDirected field is set to true, utterances for the bot are not stored and cannot be retrieved with the GetUtterancesView operation. For more information, see PutBot.

This operation requires permissions for the lex:GetUtterancesView action.

" }, "PutBot":{ "name":"PutBot", @@ -527,7 +544,7 @@ {"shape":"BadRequestException"}, {"shape":"PreconditionFailedException"} ], - "documentation":"

Creates an Amazon Lex conversational bot or replaces an existing bot. When you create or update a bot you only required to specify a name. You can use this to add intents later, or to remove intents from an existing bot. When you create a bot with a name only, the bot is created or updated but Amazon Lex returns the response FAILED. You can build the bot after you add one or more intents. For more information about Amazon Lex bots, see how-it-works.

If you specify the name of an existing bot, the fields in the request replace the existing values in the $LATEST version of the bot. Amazon Lex removes any fields that you don't provide values for in the request, except for the idleTTLInSeconds and privacySettings fields, which are set to their default values. If you don't specify values for required fields, Amazon Lex throws an exception.

This operation requires permissions for the lex:PutBot action. For more information, see auth-and-access-control.

" + "documentation":"

Creates an Amazon Lex conversational bot or replaces an existing bot. When you create or update a bot you are only required to specify a name. You can use this to add intents later, or to remove intents from an existing bot. When you create a bot with a name only, the bot is created or updated but Amazon Lex returns the response FAILED. You can build the bot after you add one or more intents. For more information about Amazon Lex bots, see how-it-works.

If you specify the name of an existing bot, the fields in the request replace the existing values in the $LATEST version of the bot. Amazon Lex removes any fields that you don't provide values for in the request, except for the idleTTLInSeconds and privacySettings fields, which are set to their default values. If you don't specify values for required fields, Amazon Lex throws an exception.

This operation requires permissions for the lex:PutBot action. For more information, see auth-and-access-control.

" }, "PutBotAlias":{ "name":"PutBotAlias", @@ -792,7 +809,8 @@ "key":{"shape":"String"}, "value":{"shape":"String"}, "max":10, - "min":1 + "min":1, + "sensitive":true }, "ChannelType":{ "type":"string", @@ -871,15 +889,15 @@ }, "intents":{ "shape":"IntentList", - "documentation":"

An array of Intent objects. For more information, see .

" + "documentation":"

An array of Intent objects. For more information, see PutBot.

" }, "clarificationPrompt":{ "shape":"Prompt", - "documentation":"

The message that Amazon Lex uses when it doesn't understand the user's request. For more information, see .

" + "documentation":"

The message that Amazon Lex uses when it doesn't understand the user's request. For more information, see PutBot.

" }, "abortStatement":{ "shape":"Statement", - "documentation":"

The message that Amazon Lex uses to abort a conversation. For more information, see .

" + "documentation":"

The message that Amazon Lex uses to abort a conversation. For more information, see PutBot.

" }, "status":{ "shape":"Status", @@ -899,7 +917,7 @@ }, "idleSessionTTLInSeconds":{ "shape":"SessionTTL", - "documentation":"

The maximum time in seconds that Amazon Lex retains the data gathered in a conversation. For more information, see .

" + "documentation":"

The maximum time in seconds that Amazon Lex retains the data gathered in a conversation. For more information, see PutBot.

" }, "voiceId":{ "shape":"String", @@ -1050,6 +1068,10 @@ "checksum":{ "shape":"String", "documentation":"

Checksum of the $LATEST version of the slot type.

" + }, + "valueSelectionStrategy":{ + "shape":"SlotValueSelectionStrategy", + "documentation":"

The strategy that Amazon Lex uses to determine the value of the slot. For more information, see PutSlotType.

" } } }, @@ -1135,7 +1157,7 @@ }, "version":{ "shape":"NumericalVersion", - "documentation":"

The version of the bot to delete. You cannot delete the $LATEST version of the bot. To delete the $LATEST version, use the operation.

", + "documentation":"

The version of the bot to delete. You cannot delete the $LATEST version of the bot. To delete the $LATEST version, use the DeleteBot operation.

", "location":"uri", "locationName":"version" } @@ -1168,7 +1190,7 @@ }, "version":{ "shape":"NumericalVersion", - "documentation":"

The version of the intent to delete. You cannot delete the $LATEST version of the intent. To delete the $LATEST version, use the operation.

", + "documentation":"

The version of the intent to delete. You cannot delete the $LATEST version of the intent. To delete the $LATEST version, use the DeleteIntent operation.

", "location":"uri", "locationName":"version" } @@ -1201,7 +1223,7 @@ }, "version":{ "shape":"NumericalVersion", - "documentation":"

The version of the slot type to delete. You cannot delete the $LATEST version of the slot type. To delete the $LATEST version, use the operation.

", + "documentation":"

The version of the slot type to delete. You cannot delete the $LATEST version of the slot type. To delete the $LATEST version, use the DeleteSlotType operation.

", "location":"uri", "locationName":"version" } @@ -1222,7 +1244,7 @@ }, "userId":{ "shape":"UserId", - "documentation":"

The unique identifier for the user that made the utterances. This is the user ID that was sent in the or operation request that contained the utterance.

", + "documentation":"

The unique identifier for the user that made the utterances. This is the user ID that was sent in the PostContent or PostText operation request that contained the utterance.

", "location":"uri", "locationName":"userId" } @@ -1240,6 +1262,10 @@ "value":{ "shape":"Value", "documentation":"

The value of the slot type.

" + }, + "synonyms":{ + "shape":"SynonymList", + "documentation":"

Additional values related to the slot type value.

" } }, "documentation":"

Each slot type can have a set of values. Each enumeration value represents a value the slot type can take.

For example, a pizza ordering bot could have a slot type that specifies the type of crust that the pizza should have. The slot type could include the values

" @@ -1250,6 +1276,18 @@ "max":10000, "min":1 }, + "ExportStatus":{ + "type":"string", + "enum":[ + "IN_PROGRESS", + "READY", + "FAILED" + ] + }, + "ExportType":{ + "type":"string", + "enum":["ALEXA_SKILLS_KIT"] + }, "FollowUpPrompt":{ "type":"structure", "required":[ @@ -1259,14 +1297,14 @@ "members":{ "prompt":{ "shape":"Prompt", - "documentation":"

Obtains information from the user.

" + "documentation":"

Prompts for information from the user.

" }, "rejectionStatement":{ "shape":"Statement", - "documentation":"

If the user answers \"no\" to the question defined in confirmationPrompt, Amazon Lex responds with this statement to acknowledge that the intent was canceled.

" + "documentation":"

If the user answers \"no\" to the question defined in the prompt field, Amazon Lex responds with this statement to acknowledge that the intent was canceled.

" } }, - "documentation":"

After an intent is fulfilled, you might prompt the user for additional activity. For example, after the OrderPizza intent is fulfilled (the pizza order is placed with a pizzeria), you might prompt the user to find out whether the user wants to order drinks (another intent you defined in your bot).

" + "documentation":"

A prompt for additional activity after an intent is fulfilled. For example, after the OrderPizza intent is fulfilled, you might prompt the user to find out whether the user wants to order drinks.

" }, "FulfillmentActivity":{ "type":"structure", @@ -1534,15 +1572,15 @@ }, "intents":{ "shape":"IntentList", - "documentation":"

An array of intent objects. For more information, see .

" + "documentation":"

An array of intent objects. For more information, see PutBot.

" }, "clarificationPrompt":{ "shape":"Prompt", - "documentation":"

The message Amazon Lex uses when it doesn't understand the user's request. For more information, see .

" + "documentation":"

The message Amazon Lex uses when it doesn't understand the user's request. For more information, see PutBot.

" }, "abortStatement":{ "shape":"Statement", - "documentation":"

The message that Amazon Lex returns when the user elects to end the conversation without completing it. For more information, see .

" + "documentation":"

The message that Amazon Lex returns when the user elects to end the conversation without completing it. For more information, see PutBot.

" }, "status":{ "shape":"Status", @@ -1562,11 +1600,11 @@ }, "idleSessionTTLInSeconds":{ "shape":"SessionTTL", - "documentation":"

The maximum time in seconds that Amazon Lex retains the data gathered in a conversation. For more information, see .

" + "documentation":"

The maximum time in seconds that Amazon Lex retains the data gathered in a conversation. For more information, see PutBot.

" }, "voiceId":{ "shape":"String", - "documentation":"

The Amazon Polly voice ID that Amazon Lex uses for voice interaction with the user. For more information, see .

" + "documentation":"

The Amazon Polly voice ID that Amazon Lex uses for voice interaction with the user. For more information, see PutBot.

" }, "checksum":{ "shape":"String", @@ -1772,6 +1810,74 @@ } } }, + "GetExportRequest":{ + "type":"structure", + "required":[ + "name", + "version", + "resourceType", + "exportType" + ], + "members":{ + "name":{ + "shape":"Name", + "documentation":"

The name of the bot to export.

", + "location":"querystring", + "locationName":"name" + }, + "version":{ + "shape":"NumericalVersion", + "documentation":"

The version of the bot to export.

", + "location":"querystring", + "locationName":"version" + }, + "resourceType":{ + "shape":"ResourceType", + "documentation":"

The type of resource to export.

", + "location":"querystring", + "locationName":"resourceType" + }, + "exportType":{ + "shape":"ExportType", + "documentation":"

The format of the exported data.

", + "location":"querystring", + "locationName":"exportType" + } + } + }, + "GetExportResponse":{ + "type":"structure", + "members":{ + "name":{ + "shape":"Name", + "documentation":"

The name of the bot being exported.

" + }, + "version":{ + "shape":"NumericalVersion", + "documentation":"

The version of the bot being exported.

" + }, + "resourceType":{ + "shape":"ResourceType", + "documentation":"

The type of the exported resource.

" + }, + "exportType":{ + "shape":"ExportType", + "documentation":"

The format of the exported data.

" + }, + "exportStatus":{ + "shape":"ExportStatus", + "documentation":"

The status of the export.

" + }, + "failureReason":{ + "shape":"String", + "documentation":"

If status is FAILED, Amazon Lex provides the reason that it failed to export the resource.

" + }, + "url":{ + "shape":"String", + "documentation":"

An S3 pre-signed URL that provides the location of the exported resource. The exported resource is a ZIP archive that contains the exported resource in JSON format. The structure of the archive may change. Your code should not rely on the archive structure.
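A hedged sketch of the new GetExport operation using the AWS SDK for Java 2.x client generated from this model; the bot name and version are placeholders, and a real caller would poll until the export status is READY before downloading from the pre-signed URL.

    import software.amazon.awssdk.services.lexmodelbuilding.LexModelBuildingClient;
    import software.amazon.awssdk.services.lexmodelbuilding.model.ExportType;
    import software.amazon.awssdk.services.lexmodelbuilding.model.GetExportRequest;
    import software.amazon.awssdk.services.lexmodelbuilding.model.GetExportResponse;
    import software.amazon.awssdk.services.lexmodelbuilding.model.ResourceType;

    public class ExportBot {
        public static void main(String[] args) {
            try (LexModelBuildingClient lex = LexModelBuildingClient.create()) {
                GetExportResponse export = lex.getExport(GetExportRequest.builder()
                        .name("OrderFlowers")   // placeholder bot name
                        .version("1")
                        .resourceType(ResourceType.BOT)
                        .exportType(ExportType.ALEXA_SKILLS_KIT)
                        .build());
                // Poll until exportStatus() is READY, then download the ZIP archive from url().
                System.out.println(export.exportStatus() + " " + export.url());
            }
        }
    }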

" + } + } + }, "GetIntentRequest":{ "type":"structure", "required":[ @@ -1814,7 +1920,7 @@ }, "confirmationPrompt":{ "shape":"Prompt", - "documentation":"

If defined in the bot, Amazon Lex uses prompt to confirm the intent before fulfilling the user's request. For more information, see .

" + "documentation":"

If defined in the bot, Amazon Lex uses this prompt to confirm the intent before fulfilling the user's request. For more information, see PutIntent.

" }, "rejectionStatement":{ "shape":"Statement", @@ -1822,7 +1928,7 @@ }, "followUpPrompt":{ "shape":"FollowUpPrompt", - "documentation":"

If defined in the bot, Amazon Lex uses this prompt to solicit additional user activity after the intent is fulfilled. For more information, see .

" + "documentation":"

If defined in the bot, Amazon Lex uses this prompt to solicit additional user activity after the intent is fulfilled. For more information, see PutIntent.

" }, "conclusionStatement":{ "shape":"Statement", @@ -1830,11 +1936,11 @@ }, "dialogCodeHook":{ "shape":"CodeHook", - "documentation":"

If defined in the bot, Amazon Amazon Lex invokes this Lambda function for each user input. For more information, see .

" + "documentation":"

If defined in the bot, Amazon Lex invokes this Lambda function for each user input. For more information, see PutIntent.

" }, "fulfillmentActivity":{ "shape":"FulfillmentActivity", - "documentation":"

Describes how the intent is fulfilled. For more information, see .

" + "documentation":"

Describes how the intent is fulfilled. For more information, see PutIntent.

" }, "parentIntentSignature":{ "shape":"BuiltinIntentSignature", @@ -1923,7 +2029,7 @@ "members":{ "intents":{ "shape":"IntentMetadataList", - "documentation":"

An array of Intent objects. For more information, see .

" + "documentation":"

An array of Intent objects. For more information, see PutBot.

" }, "nextToken":{ "shape":"NextToken", @@ -1982,6 +2088,10 @@ "checksum":{ "shape":"String", "documentation":"

Checksum of the $LATEST version of the slot type.

" + }, + "valueSelectionStrategy":{ + "shape":"SlotValueSelectionStrategy", + "documentation":"

The strategy that Amazon Lex uses to determine the value of the slot. For more information, see PutSlotType.

" } } }, @@ -2095,7 +2205,7 @@ }, "utterances":{ "shape":"ListsOfUtterances", - "documentation":"

An array of objects, each containing a list of objects describing the utterances that were processed by your bot. The response contains a maximum of 100 UtteranceData objects for each version.

" + "documentation":"

An array of UtteranceList objects, each containing a list of UtteranceData objects describing the utterances that were processed by your bot. The response contains a maximum of 100 UtteranceData objects for each version.

" } } }, @@ -2119,9 +2229,7 @@ }, "IntentList":{ "type":"list", - "member":{"shape":"Intent"}, - "max":100, - "min":1 + "member":{"shape":"Intent"} }, "IntentMetadata":{ "type":"structure", @@ -2250,7 +2358,7 @@ "type":"string", "max":64, "min":1, - "pattern":"[a-zA-Z]+" + "pattern":"[a-zA-Z_]+" }, "NextToken":{"type":"string"}, "NotFoundException":{ @@ -2407,7 +2515,7 @@ }, "clarificationPrompt":{ "shape":"Prompt", - "documentation":"

When Amazon Lex doesn't understand the user's intent, it uses one of these messages to get clarification. For example, \"Sorry, I didn't understand. Please repeat.\" Amazon Lex repeats the clarification prompt the number of times specified in maxAttempts. If Amazon Lex still can't understand, it sends the message specified in abortStatement.

" + "documentation":"

When Amazon Lex doesn't understand the user's intent, it uses this message to get clarification. To specify how many times Amazon Lex should repeat the clarification prompt, use the maxAttempts field. If Amazon Lex still doesn't understand, it sends the message in the abortStatement field.

When you create a clarification prompt, make sure that it suggests the correct response from the user. For example, for a bot that orders pizza and drinks, you might create this clarification prompt: \"What would you like to do? You can say 'Order a pizza' or 'Order a drink.'\"

" }, "abortStatement":{ "shape":"Statement", @@ -2419,7 +2527,7 @@ }, "voiceId":{ "shape":"String", - "documentation":"

The Amazon Polly voice ID that you want Amazon Lex to use for voice interactions with the user. The locale configured for the voice must match the locale of the bot. For more information, see Voice in the Amazon Polly Developer Guide.

" + "documentation":"

The Amazon Polly voice ID that you want Amazon Lex to use for voice interactions with the user. The locale configured for the voice must match the locale of the bot. For more information, see Available Voices in the Amazon Polly Developer Guide.

" }, "checksum":{ "shape":"String", @@ -2452,15 +2560,15 @@ }, "intents":{ "shape":"IntentList", - "documentation":"

An array of Intent objects. For more information, see .

" + "documentation":"

An array of Intent objects. For more information, see PutBot.

" }, "clarificationPrompt":{ "shape":"Prompt", - "documentation":"

The prompts that Amazon Lex uses when it doesn't understand the user's intent. For more information, see .

" + "documentation":"

The prompts that Amazon Lex uses when it doesn't understand the user's intent. For more information, see PutBot.

" }, "abortStatement":{ "shape":"Statement", - "documentation":"

The message that Amazon Lex uses to abort a conversation. For more information, see .

" + "documentation":"

The message that Amazon Lex uses to abort a conversation. For more information, see PutBot.

" }, "status":{ "shape":"Status", @@ -2480,11 +2588,11 @@ }, "idleSessionTTLInSeconds":{ "shape":"SessionTTL", - "documentation":"

The maximum length of time that Amazon Lex retains the data gathered in a conversation. For more information, see .

" + "documentation":"

The maximum length of time that Amazon Lex retains the data gathered in a conversation. For more information, see PutBot.

" }, "voiceId":{ "shape":"String", - "documentation":"

The Amazon Polly voice ID that Amazon Lex uses for voice interaction with the user. For more information, see .

" + "documentation":"

The Amazon Polly voice ID that Amazon Lex uses for voice interaction with the user. For more information, see PutBot.

" }, "checksum":{ "shape":"String", @@ -2520,7 +2628,7 @@ }, "slots":{ "shape":"SlotList", - "documentation":"

An array of intent slots. At runtime, Amazon Lex elicits required slot values from the user using prompts defined in the slots. For more information, see <xref linkend=\"how-it-works\"/>.

" + "documentation":"

An array of intent slots. At runtime, Amazon Lex elicits required slot values from the user using prompts defined in the slots. For more information, see how-it-works.

" }, "sampleUtterances":{ "shape":"IntentUtteranceList", @@ -2536,7 +2644,7 @@ }, "followUpPrompt":{ "shape":"FollowUpPrompt", - "documentation":"

A user prompt for additional activity after an intent is fulfilled. For example, after the OrderPizza intent is fulfilled (your Lambda function placed an order with a pizzeria), you might prompt the user to find if they want to order a drink (assuming that you have defined an OrderDrink intent in your bot).

The followUpPrompt and conclusionStatement are mutually exclusive. You can specify only one. For example, your bot may not solicit both the following:

Follow up prompt - \"$session.FirstName, your pizza order has been placed. Would you like to order a drink or a dessert?\"

Conclusion statement - \"$session.FirstName, your pizza order has been placed.\"

" + "documentation":"

Amazon Lex uses this prompt to solicit additional activity after fulfilling an intent. For example, after the OrderPizza intent is fulfilled, you might prompt the user to order a drink.

The action that Amazon Lex takes depends on the user's response, as follows:

The followUpPrompt field and the conclusionStatement field are mutually exclusive. You can specify only one.

" }, "conclusionStatement":{ "shape":"Statement", @@ -2548,7 +2656,7 @@ }, "fulfillmentActivity":{ "shape":"FulfillmentActivity", - "documentation":"

Describes how the intent is fulfilled. For example, after a user provides all of the information for a pizza order, fulfillmentActivity defines how the bot places an order with a local pizza store.

You might configure Amazon Lex to return all of the intent information to the client application, or direct it to invoke a Lambda function that can process the intent (for example, place an order with a pizzeria).

" + "documentation":"

Required. Describes how the intent is fulfilled. For example, after a user provides all of the information for a pizza order, fulfillmentActivity defines how the bot places an order with a local pizza store.

You might configure Amazon Lex to return all of the intent information to the client application, or direct it to invoke a Lambda function that can process the intent (for example, place an order with a pizzeria).

" }, "parentIntentSignature":{ "shape":"BuiltinIntentSignature", @@ -2641,11 +2749,15 @@ }, "enumerationValues":{ "shape":"EnumerationValues", - "documentation":"

A list of EnumerationValue objects that defines the values that the slot type can take.

" + "documentation":"

A list of EnumerationValue objects that defines the values that the slot type can take. Each value can have a list of synonyms, which are additional values that help train the machine learning model about the values that it resolves for a slot.

When Amazon Lex resolves a slot value, it generates a resolution list that contains up to five possible values for the slot. If you are using a Lambda function, this resolution list is passed to the function. If you are not using a Lambda function, you can choose to return the value that the user entered or the first value in the resolution list as the slot value. The valueSelectionStrategy field indicates the option to use.

" }, "checksum":{ "shape":"String", "documentation":"

Identifies a specific revision of the $LATEST version.

When you create a new slot type, leave the checksum field blank. If you specify a checksum you get a BadRequestException exception.

When you want to update a slot type, set the checksum field to the checksum of the most recent revision of the $LATEST version. If you don't specify the checksum field, or if the checksum does not match the $LATEST version, you get a PreconditionFailedException exception.

" + }, + "valueSelectionStrategy":{ + "shape":"SlotValueSelectionStrategy", + "documentation":"

Determines the slot resolution strategy that Amazon Lex uses to return slot type values. The field can be set to one of the following values:

If you don't specify the valueSelectionStrategy, the default is ORIGINAL_VALUE.
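A hedged sketch of a PutSlotType call that uses the new synonyms and valueSelectionStrategy fields, assuming the AWS SDK for Java 2.x client generated from this model; the slot type name, values, and synonyms are placeholders.

    import software.amazon.awssdk.services.lexmodelbuilding.LexModelBuildingClient;
    import software.amazon.awssdk.services.lexmodelbuilding.model.EnumerationValue;
    import software.amazon.awssdk.services.lexmodelbuilding.model.PutSlotTypeRequest;
    import software.amazon.awssdk.services.lexmodelbuilding.model.SlotValueSelectionStrategy;

    public class CreateCrustSlotType {
        public static void main(String[] args) {
            try (LexModelBuildingClient lex = LexModelBuildingClient.create()) {
                // TOP_RESOLUTION returns the first value in the resolution list,
                // so synonyms resolve to their canonical enumeration value.
                lex.putSlotType(PutSlotTypeRequest.builder()
                        .name("PizzaCrustType")   // placeholder slot type name
                        .valueSelectionStrategy(SlotValueSelectionStrategy.TOP_RESOLUTION)
                        .enumerationValues(
                                EnumerationValue.builder().value("thick")
                                        .synonyms("stuffed", "deep dish").build(),
                                EnumerationValue.builder().value("thin")
                                        .synonyms("skinny").build())
                        .build());
            }
        }
    }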

" } } }, @@ -2679,6 +2791,10 @@ "checksum":{ "shape":"String", "documentation":"

Checksum of the $LATEST version of the slot type.

" + }, + "valueSelectionStrategy":{ + "shape":"SlotValueSelectionStrategy", + "documentation":"

The slot resolution strategy that Amazon Lex uses to determine the value of the slot. For more information, see PutSlotType.

" } } }, @@ -2715,6 +2831,10 @@ }, "documentation":"

Describes the resource that refers to the resource that you are attempting to delete. This object is returned as part of the ResourceInUseException exception.

" }, + "ResourceType":{ + "type":"string", + "enum":["BOT"] + }, "ResponseCard":{ "type":"string", "max":50000, @@ -2832,6 +2952,13 @@ "max":10, "min":0 }, + "SlotValueSelectionStrategy":{ + "type":"string", + "enum":[ + "ORIGINAL_VALUE", + "TOP_RESOLUTION" + ] + }, "Statement":{ "type":"structure", "required":["messages"], @@ -2842,7 +2969,7 @@ }, "responseCard":{ "shape":"ResponseCard", - "documentation":"

At runtime, if the client is using the API, Amazon Lex includes the response card in the response. It substitutes all of the session attributes and slot values for placeholders in the response card.

" + "documentation":"

At runtime, if the client is using the PostText API, Amazon Lex includes the response card in the response. It substitutes all of the session attributes and slot values for placeholders in the response card.

" } }, "documentation":"

A collection of messages that convey information to the user. At runtime, Amazon Lex selects the message to convey.

" @@ -2864,6 +2991,10 @@ ] }, "String":{"type":"string"}, + "SynonymList":{ + "type":"list", + "member":{"shape":"Value"} + }, "Timestamp":{"type":"timestamp"}, "UserId":{ "type":"string", @@ -2910,7 +3041,7 @@ }, "utterances":{ "shape":"ListOfUtterance", - "documentation":"

One or more objects that contain information about the utterances that have been made to a bot. The maximum number of object is 100.

" + "documentation":"

One or more UtteranceData objects that contain information about the utterances that have been made to a bot. The maximum number of objects is 100.

" } }, "documentation":"

Provides a list of utterances that have been made to a specific version of your bot. The list contains a maximum of 100 utterances.

" diff --git a/services/lightsail/src/main/resources/codegen-resources/service-2.json b/services/lightsail/src/main/resources/codegen-resources/service-2.json index dc0512f20b6d..43ed950d9691 100644 --- a/services/lightsail/src/main/resources/codegen-resources/service-2.json +++ b/services/lightsail/src/main/resources/codegen-resources/service-2.json @@ -6,6 +6,7 @@ "jsonVersion":"1.1", "protocol":"json", "serviceFullName":"Amazon Lightsail", + "serviceId":"Lightsail", "signatureVersion":"v4", "targetPrefix":"Lightsail_20161128", "uid":"lightsail-2016-11-28" @@ -30,6 +31,25 @@ ], "documentation":"

Allocates a static IP address.

" }, + "AttachDisk":{ + "name":"AttachDisk", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"AttachDiskRequest"}, + "output":{"shape":"AttachDiskResult"}, + "errors":[ + {"shape":"ServiceException"}, + {"shape":"InvalidInputException"}, + {"shape":"NotFoundException"}, + {"shape":"OperationFailureException"}, + {"shape":"AccessDeniedException"}, + {"shape":"AccountSetupInProgressException"}, + {"shape":"UnauthenticatedException"} + ], + "documentation":"

Attaches a block storage disk to a running or stopped Lightsail instance and exposes it to the instance with the specified disk name.

" + }, "AttachStaticIp":{ "name":"AttachStaticIp", "http":{ @@ -68,6 +88,63 @@ ], "documentation":"

Closes the public ports on a specific Amazon Lightsail instance.

" }, + "CreateDisk":{ + "name":"CreateDisk", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"CreateDiskRequest"}, + "output":{"shape":"CreateDiskResult"}, + "errors":[ + {"shape":"ServiceException"}, + {"shape":"InvalidInputException"}, + {"shape":"NotFoundException"}, + {"shape":"OperationFailureException"}, + {"shape":"AccessDeniedException"}, + {"shape":"AccountSetupInProgressException"}, + {"shape":"UnauthenticatedException"} + ], + "documentation":"

Creates a block storage disk that can be attached to a Lightsail instance in the same Availability Zone (e.g., us-east-2a). The disk is created in the regional endpoint that you send the HTTP request to. For more information, see Regions and Availability Zones in Lightsail.
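A hedged sketch that creates a disk and attaches it to an instance with the AWS SDK for Java 2.x Lightsail client generated from this model; the disk name, instance name, zone, and device path are placeholders, and a real caller would wait for the create operation to complete before attaching.

    import software.amazon.awssdk.services.lightsail.LightsailClient;
    import software.amazon.awssdk.services.lightsail.model.AttachDiskRequest;
    import software.amazon.awssdk.services.lightsail.model.CreateDiskRequest;

    public class AddDiskToInstance {
        public static void main(String[] args) {
            try (LightsailClient lightsail = LightsailClient.create()) {
                lightsail.createDisk(CreateDiskRequest.builder()
                        .diskName("my-data-disk")        // placeholder names
                        .availabilityZone("us-east-2a")  // must match the instance's zone
                        .sizeInGb(32)
                        .build());
                // Wait for the create operation to complete before attaching.
                lightsail.attachDisk(AttachDiskRequest.builder()
                        .diskName("my-data-disk")
                        .instanceName("my-instance")
                        .diskPath("/dev/xvdf")           // device path exposed to the instance
                        .build());
            }
        }
    }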

" + }, + "CreateDiskFromSnapshot":{ + "name":"CreateDiskFromSnapshot", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"CreateDiskFromSnapshotRequest"}, + "output":{"shape":"CreateDiskFromSnapshotResult"}, + "errors":[ + {"shape":"ServiceException"}, + {"shape":"InvalidInputException"}, + {"shape":"NotFoundException"}, + {"shape":"OperationFailureException"}, + {"shape":"AccessDeniedException"}, + {"shape":"AccountSetupInProgressException"}, + {"shape":"UnauthenticatedException"} + ], + "documentation":"

Creates a block storage disk from a disk snapshot that can be attached to a Lightsail instance in the same Availability Zone (e.g., us-east-2a). The disk is created in the regional endpoint that you send the HTTP request to. For more information, see Regions and Availability Zones in Lightsail.

" + }, + "CreateDiskSnapshot":{ + "name":"CreateDiskSnapshot", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"CreateDiskSnapshotRequest"}, + "output":{"shape":"CreateDiskSnapshotResult"}, + "errors":[ + {"shape":"ServiceException"}, + {"shape":"InvalidInputException"}, + {"shape":"NotFoundException"}, + {"shape":"OperationFailureException"}, + {"shape":"AccessDeniedException"}, + {"shape":"AccountSetupInProgressException"}, + {"shape":"UnauthenticatedException"} + ], + "documentation":"

Creates a snapshot of a block storage disk. You can use snapshots for backups, to make copies of disks, and to save data before shutting down a Lightsail instance.

You can take a snapshot of an attached disk that is in use; however, snapshots only capture data that has been written to your disk at the time the snapshot command is issued. This may exclude any data that has been cached by any applications or the operating system. If you can pause any file systems on the disk long enough to take a snapshot, your snapshot should be complete. Nevertheless, if you cannot pause all file writes to the disk, you should unmount the disk from within the Lightsail instance, issue the create disk snapshot command, and then remount the disk to ensure a consistent and complete snapshot. You may remount and use your disk while the snapshot status is pending.
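A minimal sketch of issuing the snapshot request with the generated AWS SDK for Java 2.x client (names are placeholders; pausing or unmounting file systems beforehand, as described above, is up to the caller):

    import software.amazon.awssdk.services.lightsail.LightsailClient;
    import software.amazon.awssdk.services.lightsail.model.CreateDiskSnapshotRequest;

    public class CreateDiskSnapshotSketch {
        public static void main(String[] args) {
            try (LightsailClient lightsail = LightsailClient.create()) {
                // Request the snapshot; the disk can be remounted and used while the snapshot is pending.
                lightsail.createDiskSnapshot(CreateDiskSnapshotRequest.builder()
                        .diskName("my-source-disk")           // placeholder source disk
                        .diskSnapshotName("my-disk-snapshot") // placeholder snapshot name
                        .build())
                        .operations()
                        .forEach(System.out::println);
            }
        }
    }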

" + }, "CreateDomain":{ "name":"CreateDomain", "http":{ @@ -182,6 +259,44 @@ ], "documentation":"

Creates an SSH key pair.

" }, + "DeleteDisk":{ + "name":"DeleteDisk", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"DeleteDiskRequest"}, + "output":{"shape":"DeleteDiskResult"}, + "errors":[ + {"shape":"ServiceException"}, + {"shape":"InvalidInputException"}, + {"shape":"NotFoundException"}, + {"shape":"OperationFailureException"}, + {"shape":"AccessDeniedException"}, + {"shape":"AccountSetupInProgressException"}, + {"shape":"UnauthenticatedException"} + ], + "documentation":"

Deletes the specified block storage disk. The disk must be in the available state (not attached to a Lightsail instance).

The disk may remain in the deleting state for several minutes.

" + }, + "DeleteDiskSnapshot":{ + "name":"DeleteDiskSnapshot", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"DeleteDiskSnapshotRequest"}, + "output":{"shape":"DeleteDiskSnapshotResult"}, + "errors":[ + {"shape":"ServiceException"}, + {"shape":"InvalidInputException"}, + {"shape":"NotFoundException"}, + {"shape":"OperationFailureException"}, + {"shape":"AccessDeniedException"}, + {"shape":"AccountSetupInProgressException"}, + {"shape":"UnauthenticatedException"} + ], + "documentation":"

Deletes the specified disk snapshot.

When you make periodic snapshots of a disk, the snapshots are incremental, and only the blocks on the device that have changed since your last snapshot are saved in the new snapshot. When you delete a snapshot, only the data not needed for any other snapshot is removed. So regardless of which prior snapshots have been deleted, all active snapshots will have access to all the information needed to restore the disk.

" + }, "DeleteDomain":{ "name":"DeleteDomain", "http":{ @@ -277,6 +392,25 @@ ], "documentation":"

Deletes a specific SSH key pair.

" }, + "DetachDisk":{ + "name":"DetachDisk", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"DetachDiskRequest"}, + "output":{"shape":"DetachDiskResult"}, + "errors":[ + {"shape":"ServiceException"}, + {"shape":"InvalidInputException"}, + {"shape":"NotFoundException"}, + {"shape":"OperationFailureException"}, + {"shape":"AccessDeniedException"}, + {"shape":"AccountSetupInProgressException"}, + {"shape":"UnauthenticatedException"} + ], + "documentation":"

Detaches a stopped block storage disk from a Lightsail instance. Make sure to unmount any file systems on the device within your operating system before stopping the instance and detaching the disk.

" + }, "DetachStaticIp":{ "name":"DetachStaticIp", "http":{ @@ -372,6 +506,82 @@ ], "documentation":"

Returns the list of bundles that are available for purchase. A bundle describes the specs for your virtual private server (or instance).

" }, + "GetDisk":{ + "name":"GetDisk", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"GetDiskRequest"}, + "output":{"shape":"GetDiskResult"}, + "errors":[ + {"shape":"ServiceException"}, + {"shape":"InvalidInputException"}, + {"shape":"NotFoundException"}, + {"shape":"OperationFailureException"}, + {"shape":"AccessDeniedException"}, + {"shape":"AccountSetupInProgressException"}, + {"shape":"UnauthenticatedException"} + ], + "documentation":"

Returns information about a specific block storage disk.

" + }, + "GetDiskSnapshot":{ + "name":"GetDiskSnapshot", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"GetDiskSnapshotRequest"}, + "output":{"shape":"GetDiskSnapshotResult"}, + "errors":[ + {"shape":"ServiceException"}, + {"shape":"InvalidInputException"}, + {"shape":"NotFoundException"}, + {"shape":"OperationFailureException"}, + {"shape":"AccessDeniedException"}, + {"shape":"AccountSetupInProgressException"}, + {"shape":"UnauthenticatedException"} + ], + "documentation":"

Returns information about a specific block storage disk snapshot.

" + }, + "GetDiskSnapshots":{ + "name":"GetDiskSnapshots", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"GetDiskSnapshotsRequest"}, + "output":{"shape":"GetDiskSnapshotsResult"}, + "errors":[ + {"shape":"ServiceException"}, + {"shape":"InvalidInputException"}, + {"shape":"NotFoundException"}, + {"shape":"OperationFailureException"}, + {"shape":"AccessDeniedException"}, + {"shape":"AccountSetupInProgressException"}, + {"shape":"UnauthenticatedException"} + ], + "documentation":"

Returns information about all block storage disk snapshots in your AWS account and region.

If you are describing a long list of disk snapshots, you can paginate the output to make the list more manageable. You can use the pageToken and nextPageToken values to retrieve the next items in the list.

" + }, + "GetDisks":{ + "name":"GetDisks", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"GetDisksRequest"}, + "output":{"shape":"GetDisksResult"}, + "errors":[ + {"shape":"ServiceException"}, + {"shape":"InvalidInputException"}, + {"shape":"NotFoundException"}, + {"shape":"OperationFailureException"}, + {"shape":"AccessDeniedException"}, + {"shape":"AccountSetupInProgressException"}, + {"shape":"UnauthenticatedException"} + ], + "documentation":"

Returns information about all block storage disks in your AWS account and region.

If you are describing a long list of disks, you can paginate the output to make the list more manageable. You can use the pageToken and nextPageToken values to retrieve the next items in the list.
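For illustration, a minimal pagination loop over this operation using the generated AWS SDK for Java 2.x client (a sketch only; it simply prints each disk's name and size):

    import software.amazon.awssdk.services.lightsail.LightsailClient;
    import software.amazon.awssdk.services.lightsail.model.GetDisksRequest;
    import software.amazon.awssdk.services.lightsail.model.GetDisksResponse;

    public class ListDisksSketch {
        public static void main(String[] args) {
            try (LightsailClient lightsail = LightsailClient.create()) {
                String pageToken = null;
                do {
                    // Pass the token returned by the previous page (null on the first call).
                    GetDisksResponse page = lightsail.getDisks(
                            GetDisksRequest.builder().pageToken(pageToken).build());
                    page.disks().forEach(disk ->
                            System.out.println(disk.name() + " " + disk.sizeInGb() + " GB"));
                    pageToken = page.nextPageToken();
                } while (pageToken != null);
            }
        }
    }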

" + }, "GetDomain":{ "name":"GetDomain", "http":{ @@ -973,6 +1183,37 @@ } } }, + "AttachDiskRequest":{ + "type":"structure", + "required":[ + "diskName", + "instanceName", + "diskPath" + ], + "members":{ + "diskName":{ + "shape":"ResourceName", + "documentation":"

The unique Lightsail disk name (e.g., my-disk).

" + }, + "instanceName":{ + "shape":"ResourceName", + "documentation":"

The name of the Lightsail instance where you want to utilize the storage disk.

" + }, + "diskPath":{ + "shape":"NonEmptyString", + "documentation":"

The disk path to expose to the instance (e.g., /dev/xvdf).

" + } + } + }, + "AttachDiskResult":{ + "type":"structure", + "members":{ + "operations":{ + "shape":"OperationList", + "documentation":"

An object describing the API operations.

" + } + } + }, "AttachStaticIpRequest":{ "type":"structure", "required":[ @@ -999,12 +1240,17 @@ } } }, + "AttachedDiskMap":{ + "type":"map", + "key":{"shape":"ResourceName"}, + "value":{"shape":"DiskMapList"} + }, "AvailabilityZone":{ "type":"structure", "members":{ "zoneName":{ "shape":"NonEmptyString", - "documentation":"

The name of the Availability Zone. The format is us-east-1a (case-sensitive).

" + "documentation":"

The name of the Availability Zone. The format is us-east-2a (case-sensitive).

" }, "state":{ "shape":"NonEmptyString", @@ -1047,7 +1293,7 @@ }, "minPower":{ "shape":"integer", - "documentation":"

The minimum machine size required to run this blueprint. 0 indicates that the blueprint runs on all instances.

" + "documentation":"

The minimum bundle power required to run this blueprint. For example, you need a bundle with a power value of 500 or more to create an instance that uses a blueprint with a minimum power value of 500. 0 indicates that the blueprint runs on all instance sizes.

" }, "version":{ "shape":"string", @@ -1064,6 +1310,10 @@ "licenseUrl":{ "shape":"string", "documentation":"

The end-user license agreement URL for the image or blueprint.

" + }, + "platform":{ + "shape":"InstancePlatform", + "documentation":"

The operating system platform (either Linux/Unix-based or Windows Server-based) of the blueprint.

" } }, "documentation":"

Describes a blueprint (a virtual private server image).

" @@ -1112,7 +1362,7 @@ }, "power":{ "shape":"integer", - "documentation":"

The power of the bundle (e.g., 500).

" + "documentation":"

A numeric value that represents the power of the bundle (e.g., 500). You can use the bundle's power value in conjunction with a blueprint's minimum power value to determine whether the blueprint will run on the bundle. For example, you need a bundle with a power value of 500 or more to create an instance that uses a blueprint with a minimum power value of 500.

" }, "ramSizeInGb":{ "shape":"float", @@ -1121,6 +1371,10 @@ "transferPerMonthInGb":{ "shape":"integer", "documentation":"

The data transfer rate per month in GB (e.g., 2000).

" + }, + "supportedPlatforms":{ + "shape":"InstancePlatformList", + "documentation":"

The operating system platform (Linux/Unix-based or Windows Server-based) that the bundle supports. You can only launch a WINDOWS bundle on a blueprint that supports the WINDOWS platform. LINUX_UNIX blueprints require a LINUX_UNIX bundle.

" } }, "documentation":"

Describes a bundle, which is a set of specs describing your virtual private server (or instance).

" @@ -1155,6 +1409,99 @@ } } }, + "CreateDiskFromSnapshotRequest":{ + "type":"structure", + "required":[ + "diskName", + "diskSnapshotName", + "availabilityZone", + "sizeInGb" + ], + "members":{ + "diskName":{ + "shape":"ResourceName", + "documentation":"

The unique Lightsail disk name (e.g., my-disk).

" + }, + "diskSnapshotName":{ + "shape":"ResourceName", + "documentation":"

The name of the disk snapshot (e.g., my-snapshot) from which to create the new storage disk.

" + }, + "availabilityZone":{ + "shape":"NonEmptyString", + "documentation":"

The Availability Zone where you want to create the disk (e.g., us-east-2a). Choose the same Availability Zone as the Lightsail instance where you want to create the disk.

Use the GetRegions operation to list the Availability Zones where Lightsail is currently available.

" + }, + "sizeInGb":{ + "shape":"integer", + "documentation":"

The size of the disk in GB (e.g., 32).

" + } + } + }, + "CreateDiskFromSnapshotResult":{ + "type":"structure", + "members":{ + "operations":{ + "shape":"OperationList", + "documentation":"

An object describing the API operations.

" + } + } + }, + "CreateDiskRequest":{ + "type":"structure", + "required":[ + "diskName", + "availabilityZone", + "sizeInGb" + ], + "members":{ + "diskName":{ + "shape":"ResourceName", + "documentation":"

The unique Lightsail disk name (e.g., my-disk).

" + }, + "availabilityZone":{ + "shape":"NonEmptyString", + "documentation":"

The Availability Zone where you want to create the disk (e.g., us-east-2a). Choose the same Availability Zone as the Lightsail instance where you want to create the disk.

Use the GetRegions operation to list the Availability Zones where Lightsail is currently available.

" + }, + "sizeInGb":{ + "shape":"integer", + "documentation":"

The size of the disk in GB (e.g., 32).

" + } + } + }, + "CreateDiskResult":{ + "type":"structure", + "members":{ + "operations":{ + "shape":"OperationList", + "documentation":"

An object describing the API operations.

" + } + } + }, + "CreateDiskSnapshotRequest":{ + "type":"structure", + "required":[ + "diskName", + "diskSnapshotName" + ], + "members":{ + "diskName":{ + "shape":"ResourceName", + "documentation":"

The unique name of the source disk (e.g., my-source-disk).

" + }, + "diskSnapshotName":{ + "shape":"ResourceName", + "documentation":"

The name of the destination disk snapshot (e.g., my-disk-snapshot) based on the source disk.

" + } + } + }, + "CreateDiskSnapshotResult":{ + "type":"structure", + "members":{ + "operations":{ + "shape":"OperationList", + "documentation":"

An object describing the API operations.

" + } + } + }, "CreateDomainEntryRequest":{ "type":"structure", "required":[ @@ -1239,9 +1586,13 @@ "shape":"StringList", "documentation":"

The names for your new instances.

" }, + "attachedDiskMapping":{ + "shape":"AttachedDiskMap", + "documentation":"

An object containing information about one or more disk mappings.

" + }, "availabilityZone":{ "shape":"string", - "documentation":"

The Availability Zone where you want to create your instances. Use the following formatting: us-east-1a (case sensitive). You can get a list of availability zones by using the get regions operation. Be sure to add the include availability zones parameter to your request.

" + "documentation":"

The Availability Zone where you want to create your instances. Use the following formatting: us-east-2a (case sensitive). You can get a list of availability zones by using the get regions operation. Be sure to add the include availability zones parameter to your request.

" }, "instanceSnapshotName":{ "shape":"ResourceName", @@ -1285,7 +1636,7 @@ }, "availabilityZone":{ "shape":"string", - "documentation":"

The Availability Zone in which to create your instance. Use the following format: us-east-1a (case sensitive). You can get a list of availability zones by using the get regions operation. Be sure to add the include availability zones parameter to your request.

" + "documentation":"

The Availability Zone in which to create your instance. Use the following format: us-east-2a (case sensitive). You can get a list of availability zones by using the get regions operation. Be sure to add the include availability zones parameter to your request.

" }, "customImageName":{ "shape":"ResourceName", @@ -1302,7 +1653,7 @@ }, "userData":{ "shape":"string", - "documentation":"

A launch script you can create that configures a server with additional user data. For example, you might want to run apt-get –y update.

Depending on the machine image you choose, the command to get software on your instance varies. Amazon Linux and CentOS use yum, Debian and Ubuntu use apt-get, and FreeBSD uses pkg. For a complete list, see the Dev Guide.

" + "documentation":"

A launch script you can create that configures a server with additional user data. For example, you might want to run apt-get -y update.

Depending on the machine image you choose, the command to get software on your instance varies. Amazon Linux and CentOS use yum, Debian and Ubuntu use apt-get, and FreeBSD uses pkg. For a complete list, see the Dev Guide.

" }, "keyPairName":{ "shape":"ResourceName", @@ -1350,6 +1701,44 @@ } } }, + "DeleteDiskRequest":{ + "type":"structure", + "required":["diskName"], + "members":{ + "diskName":{ + "shape":"ResourceName", + "documentation":"

The unique name of the disk you want to delete (e.g., my-disk).

" + } + } + }, + "DeleteDiskResult":{ + "type":"structure", + "members":{ + "operations":{ + "shape":"OperationList", + "documentation":"

An object describing the API operations.

" + } + } + }, + "DeleteDiskSnapshotRequest":{ + "type":"structure", + "required":["diskSnapshotName"], + "members":{ + "diskSnapshotName":{ + "shape":"ResourceName", + "documentation":"

The name of the disk snapshot you want to delete (e.g., my-disk-snapshot).

" + } + } + }, + "DeleteDiskSnapshotResult":{ + "type":"structure", + "members":{ + "operations":{ + "shape":"OperationList", + "documentation":"

An object describing the API operations.

" + } + } + }, "DeleteDomainEntryRequest":{ "type":"structure", "required":[ @@ -1452,6 +1841,25 @@ } } }, + "DetachDiskRequest":{ + "type":"structure", + "required":["diskName"], + "members":{ + "diskName":{ + "shape":"ResourceName", + "documentation":"

The unique name of the disk you want to detach from your instance (e.g., my-disk).

" + } + } + }, + "DetachDiskResult":{ + "type":"structure", + "members":{ + "operations":{ + "shape":"OperationList", + "documentation":"

An object describing the API operations.

" + } + } + }, "DetachStaticIpRequest":{ "type":"structure", "required":["staticIpName"], @@ -1476,7 +1884,7 @@ "members":{ "name":{ "shape":"ResourceName", - "documentation":"

The name of the disk.

" + "documentation":"

The unique name of the disk.

" }, "arn":{ "shape":"NonEmptyString", @@ -1492,20 +1900,16 @@ }, "location":{ "shape":"ResourceLocation", - "documentation":"

The region and Availability Zone where the disk is located.

" + "documentation":"

The AWS Region and Availability Zone where the disk is located.

" }, "resourceType":{ "shape":"ResourceType", - "documentation":"

The resource type of the disk.

" + "documentation":"

The Lightsail resource type (e.g., Disk).

" }, "sizeInGb":{ "shape":"integer", "documentation":"

The size of the disk in GB.

" }, - "gbInUse":{ - "shape":"integer", - "documentation":"

The number of GB in use by the disk.

" - }, "isSystemDisk":{ "shape":"boolean", "documentation":"

A Boolean value indicating whether this disk is a system disk (has an operating system loaded on it).

" @@ -1518,8 +1922,12 @@ "shape":"string", "documentation":"

The disk path.

" }, + "state":{ + "shape":"DiskState", + "documentation":"

Describes the status of the disk.

" + }, "attachedTo":{ - "shape":"string", + "shape":"ResourceName", "documentation":"

The resources to which the disk is attached.

" }, "isAttached":{ @@ -1528,15 +1936,112 @@ }, "attachmentState":{ "shape":"string", - "documentation":"

The attachment state of the disk.

" + "documentation":"

(Deprecated) The attachment state of the disk.

In releases prior to November 9, 2017, this parameter returned attached for system disks in the API response. It is now deprecated, but still included in the response. Use isAttached instead.

", + "deprecated":true + }, + "gbInUse":{ + "shape":"integer", + "documentation":"

(Deprecated) The number of GB in use by the disk.

In releases prior to November 9, 2017, this parameter was not included in the API response. It is now deprecated.

", + "deprecated":true } }, - "documentation":"

Describes the hard disk (an SSD).

" + "documentation":"

Describes a system disk or a block storage disk.

" }, "DiskList":{ "type":"list", "member":{"shape":"Disk"} }, + "DiskMap":{ + "type":"structure", + "members":{ + "originalDiskPath":{ + "shape":"NonEmptyString", + "documentation":"

The original disk path exposed to the instance (for example, /dev/sdh).

" + }, + "newDiskName":{ + "shape":"ResourceName", + "documentation":"

The new disk name (e.g., my-new-disk).

" + } + }, + "documentation":"

Describes a block storage disk mapping.

" + }, + "DiskMapList":{ + "type":"list", + "member":{"shape":"DiskMap"} + }, + "DiskSnapshot":{ + "type":"structure", + "members":{ + "name":{ + "shape":"ResourceName", + "documentation":"

The name of the disk snapshot (e.g., my-disk-snapshot).

" + }, + "arn":{ + "shape":"NonEmptyString", + "documentation":"

The Amazon Resource Name (ARN) of the disk snapshot.

" + }, + "supportCode":{ + "shape":"string", + "documentation":"

The support code. Include this code in your email to support when you have questions about an instance or another resource in Lightsail. This code enables our support team to look up your Lightsail information more easily.

" + }, + "createdAt":{ + "shape":"IsoDate", + "documentation":"

The date when the disk snapshot was created.

" + }, + "location":{ + "shape":"ResourceLocation", + "documentation":"

The AWS Region and Availability Zone where the disk snapshot was created.

" + }, + "resourceType":{ + "shape":"ResourceType", + "documentation":"

The Lightsail resource type (e.g., DiskSnapshot).

" + }, + "sizeInGb":{ + "shape":"integer", + "documentation":"

The size of the disk in GB.

" + }, + "state":{ + "shape":"DiskSnapshotState", + "documentation":"

The status of the disk snapshot operation.

" + }, + "progress":{ + "shape":"string", + "documentation":"

The progress of the disk snapshot operation.

" + }, + "fromDiskName":{ + "shape":"ResourceName", + "documentation":"

The unique name of the source disk from which you are creating the disk snapshot.

" + }, + "fromDiskArn":{ + "shape":"NonEmptyString", + "documentation":"

The Amazon Resource Name (ARN) of the source disk from which you are creating the disk snapshot.

" + } + }, + "documentation":"

Describes a block storage disk snapshot.

" + }, + "DiskSnapshotList":{ + "type":"list", + "member":{"shape":"DiskSnapshot"} + }, + "DiskSnapshotState":{ + "type":"string", + "enum":[ + "pending", + "completed", + "error", + "unknown" + ] + }, + "DiskState":{ + "type":"string", + "enum":[ + "pending", + "error", + "available", + "in-use", + "unknown" + ] + }, "Domain":{ "type":"structure", "members":{ @@ -1705,6 +2210,88 @@ } } }, + "GetDiskRequest":{ + "type":"structure", + "required":["diskName"], + "members":{ + "diskName":{ + "shape":"ResourceName", + "documentation":"

The name of the disk (e.g., my-disk).

" + } + } + }, + "GetDiskResult":{ + "type":"structure", + "members":{ + "disk":{ + "shape":"Disk", + "documentation":"

An object containing information about the disk.

" + } + } + }, + "GetDiskSnapshotRequest":{ + "type":"structure", + "required":["diskSnapshotName"], + "members":{ + "diskSnapshotName":{ + "shape":"ResourceName", + "documentation":"

The name of the disk snapshot (e.g., my-disk-snapshot).

" + } + } + }, + "GetDiskSnapshotResult":{ + "type":"structure", + "members":{ + "diskSnapshot":{ + "shape":"DiskSnapshot", + "documentation":"

An object containing information about the disk snapshot.

" + } + } + }, + "GetDiskSnapshotsRequest":{ + "type":"structure", + "members":{ + "pageToken":{ + "shape":"string", + "documentation":"

A token used for advancing to the next page of results from your GetDiskSnapshots request.

" + } + } + }, + "GetDiskSnapshotsResult":{ + "type":"structure", + "members":{ + "diskSnapshots":{ + "shape":"DiskSnapshotList", + "documentation":"

An array of objects containing information about all block storage disk snapshots.

" + }, + "nextPageToken":{ + "shape":"string", + "documentation":"

A token used for advancing to the next page of results from your GetDiskSnapshots request.

" + } + } + }, + "GetDisksRequest":{ + "type":"structure", + "members":{ + "pageToken":{ + "shape":"string", + "documentation":"

A token used for advancing to the next page of results from your GetDisks request.

" + } + } + }, + "GetDisksResult":{ + "type":"structure", + "members":{ + "disks":{ + "shape":"DiskList", + "documentation":"

An array of objects containing information about all block storage disks.

" + }, + "nextPageToken":{ + "shape":"string", + "documentation":"

A token used for advancing to the next page of results from your GetDisks request.

" + } + } + }, "GetDomainRequest":{ "type":"structure", "required":["domainName"], @@ -2063,7 +2650,7 @@ "members":{ "includeAvailabilityZones":{ "shape":"boolean", - "documentation":"

A Boolean value indicating whether to also include Availability Zones in your get regions request. Availability Zones are indicated with a letter: e.g., us-east-1a.

" + "documentation":"

A Boolean value indicating whether to also include Availability Zones in your get regions request. Availability Zones are indicated with a letter: e.g., us-east-2a.

" } } }, @@ -2148,11 +2735,11 @@ "members":{ "name":{ "shape":"ResourceName", - "documentation":"

The name the user gave the instance (e.g., Amazon_Linux-1GB-Virginia-1).

" + "documentation":"

The name the user gave the instance (e.g., Amazon_Linux-1GB-Ohio-1).

" }, "arn":{ "shape":"NonEmptyString", - "documentation":"

The Amazon Resource Name (ARN) of the instance (e.g., arn:aws:lightsail:us-east-1:123456789101:Instance/244ad76f-8aad-4741-809f-12345EXAMPLE).

" + "documentation":"

The Amazon Resource Name (ARN) of the instance (e.g., arn:aws:lightsail:us-east-2:123456789101:Instance/244ad76f-8aad-4741-809f-12345EXAMPLE).

" }, "supportCode":{ "shape":"string", @@ -2238,7 +2825,11 @@ }, "password":{ "shape":"string", - "documentation":"

For RDP access, the temporary password of the Amazon EC2 instance.

" + "documentation":"

For RDP access, the password for your Amazon Lightsail instance. Password will be an empty string if the password for your new instance is not ready yet. When you create an instance, it can take up to 15 minutes for the instance to be ready.

If you create an instance using any key pair other than the default (LightsailDefaultKeyPair), password will always be an empty string.

If you change the Administrator password on the instance, Lightsail will continue to return the original password value. When accessing the instance using RDP, you need to manually enter the Administrator password after changing it from the default.

" + }, + "passwordData":{ + "shape":"PasswordData", + "documentation":"

For a Windows Server-based instance, an object with the data you can use to retrieve your password. This is only needed if password is empty and the instance is not new (and therefore the password is not ready yet). When you create an instance, it can take up to 15 minutes for the instance to be ready.

" }, "privateKey":{ "shape":"string", @@ -2313,6 +2904,17 @@ }, "documentation":"

Describes monthly data transfer rates and port information for an instance.

" }, + "InstancePlatform":{ + "type":"string", + "enum":[ + "LINUX_UNIX", + "WINDOWS" + ] + }, + "InstancePlatformList":{ + "type":"list", + "member":{"shape":"InstancePlatform"} + }, "InstancePortInfo":{ "type":"structure", "members":{ @@ -2386,7 +2988,7 @@ }, "arn":{ "shape":"NonEmptyString", - "documentation":"

The Amazon Resource Name (ARN) of the snapshot (e.g., arn:aws:lightsail:us-east-1:123456789101:InstanceSnapshot/d23b5706-3322-4d83-81e5-12345EXAMPLE).

" + "documentation":"

The Amazon Resource Name (ARN) of the snapshot (e.g., arn:aws:lightsail:us-east-2:123456789101:InstanceSnapshot/d23b5706-3322-4d83-81e5-12345EXAMPLE).

" }, "supportCode":{ "shape":"string", @@ -2412,13 +3014,17 @@ "shape":"string", "documentation":"

The progress of the snapshot.

" }, + "fromAttachedDisks":{ + "shape":"DiskList", + "documentation":"

An array of disk objects containing information about all block storage disks.

" + }, "fromInstanceName":{ "shape":"ResourceName", "documentation":"

The instance from which the snapshot was created.

" }, "fromInstanceArn":{ "shape":"NonEmptyString", - "documentation":"

The Amazon Resource Name (ARN) of the instance from which the snapshot was created (e.g., arn:aws:lightsail:us-east-1:123456789101:Instance/64b8404c-ccb1-430b-8daf-12345EXAMPLE).

" + "documentation":"

The Amazon Resource Name (ARN) of the instance from which the snapshot was created (e.g., arn:aws:lightsail:us-east-2:123456789101:Instance/64b8404c-ccb1-430b-8daf-12345EXAMPLE).

" }, "fromBlueprintId":{ "shape":"string", @@ -2504,7 +3110,7 @@ }, "arn":{ "shape":"NonEmptyString", - "documentation":"

The Amazon Resource Name (ARN) of the key pair (e.g., arn:aws:lightsail:us-east-1:123456789101:KeyPair/05859e3d-331d-48ba-9034-12345EXAMPLE).

" + "documentation":"

The Amazon Resource Name (ARN) of the key pair (e.g., arn:aws:lightsail:us-east-2:123456789101:KeyPair/05859e3d-331d-48ba-9034-12345EXAMPLE).

" }, "supportCode":{ "shape":"string", @@ -2710,7 +3316,7 @@ }, "operationDetails":{ "shape":"string", - "documentation":"

Details about the operation (e.g., Debian-1GB-Virginia-1).

" + "documentation":"

Details about the operation (e.g., Debian-1GB-Ohio-1).

" }, "operationType":{ "shape":"OperationType", @@ -2780,9 +3386,30 @@ "DeleteDomain", "CreateInstanceSnapshot", "DeleteInstanceSnapshot", - "CreateInstancesFromSnapshot" + "CreateInstancesFromSnapshot", + "CreateDisk", + "DeleteDisk", + "AttachDisk", + "DetachDisk", + "CreateDiskSnapshot", + "DeleteDiskSnapshot", + "CreateDiskFromSnapshot" ] }, + "PasswordData":{ + "type":"structure", + "members":{ + "ciphertext":{ + "shape":"string", + "documentation":"

The encrypted password. Ciphertext will be an empty string if access to your new instance is not ready yet. When you create an instance, it can take up to 15 minutes for the instance to be ready.

If you use the default key pair (LightsailDefaultKeyPair), the decrypted password will be available in the password field.

If you are using a custom key pair, you need to use your own means of decryption.

If you change the Administrator password on the instance, Lightsail will continue to return the original ciphertext value. When accessing the instance using RDP, you need to manually enter the Administrator password after changing it from the default.

" + }, + "keyPairName":{ + "shape":"ResourceName", + "documentation":"

The name of the key pair that you used when creating your instance. If no key pair name was specified when creating the instance, Lightsail uses the default key pair (LightsailDefaultKeyPair).

If you are using a custom key pair, you need to use your own means of decrypting your password using the ciphertext. Lightsail creates the ciphertext by encrypting your password with the public key part of this key pair.

" + } + }, + "documentation":"

The password data for the Windows Server-based instance, including the ciphertext and the key pair name.

" + }, "PeerVpcRequest":{ "type":"structure", "members":{ @@ -2896,15 +3523,15 @@ }, "displayName":{ "shape":"string", - "documentation":"

The display name (e.g., Virginia).

" + "documentation":"

The display name (e.g., Ohio).

" }, "name":{ "shape":"RegionName", - "documentation":"

The region name (e.g., us-east-1).

" + "documentation":"

The region name (e.g., us-east-2).

" }, "availabilityZones":{ "shape":"AvailabilityZoneList", - "documentation":"

The Availability Zones. Follows the format us-east-1a (case-sensitive).

" + "documentation":"

The Availability Zones. Follows the format us-east-2a (case-sensitive).

" } }, "documentation":"

Describes the AWS Region.

" @@ -2953,7 +3580,7 @@ "members":{ "availabilityZone":{ "shape":"string", - "documentation":"

The Availability Zone. Follows the format us-east-1a (case-sensitive).

" + "documentation":"

The Availability Zone. Follows the format us-east-2a (case-sensitive).

" }, "regionName":{ "shape":"RegionName", @@ -2974,7 +3601,9 @@ "KeyPair", "InstanceSnapshot", "Domain", - "PeeredVpc" + "PeeredVpc", + "Disk", + "DiskSnapshot" ] }, "ServiceException":{ @@ -3013,11 +3642,11 @@ "members":{ "name":{ "shape":"ResourceName", - "documentation":"

The name of the static IP (e.g., StaticIP-Virginia-EXAMPLE).

" + "documentation":"

The name of the static IP (e.g., StaticIP-Ohio-EXAMPLE).

" }, "arn":{ "shape":"NonEmptyString", - "documentation":"

The Amazon Resource Name (ARN) of the static IP (e.g., arn:aws:lightsail:us-east-1:123456789101:StaticIp/9cbb4a9e-f8e3-4dfe-b57e-12345EXAMPLE).

" + "documentation":"

The Amazon Resource Name (ARN) of the static IP (e.g., arn:aws:lightsail:us-east-2:123456789101:StaticIp/9cbb4a9e-f8e3-4dfe-b57e-12345EXAMPLE).

" }, "supportCode":{ "shape":"string", @@ -3041,7 +3670,7 @@ }, "attachedTo":{ "shape":"ResourceName", - "documentation":"

The instance where the static IP is attached (e.g., Amazon_Linux-1GB-Virginia-1).

" + "documentation":"

The instance where the static IP is attached (e.g., Amazon_Linux-1GB-Ohio-1).

" }, "isAttached":{ "shape":"boolean", @@ -3061,6 +3690,10 @@ "instanceName":{ "shape":"ResourceName", "documentation":"

The name of the instance (a virtual private server) to stop.

" + }, + "force":{ + "shape":"boolean", + "documentation":"

When set to True, forces a Lightsail instance that is stuck in a stopping state to stop.

Only use the force parameter if your instance is stuck in the stopping state. In any other state, your instance should stop normally without adding this parameter to your API request.
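As a sketch only (assuming the generated AWS SDK for Java 2.x LightsailClient; the instance name is a placeholder), a force stop of a stuck instance might look like:

    import software.amazon.awssdk.services.lightsail.LightsailClient;
    import software.amazon.awssdk.services.lightsail.model.StopInstanceRequest;
    import software.amazon.awssdk.services.lightsail.model.StopInstanceResponse;

    public class ForceStopSketch {
        public static void main(String[] args) {
            try (LightsailClient lightsail = LightsailClient.create()) {
                // force(true) is only meant for an instance stuck in the "stopping" state.
                StopInstanceResponse response = lightsail.stopInstance(StopInstanceRequest.builder()
                        .instanceName("Amazon_Linux-1GB-Ohio-1") // placeholder instance name
                        .force(true)
                        .build());
                System.out.println(response);
            }
        }
    }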

" } } }, diff --git a/services/logs/src/main/resources/codegen-resources/customization.config b/services/logs/src/main/resources/codegen-resources/customization.config index 4e2d312b258a..506aac1f2fc8 100644 --- a/services/logs/src/main/resources/codegen-resources/customization.config +++ b/services/logs/src/main/resources/codegen-resources/customization.config @@ -2,6 +2,10 @@ "authPolicyActions" : { "skip" : true }, + "blacklistedSimpleMethods" : [ + "deleteResourcePolicy", + "putResourcePolicy" + ], "simpleMethods" : { "DescribeDestinations" : { "methodForms" : [ [ ] ] diff --git a/services/logs/src/main/resources/codegen-resources/service-2.json b/services/logs/src/main/resources/codegen-resources/service-2.json index 3d60d0ec9ce8..2a8bc3e479b7 100644 --- a/services/logs/src/main/resources/codegen-resources/service-2.json +++ b/services/logs/src/main/resources/codegen-resources/service-2.json @@ -11,6 +11,21 @@ "uid":"logs-2014-03-28" }, "operations":{ + "AssociateKmsKey":{ + "name":"AssociateKmsKey", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"AssociateKmsKeyRequest"}, + "errors":[ + {"shape":"InvalidParameterException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"OperationAbortedException"}, + {"shape":"ServiceUnavailableException"} + ], + "documentation":"

Associates the specified AWS Key Management Service (AWS KMS) customer master key (CMK) with the specified log group.

Associating an AWS KMS CMK with a log group overrides any existing associations between the log group and a CMK. After a CMK is associated with a log group, all newly ingested data for the log group is encrypted using the CMK. This association is stored as long as the data encrypted with the CMK is still within Amazon CloudWatch Logs. This enables Amazon CloudWatch Logs to decrypt this data whenever it is requested.

Note that it can take up to 5 minutes for this operation to take effect.

If you attempt to associate a CMK with a log group but the CMK does not exist or the CMK is disabled, you will receive an InvalidParameterException error.
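A minimal sketch of this call with the AWS SDK for Java 2.x CloudWatch Logs client generated from this model (the log group name and CMK ARN are placeholders; the CMK must already exist and be enabled):

    import software.amazon.awssdk.services.cloudwatchlogs.CloudWatchLogsClient;
    import software.amazon.awssdk.services.cloudwatchlogs.model.AssociateKmsKeyRequest;

    public class AssociateKmsKeySketch {
        public static void main(String[] args) {
            try (CloudWatchLogsClient logs = CloudWatchLogsClient.create()) {
                // Fails with InvalidParameterException if the CMK does not exist or is disabled.
                logs.associateKmsKey(AssociateKmsKeyRequest.builder()
                        .logGroupName("my-log-group")                                      // placeholder
                        .kmsKeyId("arn:aws:kms:us-east-2:123456789012:key/EXAMPLE-KEY-ID") // placeholder CMK ARN
                        .build());
            }
        }
    }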

" + }, "CancelExportTask":{ "name":"CancelExportTask", "http":{ @@ -42,7 +57,7 @@ {"shape":"ResourceNotFoundException"}, {"shape":"ResourceAlreadyExistsException"} ], - "documentation":"

Creates an export task, which allows you to efficiently export data from a log group to an Amazon S3 bucket.

This is an asynchronous call. If all the required information is provided, this operation initiates an export task and responds with the ID of the task. After the task has started, you can use DescribeExportTasks to get the status of the export task. Each account can only have one active (RUNNING or PENDING) export task at a time. To cancel an export task, use CancelExportTask.

You can export logs from multiple log groups or multiple time ranges to the same S3 bucket. To separate out log data for each export task, you can specify a prefix that will be used as the Amazon S3 key prefix for all exported objects.

" + "documentation":"

Creates an export task, which allows you to efficiently export data from a log group to an Amazon S3 bucket.

This is an asynchronous call. If all the required information is provided, this operation initiates an export task and responds with the ID of the task. After the task has started, you can use DescribeExportTasks to get the status of the export task. Each account can only have one active (RUNNING or PENDING) export task at a time. To cancel an export task, use CancelExportTask.

You can export logs from multiple log groups or multiple time ranges to the same S3 bucket. To separate out log data for each export task, you can specify a prefix to be used as the Amazon S3 key prefix for all exported objects.

" }, "CreateLogGroup":{ "name":"CreateLogGroup", @@ -58,7 +73,7 @@ {"shape":"OperationAbortedException"}, {"shape":"ServiceUnavailableException"} ], - "documentation":"

Creates a log group with the specified name.

You can create up to 5000 log groups per account.

You must use the following guidelines when naming a log group:

" + "documentation":"

Creates a log group with the specified name.

You can create up to 5000 log groups per account.

You must use the following guidelines when naming a log group:

If you associate an AWS Key Management Service (AWS KMS) customer master key (CMK) with the log group, ingested data is encrypted using the CMK. This association is stored as long as the data encrypted with the CMK is still within Amazon CloudWatch Logs. This enables Amazon CloudWatch Logs to decrypt this data whenever it is requested.

If you attempt to associate a CMK with the log group but the CMK does not exist or the CMK is disabled, you will receive an InvalidParameterException error.

" }, "CreateLogStream":{ "name":"CreateLogStream", @@ -135,6 +150,20 @@ ], "documentation":"

Deletes the specified metric filter.

" }, + "DeleteResourcePolicy":{ + "name":"DeleteResourcePolicy", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"DeleteResourcePolicyRequest"}, + "errors":[ + {"shape":"InvalidParameterException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"ServiceUnavailableException"} + ], + "documentation":"

Deletes a resource policy from this account. This revokes the access of the identities in that policy to put log events to this account.

" + }, "DeleteRetentionPolicy":{ "name":"DeleteRetentionPolicy", "http":{ @@ -235,7 +264,21 @@ {"shape":"ResourceNotFoundException"}, {"shape":"ServiceUnavailableException"} ], - "documentation":"

Lists the specified metric filters. You can list all the metric filters or filter the results by log name, prefix, metric name, and metric namespace. The results are ASCII-sorted by filter name.

" + "documentation":"

Lists the specified metric filters. You can list all the metric filters or filter the results by log name, prefix, metric name, or metric namespace. The results are ASCII-sorted by filter name.

" + }, + "DescribeResourcePolicies":{ + "name":"DescribeResourcePolicies", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"DescribeResourcePoliciesRequest"}, + "output":{"shape":"DescribeResourcePoliciesResponse"}, + "errors":[ + {"shape":"InvalidParameterException"}, + {"shape":"ServiceUnavailableException"} + ], + "documentation":"

Lists the resource policies in this account.

" }, "DescribeSubscriptionFilters":{ "name":"DescribeSubscriptionFilters", @@ -252,6 +295,21 @@ ], "documentation":"

Lists the subscription filters for the specified log group. You can list all the subscription filters or filter the results by prefix. The results are ASCII-sorted by filter name.

" }, + "DisassociateKmsKey":{ + "name":"DisassociateKmsKey", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"DisassociateKmsKeyRequest"}, + "errors":[ + {"shape":"InvalidParameterException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"OperationAbortedException"}, + {"shape":"ServiceUnavailableException"} + ], + "documentation":"

Disassociates the associated AWS Key Management Service (AWS KMS) customer master key (CMK) from the specified log group.

After the AWS KMS CMK is disassociated from the log group, AWS CloudWatch Logs stops encrypting newly ingested data for the log group. All previously ingested data remains encrypted, and AWS CloudWatch Logs requires permissions for the CMK whenever the encrypted data is requested.

Note that it can take up to 5 minutes for this operation to take effect.

" + }, "FilterLogEvents":{ "name":"FilterLogEvents", "http":{ @@ -265,7 +323,7 @@ {"shape":"ResourceNotFoundException"}, {"shape":"ServiceUnavailableException"} ], - "documentation":"

Lists log events from the specified log group. You can list all the log events or filter the results using a filter pattern, a time range, and the name of the log stream.

By default, this operation returns as many log events as can fit in 1MB (up to 10,000 log events), or all the events found within the time range that you specify. If the results include a token, then there are more log events available, and you can get additional results by specifying the token in a subsequent call.

" + "documentation":"

Lists log events from the specified log group. You can list all the log events or filter the results using a filter pattern, a time range, and the name of the log stream.

By default, this operation returns as many log events as can fit in 1 MB (up to 10,000 log events), or all the events found within the time range that you specify. If the results include a token, then there are more log events available, and you can get additional results by specifying the token in a subsequent call.

" }, "GetLogEvents":{ "name":"GetLogEvents", @@ -280,7 +338,7 @@ {"shape":"ResourceNotFoundException"}, {"shape":"ServiceUnavailableException"} ], - "documentation":"

Lists log events from the specified log stream. You can list all the log events or filter using a time range.

By default, this operation returns as many log events as can fit in a response size of 1MB (up to 10,000 log events). If the results include tokens, there are more log events available. You can get additional log events by specifying one of the tokens in a subsequent call.

" + "documentation":"

Lists log events from the specified log stream. You can list all the log events or filter using a time range.

By default, this operation returns as many log events as can fit in a response size of 1 MB (up to 10,000 log events). You can get additional log events by specifying one of the tokens in a subsequent call.

" }, "ListTagsLogGroup":{ "name":"ListTagsLogGroup", @@ -294,7 +352,7 @@ {"shape":"ResourceNotFoundException"}, {"shape":"ServiceUnavailableException"} ], - "documentation":"

Lists the tags for the specified log group.

To add tags, use TagLogGroup. To remove tags, use UntagLogGroup.

" + "documentation":"

Lists the tags for the specified log group.

" }, "PutDestination":{ "name":"PutDestination", @@ -309,7 +367,7 @@ {"shape":"OperationAbortedException"}, {"shape":"ServiceUnavailableException"} ], - "documentation":"

Creates or updates a destination. A destination encapsulates a physical resource (such as a Kinesis stream) and enables you to subscribe to a real-time stream of log events of a different account, ingested using PutLogEvents. Currently, the only supported physical resource is a Amazon Kinesis stream belonging to the same account as the destination.

A destination controls what is written to its Amazon Kinesis stream through an access policy. By default, PutDestination does not set any access policy with the destination, which means a cross-account user cannot call PutSubscriptionFilter against this destination. To enable this, the destination owner must call PutDestinationPolicy after PutDestination.

" + "documentation":"

Creates or updates a destination. A destination encapsulates a physical resource (such as an Amazon Kinesis stream) and enables you to subscribe to a real-time stream of log events for a different account, ingested using PutLogEvents. Currently, the only supported physical resource is a Kinesis stream belonging to the same account as the destination.

Through an access policy, a destination controls what is written to its Kinesis stream. By default, PutDestination does not set any access policy with the destination, which means a cross-account user cannot call PutSubscriptionFilter against this destination. To enable this, the destination owner must call PutDestinationPolicy after PutDestination.

" }, "PutDestinationPolicy":{ "name":"PutDestinationPolicy", @@ -340,7 +398,7 @@ {"shape":"ResourceNotFoundException"}, {"shape":"ServiceUnavailableException"} ], - "documentation":"

Uploads a batch of log events to the specified log stream.

You must include the sequence token obtained from the response of the previous call. An upload in a newly created log stream does not require a sequence token. You can also get the sequence token using DescribeLogStreams.

The batch of events must satisfy the following constraints:

" + "documentation":"

Uploads a batch of log events to the specified log stream.

You must include the sequence token obtained from the response of the previous call. An upload in a newly created log stream does not require a sequence token. You can also get the sequence token using DescribeLogStreams. If you call PutLogEvents twice within a narrow time period using the same value for sequenceToken, both calls may be successful, or one may be rejected.
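For illustration, a minimal sketch of threading the sequence token through repeated uploads with the AWS SDK for Java 2.x client generated from this model (a sketch only; it assumes the log group and log stream already exist, and the names are placeholders):

    import software.amazon.awssdk.services.cloudwatchlogs.CloudWatchLogsClient;
    import software.amazon.awssdk.services.cloudwatchlogs.model.InputLogEvent;
    import software.amazon.awssdk.services.cloudwatchlogs.model.PutLogEventsRequest;
    import software.amazon.awssdk.services.cloudwatchlogs.model.PutLogEventsResponse;

    public class PutLogEventsSketch {
        public static void main(String[] args) {
            try (CloudWatchLogsClient logs = CloudWatchLogsClient.create()) {
                String sequenceToken = null; // a brand-new log stream does not need a token
                for (int i = 0; i < 3; i++) {
                    PutLogEventsResponse response = logs.putLogEvents(PutLogEventsRequest.builder()
                            .logGroupName("my-log-group")   // placeholder
                            .logStreamName("my-log-stream") // placeholder
                            .sequenceToken(sequenceToken)
                            .logEvents(InputLogEvent.builder()
                                    .timestamp(System.currentTimeMillis())
                                    .message("batch " + i)
                                    .build())
                            .build());
                    // Carry the returned token into the next upload on this stream.
                    sequenceToken = response.nextSequenceToken();
                }
            }
        }
    }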

The batch of events must satisfy the following constraints:

" }, "PutMetricFilter":{ "name":"PutMetricFilter", @@ -358,6 +416,21 @@ ], "documentation":"

Creates or updates a metric filter and associates it with the specified log group. Metric filters allow you to configure rules to extract metric data from log events ingested through PutLogEvents.

The maximum number of metric filters that can be associated with a log group is 100.

" }, + "PutResourcePolicy":{ + "name":"PutResourcePolicy", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"PutResourcePolicyRequest"}, + "output":{"shape":"PutResourcePolicyResponse"}, + "errors":[ + {"shape":"InvalidParameterException"}, + {"shape":"LimitExceededException"}, + {"shape":"ServiceUnavailableException"} + ], + "documentation":"

Creates or updates a resource policy that allows other AWS services, such as Amazon Route 53, to put log events to this account. An account can have up to 50 resource policies per region.

" + }, "PutRetentionPolicy":{ "name":"PutRetentionPolicy", "http":{ @@ -371,7 +444,7 @@ {"shape":"OperationAbortedException"}, {"shape":"ServiceUnavailableException"} ], - "documentation":"

Sets the retention of the specified log group. A retention policy allows you to configure the number of days you want to retain log events in the specified log group.

" + "documentation":"

Sets the retention of the specified log group. A retention policy allows you to configure the number of days for which to retain log events in the specified log group.

" }, "PutSubscriptionFilter":{ "name":"PutSubscriptionFilter", @@ -387,7 +460,7 @@ {"shape":"LimitExceededException"}, {"shape":"ServiceUnavailableException"} ], - "documentation":"

Creates or updates a subscription filter and associates it with the specified log group. Subscription filters allow you to subscribe to a real-time stream of log events ingested through PutLogEvents and have them delivered to a specific destination. Currently, the supported destinations are:

There can only be one subscription filter associated with a log group. If you are updating an existing filter, you must specify the correct name in filterName. Otherwise, the call will fail because you cannot associate a second filter with a log group.

" + "documentation":"

Creates or updates a subscription filter and associates it with the specified log group. Subscription filters allow you to subscribe to a real-time stream of log events ingested through PutLogEvents and have them delivered to a specific destination. Currently, the supported destinations are:

There can only be one subscription filter associated with a log group. If you are updating an existing filter, you must specify the correct name in filterName. Otherwise, the call fails because you cannot associate a second filter with a log group.

" }, "TagLogGroup":{ "name":"TagLogGroup", @@ -435,6 +508,23 @@ "min":1 }, "Arn":{"type":"string"}, + "AssociateKmsKeyRequest":{ + "type":"structure", + "required":[ + "logGroupName", + "kmsKeyId" + ], + "members":{ + "logGroupName":{ + "shape":"LogGroupName", + "documentation":"

The name of the log group.

" + }, + "kmsKeyId":{ + "shape":"KmsKeyId", + "documentation":"

The Amazon Resource Name (ARN) of the CMK to use when encrypting log data. For more information, see Amazon Resource Names - AWS Key Management Service (AWS KMS).

" + } + } + }, "CancelExportTaskRequest":{ "type":"structure", "required":["taskId"], @@ -468,11 +558,11 @@ }, "from":{ "shape":"Timestamp", - "documentation":"

The start time of the range for the request, expressed as the number of milliseconds since Jan 1, 1970 00:00:00 UTC. Events with a timestamp earlier than this time are not exported.

" + "documentation":"

The start time of the range for the request, expressed as the number of milliseconds after Jan 1, 1970 00:00:00 UTC. Events with a time stamp earlier than this time are not exported.

" }, "to":{ "shape":"Timestamp", - "documentation":"

The end time of the range for the request, expressed as the number of milliseconds since Jan 1, 1970 00:00:00 UTC. Events with a timestamp later than this time are not exported.

" + "documentation":"

The end time of the range for the request, expressed as the number of milliseconds after Jan 1, 1970 00:00:00 UTC. Events with a time stamp later than this time are not exported.

" }, "destination":{ "shape":"ExportDestinationBucket", @@ -501,6 +591,10 @@ "shape":"LogGroupName", "documentation":"

The name of the log group.

" }, + "kmsKeyId":{ + "shape":"KmsKeyId", + "documentation":"

The Amazon Resource Name (ARN) of the CMK to use when encrypting log data. For more information, see Amazon Resource Names - AWS Key Management Service (AWS KMS).

" + }, "tags":{ "shape":"Tags", "documentation":"

The key-value pairs to use for the tags.

" @@ -591,6 +685,15 @@ } } }, + "DeleteResourcePolicyRequest":{ + "type":"structure", + "members":{ + "policyName":{ + "shape":"PolicyName", + "documentation":"

The name of the policy to be revoked. This parameter is required.

" + } + } + }, "DeleteRetentionPolicyRequest":{ "type":"structure", "required":["logGroupName"], @@ -719,11 +822,11 @@ }, "logStreamNamePrefix":{ "shape":"LogStreamName", - "documentation":"

The prefix to match.

You cannot specify this parameter if orderBy is LastEventTime.

" + "documentation":"

The prefix to match.

If orderBy is LastEventTime, you cannot specify this parameter.

" }, "orderBy":{ "shape":"OrderBy", - "documentation":"

If the value is LogStreamName, the results are ordered by log stream name. If the value is LastEventTime, the results are ordered by the event time. The default value is LogStreamName.

If you order the results by event time, you cannot specify the logStreamNamePrefix parameter.

lastEventTimestamp represents the time of the most recent log event in the log stream in CloudWatch Logs. This number is expressed as the number of milliseconds since Jan 1, 1970 00:00:00 UTC. lastEventTimeStamp updates on an eventual consistency basis. It typically updates in less than an hour from ingestion, but may take longer in some rare situations.

" + "documentation":"

If the value is LogStreamName, the results are ordered by log stream name. If the value is LastEventTime, the results are ordered by the event time. The default value is LogStreamName.

If you order the results by event time, you cannot specify the logStreamNamePrefix parameter.

lastEventTimestamp represents the time of the most recent log event in the log stream in CloudWatch Logs. This number is expressed as the number of milliseconds after Jan 1, 1970 00:00:00 UTC. lastEventTimeStamp updates on an eventual consistency basis. It typically updates in less than an hour from ingestion, but may take longer in some rare situations.
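As a sketch only (assuming the generated AWS SDK for Java 2.x CloudWatchLogsClient; the log group name is a placeholder), listing the streams with the most recent events first might look like:

    import software.amazon.awssdk.services.cloudwatchlogs.CloudWatchLogsClient;
    import software.amazon.awssdk.services.cloudwatchlogs.model.DescribeLogStreamsRequest;
    import software.amazon.awssdk.services.cloudwatchlogs.model.OrderBy;

    public class RecentStreamsSketch {
        public static void main(String[] args) {
            try (CloudWatchLogsClient logs = CloudWatchLogsClient.create()) {
                // Ordering by LastEventTime is mutually exclusive with logStreamNamePrefix.
                logs.describeLogStreams(DescribeLogStreamsRequest.builder()
                        .logGroupName("my-log-group") // placeholder
                        .orderBy(OrderBy.LAST_EVENT_TIME)
                        .descending(true)
                        .build())
                        .logStreams()
                        .forEach(stream -> System.out.println(
                                stream.logStreamName() + " last event: " + stream.lastEventTimestamp()));
            }
        }
    }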

" }, "descending":{ "shape":"Descending", @@ -768,10 +871,7 @@ "shape":"DescribeLimit", "documentation":"

The maximum number of items returned. If you don't specify a value, the default is up to 50 items.

" }, - "metricName":{ - "shape":"MetricName", - "documentation":"

The name of the CloudWatch metric.

" - }, + "metricName":{"shape":"MetricName"}, "metricNamespace":{ "shape":"MetricNamespace", "documentation":"

The namespace of the CloudWatch metric.

" @@ -788,6 +888,26 @@ "nextToken":{"shape":"NextToken"} } }, + "DescribeResourcePoliciesRequest":{ + "type":"structure", + "members":{ + "nextToken":{"shape":"NextToken"}, + "limit":{ + "shape":"DescribeLimit", + "documentation":"

The maximum number of resource policies to be displayed with one call of this API.

" + } + } + }, + "DescribeResourcePoliciesResponse":{ + "type":"structure", + "members":{ + "resourcePolicies":{ + "shape":"ResourcePolicies", + "documentation":"

The resource policies that exist in this account.

" + }, + "nextToken":{"shape":"NextToken"} + } + }, "DescribeSubscriptionFiltersRequest":{ "type":"structure", "required":["logGroupName"], @@ -829,7 +949,7 @@ }, "targetArn":{ "shape":"TargetArn", - "documentation":"

The Amazon Resource Name (ARN) of the physical target where the log events will be delivered (for example, a Kinesis stream).

" + "documentation":"

The Amazon Resource Name (ARN) of the physical target where the log events are delivered (for example, a Kinesis stream).

" }, "roleArn":{ "shape":"RoleArn", @@ -845,7 +965,7 @@ }, "creationTime":{ "shape":"Timestamp", - "documentation":"

The creation time of the destination, expressed as the number of milliseconds since Jan 1, 1970 00:00:00 UTC.

" + "documentation":"

The creation time of the destination, expressed as the number of milliseconds after Jan 1, 1970 00:00:00 UTC.

" } }, "documentation":"

Represents a cross-account destination that receives subscription log events.

" @@ -864,8 +984,19 @@ "type":"list", "member":{"shape":"Destination"} }, + "DisassociateKmsKeyRequest":{ + "type":"structure", + "required":["logGroupName"], + "members":{ + "logGroupName":{ + "shape":"LogGroupName", + "documentation":"

The name of the log group.

" + } + } + }, "Distribution":{ "type":"string", + "documentation":"

The method used to distribute log data to the destination, which can be either random or grouped by log stream.

", "enum":[ "Random", "ByLogStream" @@ -905,11 +1036,11 @@ }, "from":{ "shape":"Timestamp", - "documentation":"

The start time, expressed as the number of milliseconds since Jan 1, 1970 00:00:00 UTC. Events with a timestamp prior to this time are not exported.

" + "documentation":"

The start time, expressed as the number of milliseconds after Jan 1, 1970 00:00:00 UTC. Events with a time stamp before this time are not exported.

" }, "to":{ "shape":"Timestamp", - "documentation":"

The end time, expressed as the number of milliseconds since Jan 1, 1970 00:00:00 UTC. Events with a timestamp later than this time are not exported.

" + "documentation":"

The end time, expressed as the number of milliseconds after Jan 1, 1970 00:00:00 UTC. Events with a time stamp later than this time are not exported.

" }, "destination":{ "shape":"ExportDestinationBucket", @@ -935,11 +1066,11 @@ "members":{ "creationTime":{ "shape":"Timestamp", - "documentation":"

The creation time of the export task, expressed as the number of milliseconds since Jan 1, 1970 00:00:00 UTC.

" + "documentation":"

The creation time of the export task, expressed as the number of milliseconds after Jan 1, 1970 00:00:00 UTC.

" }, "completionTime":{ "shape":"Timestamp", - "documentation":"

The completion time of the export task, expressed as the number of milliseconds since Jan 1, 1970 00:00:00 UTC.

" + "documentation":"

The completion time of the export task, expressed as the number of milliseconds after Jan 1, 1970 00:00:00 UTC.

" } }, "documentation":"

Represents the status of an export task.

" @@ -1004,11 +1135,11 @@ }, "startTime":{ "shape":"Timestamp", - "documentation":"

The start of the time range, expressed as the number of milliseconds since Jan 1, 1970 00:00:00 UTC. Events with a timestamp prior to this time are not returned.

" + "documentation":"

The start of the time range, expressed as the number of milliseconds after Jan 1, 1970 00:00:00 UTC. Events with a time stamp before this time are not returned.

" }, "endTime":{ "shape":"Timestamp", - "documentation":"

The end of the time range, expressed as the number of milliseconds since Jan 1, 1970 00:00:00 UTC. Events with a timestamp later than this time are not returned.

" + "documentation":"

The end of the time range, expressed as the number of milliseconds after Jan 1, 1970 00:00:00 UTC. Events with a time stamp later than this time are not returned.

" }, "filterPattern":{ "shape":"FilterPattern", @@ -1024,7 +1155,7 @@ }, "interleaved":{ "shape":"Interleaved", - "documentation":"

If the value is true, the operation makes a best effort to provide responses that contain events from multiple log streams within the log group interleaved in a single response. If the value is false all the matched log events in the first log stream are searched first, then those in the next log stream, and so on. The default is false.

" + "documentation":"

If the value is true, the operation makes a best effort to provide responses that contain events from multiple log streams within the log group, interleaved in a single response. If the value is false, all the matched log events in the first log stream are searched first, then those in the next log stream, and so on. The default is false.
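For illustration, a minimal sketch of calling this operation through the Java client that this model generates. The class names (CloudWatchLogsClient, FilterLogEventsRequest) are assumed from the SDK v2 naming conventions, and the log group name and filter pattern are placeholders.

```java
import java.time.Duration;
import java.time.Instant;

import software.amazon.awssdk.services.cloudwatchlogs.CloudWatchLogsClient;
import software.amazon.awssdk.services.cloudwatchlogs.model.FilterLogEventsRequest;
import software.amazon.awssdk.services.cloudwatchlogs.model.FilterLogEventsResponse;
import software.amazon.awssdk.services.cloudwatchlogs.model.FilteredLogEvent;

public class FilterLogEventsExample {
    public static void main(String[] args) {
        try (CloudWatchLogsClient logs = CloudWatchLogsClient.create()) {
            long end = Instant.now().toEpochMilli();                                   // milliseconds after Jan 1, 1970 UTC
            long start = Instant.now().minus(Duration.ofHours(1)).toEpochMilli();

            FilterLogEventsRequest request = FilterLogEventsRequest.builder()
                    .logGroupName("/my/app/log-group")                                 // placeholder log group
                    .startTime(start)
                    .endTime(end)
                    .filterPattern("ERROR")                                            // symbolic filter pattern
                    .interleaved(true)                                                 // best-effort interleaving across log streams
                    .build();

            FilterLogEventsResponse response = logs.filterLogEvents(request);
            for (FilteredLogEvent event : response.events()) {
                System.out.println(event.timestamp() + " " + event.message());
            }
        }
    }
}
```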

" } } }, @@ -1053,7 +1184,7 @@ }, "FilterPattern":{ "type":"string", - "documentation":"

A symbolic description of how CloudWatch Logs should interpret the data in each log event. For example, a log event may contain timestamps, IP addresses, strings, and so on. You use the filter pattern to specify what to look for in the log event message.

", + "documentation":"

A symbolic description of how CloudWatch Logs should interpret the data in each log event. For example, a log event may contain time stamps, IP addresses, strings, and so on. You use the filter pattern to specify what to look for in the log event message.

", "max":1024, "min":0 }, @@ -1066,7 +1197,7 @@ }, "timestamp":{ "shape":"Timestamp", - "documentation":"

The time the event occurred, expressed as the number of milliseconds since Jan 1, 1970 00:00:00 UTC.

" + "documentation":"

The time the event occurred, expressed as the number of milliseconds after Jan 1, 1970 00:00:00 UTC.

" }, "message":{ "shape":"EventMessage", @@ -1074,7 +1205,7 @@ }, "ingestionTime":{ "shape":"Timestamp", - "documentation":"

The time the event was ingested, expressed as the number of milliseconds since Jan 1, 1970 00:00:00 UTC.

" + "documentation":"

The time the event was ingested, expressed as the number of milliseconds after Jan 1, 1970 00:00:00 UTC.

" }, "eventId":{ "shape":"EventId", @@ -1104,11 +1235,11 @@ }, "startTime":{ "shape":"Timestamp", - "documentation":"

The start of the time range, expressed as the number of milliseconds since Jan 1, 1970 00:00:00 UTC. Events with a timestamp earlier than this time are not included.

" + "documentation":"

The start of the time range, expressed as the number of milliseconds after Jan 1, 1970 00:00:00 UTC. Events with a time stamp earlier than this time are not included.

" }, "endTime":{ "shape":"Timestamp", - "documentation":"

The end of the time range, expressed as the number of milliseconds since Jan 1, 1970 00:00:00 UTC. Events with a timestamp later than this time are not included.

" + "documentation":"

The end of the time range, expressed as the number of milliseconds after Jan 1, 1970 00:00:00 UTC. Events with a time stamp later than this time are not included.

" }, "nextToken":{ "shape":"NextToken", @@ -1116,7 +1247,7 @@ }, "limit":{ "shape":"EventsLimit", - "documentation":"

The maximum number of log events returned. If you don't specify a value, the maximum is as many log events as can fit in a response size of 1MB, up to 10,000 log events.

" + "documentation":"

The maximum number of log events returned. If you don't specify a value, the maximum is as many log events as can fit in a response size of 1 MB, up to 10,000 log events.
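Because a single response is capped at 1 MB or 10,000 events, callers typically page forward with the returned token. A sketch under the same v2 naming assumptions, with placeholder log group and stream names; it relies on the documented behavior that the forward token stops changing at the end of the stream.

```java
import java.util.Objects;

import software.amazon.awssdk.services.cloudwatchlogs.CloudWatchLogsClient;
import software.amazon.awssdk.services.cloudwatchlogs.model.GetLogEventsRequest;
import software.amazon.awssdk.services.cloudwatchlogs.model.GetLogEventsResponse;
import software.amazon.awssdk.services.cloudwatchlogs.model.OutputLogEvent;

public class GetLogEventsExample {
    public static void main(String[] args) {
        try (CloudWatchLogsClient logs = CloudWatchLogsClient.create()) {
            String token = null;
            while (true) {
                GetLogEventsRequest.Builder builder = GetLogEventsRequest.builder()
                        .logGroupName("/my/app/log-group")   // placeholder names
                        .logStreamName("my-stream")
                        .startFromHead(true);                // read the oldest events first
                if (token != null) {
                    builder.nextToken(token);
                }
                GetLogEventsResponse page = logs.getLogEvents(builder.build());
                for (OutputLogEvent event : page.events()) {
                    System.out.println(event.message());
                }
                // When the forward token stops changing, the end of the stream has been reached.
                if (Objects.equals(page.nextForwardToken(), token)) {
                    break;
                }
                token = page.nextForwardToken();
            }
        }
    }
}
```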

" }, "startFromHead":{ "shape":"StartFromHead", @@ -1150,7 +1281,7 @@ "members":{ "timestamp":{ "shape":"Timestamp", - "documentation":"

The time the event occurred, expressed as the number of milliseconds since Jan 1, 1970 00:00:00 UTC.

" + "documentation":"

The time the event occurred, expressed as the number of milliseconds after Jan 1, 1970 00:00:00 UTC.

" }, "message":{ "shape":"EventMessage", @@ -1194,6 +1325,10 @@ "documentation":"

The sequence token is not valid.

", "exception":true }, + "KmsKeyId":{ + "type":"string", + "max":256 + }, "LimitExceededException":{ "type":"structure", "members":{ @@ -1216,7 +1351,7 @@ "members":{ "tags":{ "shape":"Tags", - "documentation":"

The tags.

" + "documentation":"

The tags for the log group.

" } } }, @@ -1230,7 +1365,7 @@ }, "creationTime":{ "shape":"Timestamp", - "documentation":"

The creation time of the log group, expressed as the number of milliseconds since Jan 1, 1970 00:00:00 UTC.

" + "documentation":"

The creation time of the log group, expressed as the number of milliseconds after Jan 1, 1970 00:00:00 UTC.

" }, "retentionInDays":{"shape":"Days"}, "metricFilterCount":{ @@ -1244,6 +1379,10 @@ "storedBytes":{ "shape":"StoredBytes", "documentation":"

The number of bytes stored.

" + }, + "kmsKeyId":{ + "shape":"KmsKeyId", + "documentation":"

The Amazon Resource Name (ARN) of the CMK to use when encrypting log data.

" } }, "documentation":"

Represents a log group.

" @@ -1267,19 +1406,19 @@ }, "creationTime":{ "shape":"Timestamp", - "documentation":"

The creation time of the stream, expressed as the number of milliseconds since Jan 1, 1970 00:00:00 UTC.

" + "documentation":"

The creation time of the stream, expressed as the number of milliseconds after Jan 1, 1970 00:00:00 UTC.

" }, "firstEventTimestamp":{ "shape":"Timestamp", - "documentation":"

The time of the first event, expressed as the number of milliseconds since Jan 1, 1970 00:00:00 UTC.

" + "documentation":"

The time of the first event, expressed as the number of milliseconds after Jan 1, 1970 00:00:00 UTC.

" }, "lastEventTimestamp":{ "shape":"Timestamp", - "documentation":"

the time of the most recent log event in the log stream in CloudWatch Logs. This number is expressed as the number of milliseconds since Jan 1, 1970 00:00:00 UTC. lastEventTime updates on an eventual consistency basis. It typically updates in less than an hour from ingestion, but may take longer in some rare situations.

" + "documentation":"

The time of the most recent log event in the log stream in CloudWatch Logs. This number is expressed as the number of milliseconds after Jan 1, 1970 00:00:00 UTC. lastEventTime updates on an eventual consistency basis. It typically updates in less than an hour from ingestion, but may take longer in some rare situations.

" }, "lastIngestionTime":{ "shape":"Timestamp", - "documentation":"

The ingestion time, expressed as the number of milliseconds since Jan 1, 1970 00:00:00 UTC.

" + "documentation":"

The ingestion time, expressed as the number of milliseconds after Jan 1, 1970 00:00:00 UTC.

" }, "uploadSequenceToken":{ "shape":"SequenceToken", @@ -1321,7 +1460,7 @@ }, "creationTime":{ "shape":"Timestamp", - "documentation":"

The creation time of the metric filter, expressed as the number of milliseconds since Jan 1, 1970 00:00:00 UTC.

" + "documentation":"

The creation time of the metric filter, expressed as the number of milliseconds after Jan 1, 1970 00:00:00 UTC.

" }, "logGroupName":{ "shape":"LogGroupName", @@ -1392,7 +1531,7 @@ "documentation":"

(Optional) The value to emit when a filter pattern does not match a log event. This value can be null.

" } }, - "documentation":"

Indicates how to transform ingested log events into metric data in a CloudWatch metric.

" + "documentation":"

Indicates how to transform ingested log events into metric data in a CloudWatch metric.

" }, "MetricTransformations":{ "type":"list", @@ -1429,7 +1568,7 @@ "members":{ "timestamp":{ "shape":"Timestamp", - "documentation":"

The time the event occurred, expressed as the number of milliseconds since Jan 1, 1970 00:00:00 UTC.

" + "documentation":"

The time the event occurred, expressed as the number of milliseconds after Jan 1, 1970 00:00:00 UTC.

" }, "message":{ "shape":"EventMessage", @@ -1437,7 +1576,7 @@ }, "ingestionTime":{ "shape":"Timestamp", - "documentation":"

The time the event was ingested, expressed as the number of milliseconds since Jan 1, 1970 00:00:00 UTC.

" + "documentation":"

The time the event was ingested, expressed as the number of milliseconds after Jan 1, 1970 00:00:00 UTC.

" } }, "documentation":"

Represents a log event.

" @@ -1446,6 +1585,12 @@ "type":"list", "member":{"shape":"OutputLogEvent"} }, + "PolicyDocument":{ + "type":"string", + "max":5120, + "min":1 + }, + "PolicyName":{"type":"string"}, "PutDestinationPolicyRequest":{ "type":"structure", "required":[ @@ -1477,11 +1622,11 @@ }, "targetArn":{ "shape":"TargetArn", - "documentation":"

The ARN of an Amazon Kinesis stream to deliver matching log events to.

" + "documentation":"

The ARN of an Amazon Kinesis stream to which to deliver matching log events.

" }, "roleArn":{ "shape":"RoleArn", - "documentation":"

The ARN of an IAM role that grants CloudWatch Logs permissions to call Amazon Kinesis PutRecord on the destination stream.

" + "documentation":"

The ARN of an IAM role that grants CloudWatch Logs permissions to call the Amazon Kinesis PutRecord operation on the destination stream.

" } } }, @@ -1516,7 +1661,7 @@ }, "sequenceToken":{ "shape":"SequenceToken", - "documentation":"

The sequence token.

" + "documentation":"

The sequence token obtained from the response of the previous PutLogEvents call. An upload in a newly created log stream does not require a sequence token. You can also get the sequence token using DescribeLogStreams. If you call PutLogEvents twice within a narrow time period using the same value for sequenceToken, both calls may be successful, or one may be rejected.
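A sketch of how the sequence token is chained from one PutLogEvents call to the next on the same stream, assuming the v2-style CloudWatchLogsClient and model classes; the log group and stream names are placeholders.

```java
import java.time.Instant;

import software.amazon.awssdk.services.cloudwatchlogs.CloudWatchLogsClient;
import software.amazon.awssdk.services.cloudwatchlogs.model.InputLogEvent;
import software.amazon.awssdk.services.cloudwatchlogs.model.PutLogEventsRequest;
import software.amazon.awssdk.services.cloudwatchlogs.model.PutLogEventsResponse;

public class PutLogEventsExample {
    public static void main(String[] args) {
        try (CloudWatchLogsClient logs = CloudWatchLogsClient.create()) {
            String sequenceToken = null; // an upload to a newly created stream needs no token

            for (int i = 0; i < 3; i++) {
                PutLogEventsRequest.Builder request = PutLogEventsRequest.builder()
                        .logGroupName("/my/app/log-group")   // placeholder names
                        .logStreamName("my-stream")
                        .logEvents(InputLogEvent.builder()
                                .timestamp(Instant.now().toEpochMilli())
                                .message("batch " + i)
                                .build());
                if (sequenceToken != null) {
                    request.sequenceToken(sequenceToken);
                }
                PutLogEventsResponse response = logs.putLogEvents(request.build());
                // Feed the returned token into the next call on the same stream.
                sequenceToken = response.nextSequenceToken();
            }
        }
    }
}
```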

" } } }, @@ -1556,7 +1701,29 @@ }, "metricTransformations":{ "shape":"MetricTransformations", - "documentation":"

A collection of information needed to define how metric data gets emitted.

" + "documentation":"

A collection of information that defines how metric data gets emitted.

" + } + } + }, + "PutResourcePolicyRequest":{ + "type":"structure", + "members":{ + "policyName":{ + "shape":"PolicyName", + "documentation":"

Name of the new policy. This parameter is required.

" + }, + "policyDocument":{ + "shape":"PolicyDocument", + "documentation":"

Details of the new policy, including the identity of the principal that is enabled to put logs to this account. This is formatted as a JSON string.

The following example creates a resource policy enabling the Route 53 service to put DNS query logs into the specified log group. Replace \"logArn\" with the ARN of your CloudWatch Logs resource, such as a log group or log stream.

{ \"Version\": \"2012-10-17\" \"Statement\": [ { \"Sid\": \"Route53LogsToCloudWatchLogs\", \"Effect\": \"Allow\", \"Principal\": { \"Service\": [ \"route53.amazonaws.com\" ] }, \"Action\":\"logs:PutLogEvents\", \"Resource\": logArn } ] }

" + } + } + }, + "PutResourcePolicyResponse":{ + "type":"structure", + "members":{ + "resourcePolicy":{ + "shape":"ResourcePolicy", + "documentation":"

The new policy.

" } } }, @@ -1589,7 +1756,7 @@ }, "filterName":{ "shape":"FilterName", - "documentation":"

A name for the subscription filter. If you are updating an existing filter, you must specify the correct name in filterName. Otherwise, the call will fail because you cannot associate a second filter with a log group. To find the name of the filter currently associated with a log group, use DescribeSubscriptionFilters.

" + "documentation":"

A name for the subscription filter. If you are updating an existing filter, you must specify the correct name in filterName. Otherwise, the call fails because you cannot associate a second filter with a log group. To find the name of the filter currently associated with a log group, use DescribeSubscriptionFilters.

" }, "filterPattern":{ "shape":"FilterPattern", @@ -1597,7 +1764,7 @@ }, "destinationArn":{ "shape":"DestinationArn", - "documentation":"

The ARN of the destination to deliver matching log events to. Currently, the supported destinations are:

" + "documentation":"

The ARN of the destination to deliver matching log events to. Currently, the supported destinations are:

" }, "roleArn":{ "shape":"RoleArn", @@ -1605,7 +1772,7 @@ }, "distribution":{ "shape":"Distribution", - "documentation":"

The method used to distribute log data to the destination, when the destination is an Amazon Kinesis stream. By default, log data is grouped by log stream. For a more even distribution, you can group log data randomly.

" + "documentation":"

The method used to distribute log data to the destination. By default, log data is grouped by log stream, but the grouping can be set to random for a more even distribution. This property is only applicable when the destination is an Amazon Kinesis stream.
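A sketch of creating a subscription filter with random distribution, assuming the v2-style client, model classes, and a Distribution enum derived from the Random/ByLogStream values in this model; the log group, filter name, stream ARN, and role ARN are placeholders.

```java
import software.amazon.awssdk.services.cloudwatchlogs.CloudWatchLogsClient;
import software.amazon.awssdk.services.cloudwatchlogs.model.Distribution;
import software.amazon.awssdk.services.cloudwatchlogs.model.PutSubscriptionFilterRequest;

public class PutSubscriptionFilterExample {
    public static void main(String[] args) {
        try (CloudWatchLogsClient logs = CloudWatchLogsClient.create()) {
            logs.putSubscriptionFilter(PutSubscriptionFilterRequest.builder()
                    .logGroupName("/my/app/log-group")                                     // placeholder
                    .filterName("kinesis-forwarder")                                       // placeholder
                    .filterPattern("")                                                     // an empty pattern matches every event
                    .destinationArn("arn:aws:kinesis:us-east-1:123456789012:stream/logs")  // placeholder
                    .roleArn("arn:aws:iam::123456789012:role/CWLtoKinesisRole")            // placeholder
                    .distribution(Distribution.RANDOM)   // spread data across shards instead of grouping by log stream
                    .build());
        }
    }
}
```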

" } } }, @@ -1641,6 +1808,28 @@ "documentation":"

The specified resource does not exist.

", "exception":true }, + "ResourcePolicies":{ + "type":"list", + "member":{"shape":"ResourcePolicy"} + }, + "ResourcePolicy":{ + "type":"structure", + "members":{ + "policyName":{ + "shape":"PolicyName", + "documentation":"

The name of the resource policy.

" + }, + "policyDocument":{ + "shape":"PolicyDocument", + "documentation":"

The details of the policy.

" + }, + "lastUpdatedTime":{ + "shape":"Timestamp", + "documentation":"

Time stamp showing when this policy was last updated, expressed as the number of milliseconds after Jan 1, 1970 00:00:00 UTC.

" + } + }, + "documentation":"

A policy enabling one or more entities to put logs to a log group in this account.

" + }, "RoleArn":{ "type":"string", "min":1 @@ -1700,13 +1889,10 @@ "shape":"RoleArn", "documentation":"

" }, - "distribution":{ - "shape":"Distribution", - "documentation":"

The method used to distribute log data to the destination, when the destination is an Amazon Kinesis stream.

" - }, + "distribution":{"shape":"Distribution"}, "creationTime":{ "shape":"Timestamp", - "documentation":"

The creation time of the subscription filter, expressed as the number of milliseconds since Jan 1, 1970 00:00:00 UTC.

" + "documentation":"

The creation time of the subscription filter, expressed as the number of milliseconds after Jan 1, 1970 00:00:00 UTC.

" } }, "documentation":"

Represents a subscription filter.

" @@ -1812,5 +1998,5 @@ }, "Value":{"type":"string"} }, - "documentation":"

You can use Amazon CloudWatch Logs to monitor, store, and access your log files from EC2 instances, Amazon CloudTrail, or other sources. You can then retrieve the associated log data from CloudWatch Logs using the Amazon CloudWatch console, the CloudWatch Logs commands in the AWS CLI, the CloudWatch Logs API, or the CloudWatch Logs SDK.

You can use CloudWatch Logs to:

" + "documentation":"

You can use Amazon CloudWatch Logs to monitor, store, and access your log files from Amazon EC2 instances, AWS CloudTrail, or other sources. You can then retrieve the associated log data from CloudWatch Logs using the CloudWatch console, CloudWatch Logs commands in the AWS CLI, CloudWatch Logs API, or CloudWatch Logs SDK.

You can use CloudWatch Logs to:

" } diff --git a/services/marketplacecommerceanalytics/src/main/resources/codegen-resources/service-2.json b/services/marketplacecommerceanalytics/src/main/resources/codegen-resources/service-2.json index ce18c64a5c17..e61077006376 100644 --- a/services/marketplacecommerceanalytics/src/main/resources/codegen-resources/service-2.json +++ b/services/marketplacecommerceanalytics/src/main/resources/codegen-resources/service-2.json @@ -96,7 +96,7 @@ "members":{ "dataSetType":{ "shape":"DataSetType", - "documentation":"

The desired data set type.

" + "documentation":"

The desired data set type.

" }, "dataSetPublicationDate":{ "shape":"DataSetPublicationDate", diff --git a/services/mechanicalturkrequester/src/main/resources/codegen-resources/service-2.json b/services/mechanicalturkrequester/src/main/resources/codegen-resources/service-2.json index d2c7709561e2..3d613604bcc6 100644 --- a/services/mechanicalturkrequester/src/main/resources/codegen-resources/service-2.json +++ b/services/mechanicalturkrequester/src/main/resources/codegen-resources/service-2.json @@ -735,7 +735,7 @@ "shape":"CustomerId", "documentation":"

The ID of the Worker to whom the bonus was paid.

" }, - "BonusAmount":{"shape":"NumericValue"}, + "BonusAmount":{"shape":"CurrencyAmount"}, "AssignmentId":{ "shape":"EntityId", "documentation":"

The ID of the assignment associated with this bonus payment.

" @@ -778,7 +778,10 @@ }, "CreateAdditionalAssignmentsForHITRequest":{ "type":"structure", - "required":["HITId"], + "required":[ + "HITId", + "NumberOfAdditionalAssignments" + ], "members":{ "HITId":{ "shape":"EntityId", @@ -826,7 +829,7 @@ "documentation":"

The amount of time, in seconds, that a Worker has to complete the HIT after accepting it. If a Worker does not complete the assignment within the specified duration, the assignment is considered abandoned. If the HIT is still active (that is, its lifetime has not elapsed), the assignment becomes available for other users to find and accept.

" }, "Reward":{ - "shape":"NumericValue", + "shape":"CurrencyAmount", "documentation":"

The amount of money the Requester will pay a Worker for successfully completing the HIT.

" }, "Title":{ @@ -902,7 +905,7 @@ "documentation":"

The amount of time, in seconds, that a Worker has to complete the HIT after accepting it. If a Worker does not complete the assignment within the specified duration, the assignment is considered abandoned. If the HIT is still active (that is, its lifetime has not elapsed), the assignment becomes available for other users to find and accept.

" }, "Reward":{ - "shape":"NumericValue", + "shape":"CurrencyAmount", "documentation":"

The amount of money the Requester will pay a Worker for successfully completing the HIT.

" }, "Title":{ @@ -1071,6 +1074,11 @@ "members":{ } }, + "CurrencyAmount":{ + "type":"string", + "documentation":"

A string representing a currency amount.

", + "pattern":"^[0-9]+(\\.)?[0-9]{0,2}$" + }, "CustomerId":{ "type":"string", "max":64, @@ -1192,8 +1200,8 @@ "GetAccountBalanceResponse":{ "type":"structure", "members":{ - "AvailableBalance":{"shape":"NumericValue"}, - "OnHoldBalance":{"shape":"NumericValue"} + "AvailableBalance":{"shape":"CurrencyAmount"}, + "OnHoldBalance":{"shape":"CurrencyAmount"} } }, "GetAssignmentRequest":{ @@ -1356,7 +1364,7 @@ "shape":"Integer", "documentation":"

The number of times the HIT can be accepted and completed before the HIT becomes unavailable.

" }, - "Reward":{"shape":"NumericValue"}, + "Reward":{"shape":"CurrencyAmount"}, "AutoApprovalDelayInSeconds":{ "shape":"Long", "documentation":"

The amount of time, in seconds, after the Worker submits an assignment for the HIT that the results are automatically approved by Amazon Mechanical Turk. This is the amount of time the Requester has to reject an assignment submitted by a Worker before the assignment is auto-approved and the Worker is paid.

" @@ -1398,6 +1406,10 @@ }, "HITLayoutParameter":{ "type":"structure", + "required":[ + "Name", + "Value" + ], "members":{ "Name":{ "shape":"String", @@ -1808,16 +1820,18 @@ "type":"structure", "required":[ "Destination", - "Transport" + "Transport", + "Version", + "EventTypes" ], "members":{ "Destination":{ "shape":"String", - "documentation":"

The destination for notification messages. or email notifications (if Transport is Email), this is an email address. For Amazon Simple Queue Service (Amazon SQS) notifications (if Transport is SQS), this is the URL for your Amazon SQS queue.

" + "documentation":"

The target for notification messages. The Destination’s format is determined by the specified Transport:

" }, "Transport":{ "shape":"NotificationTransport", - "documentation":"

The method Amazon Mechanical Turk uses to send the notification. Valid Values: Email | SQS.

" + "documentation":"

The method Amazon Mechanical Turk uses to send the notification. Valid Values: Email | SQS | SNS.

" }, "Version":{ "shape":"String", @@ -1834,7 +1848,8 @@ "type":"string", "enum":[ "Email", - "SQS" + "SQS", + "SNS" ] }, "NotifyWorkersFailureCode":{ @@ -1897,11 +1912,6 @@ } } }, - "NumericValue":{ - "type":"string", - "documentation":"

A string representing a numeric value.

", - "pattern":"^[0-9]+(\\.)?[0-9]*$" - }, "PaginationToken":{ "type":"string", "documentation":"

If the previous response was incomplete (because there is more data to retrieve), Amazon Mechanical Turk returns a pagination token in the response. You can use this pagination token to retrieve the next set of results.

", @@ -2125,7 +2135,10 @@ }, "RejectAssignmentRequest":{ "type":"structure", - "required":["AssignmentId"], + "required":[ + "AssignmentId", + "RequesterFeedback" + ], "members":{ "AssignmentId":{ "shape":"EntityId", @@ -2228,6 +2241,7 @@ }, "ReviewPolicy":{ "type":"structure", + "required":["PolicyName"], "members":{ "PolicyName":{ "shape":"String", @@ -2311,7 +2325,8 @@ "required":[ "WorkerId", "BonusAmount", - "AssignmentId" + "AssignmentId", + "Reason" ], "members":{ "WorkerId":{ @@ -2319,7 +2334,7 @@ "documentation":"

The ID of the Worker being paid the bonus.

" }, "BonusAmount":{ - "shape":"NumericValue", + "shape":"CurrencyAmount", "documentation":"

The Bonus amount is a US Dollar amount specified using a string (for example, \"5\" represents $5.00 USD and \"101.42\" represents $101.42 USD). Do not include currency symbols or currency codes.
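A small pure-Java sketch of producing a string that satisfies the CurrencyAmount pattern declared in this model (digits with at most two decimal places, no currency symbols or codes); the helper name is illustrative only.

```java
import java.math.BigDecimal;
import java.math.RoundingMode;
import java.util.regex.Pattern;

public class CurrencyAmountExample {
    // The pattern declared for the CurrencyAmount shape in this model.
    private static final Pattern CURRENCY_AMOUNT = Pattern.compile("^[0-9]+(\\.)?[0-9]{0,2}$");

    static String toBonusAmount(double dollars) {
        // "5" represents $5.00 USD and "101.42" represents $101.42 USD; no symbols or codes.
        String amount = BigDecimal.valueOf(dollars).setScale(2, RoundingMode.HALF_UP).toPlainString();
        if (!CURRENCY_AMOUNT.matcher(amount).matches()) {
            throw new IllegalArgumentException("Not a valid CurrencyAmount: " + amount);
        }
        return amount;
    }

    public static void main(String[] args) {
        System.out.println(toBonusAmount(5));       // prints "5.00"
        System.out.println(toBonusAmount(101.42));  // prints "101.42"
    }
}
```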

" }, "AssignmentId":{ @@ -2382,7 +2397,10 @@ "TurkErrorCode":{"type":"string"}, "UpdateExpirationForHITRequest":{ "type":"structure", - "required":["HITId"], + "required":[ + "HITId", + "ExpireAt" + ], "members":{ "HITId":{ "shape":"EntityId", diff --git a/services/opsworkscm/src/main/resources/codegen-resources/service-2.json b/services/opsworkscm/src/main/resources/codegen-resources/service-2.json index 2ba5cb9c1e09..cbeee334a7e9 100644 --- a/services/opsworkscm/src/main/resources/codegen-resources/service-2.json +++ b/services/opsworkscm/src/main/resources/codegen-resources/service-2.json @@ -7,6 +7,7 @@ "protocol":"json", "serviceAbbreviation":"OpsWorksCM", "serviceFullName":"AWS OpsWorks for Chef Automate", + "serviceId":"OpsWorksCM", "signatureVersion":"v4", "signingName":"opsworks-cm", "targetPrefix":"OpsWorksCM_V2016_11_01", @@ -26,7 +27,7 @@ {"shape":"ResourceNotFoundException"}, {"shape":"ValidationException"} ], - "documentation":"

Associates a new node with the Chef server. This command is an alternative to knife bootstrap. For more information about how to disassociate a node, see DisassociateNode.

A node can can only be associated with servers that are in a HEALTHY state. Otherwise, an InvalidStateException is thrown. A ResourceNotFoundException is thrown when the server does not exist. A ValidationException is raised when parameters of the request are not valid. The AssociateNode API call can be integrated into Auto Scaling configurations, AWS Cloudformation templates, or the user data of a server's instance.

Example: aws opsworks-cm associate-node --server-name MyServer --node-name MyManagedNode --engine-attributes \"Name=MyOrganization,Value=default\" \"Name=Chef_node_public_key,Value=Public_key_contents\"

" + "documentation":"

Associates a new node with the server. For more information about how to disassociate a node, see DisassociateNode.

On a Chef server: This command is an alternative to knife bootstrap.

Example (Chef): aws opsworks-cm associate-node --server-name MyServer --node-name MyManagedNode --engine-attributes \"Name=CHEF_ORGANIZATION,Value=default\" \"Name=CHEF_NODE_PUBLIC_KEY,Value=public-key-pem\"

On a Puppet server, this command is an alternative to the puppet cert sign command that signs a Puppet node CSR.

Example (Chef): aws opsworks-cm associate-node --server-name MyServer --node-name MyManagedNode --engine-attributes \"Name=PUPPET_NODE_CSR,Value=csr-pem\"

A node can only be associated with servers that are in a HEALTHY state. Otherwise, an InvalidStateException is thrown. A ResourceNotFoundException is thrown when the server does not exist. A ValidationException is raised when parameters of the request are not valid. The AssociateNode API call can be integrated into Auto Scaling configurations, AWS CloudFormation templates, or the user data of a server's instance.

" }, "CreateBackup":{ "name":"CreateBackup", @@ -58,7 +59,7 @@ {"shape":"ResourceNotFoundException"}, {"shape":"ValidationException"} ], - "documentation":"

Creates and immedately starts a new server. The server is ready to use when it is in the HEALTHY state. By default, you can create a maximum of 10 servers.

This operation is asynchronous.

A LimitExceededException is thrown when you have created the maximum number of servers (10). A ResourceAlreadyExistsException is thrown when a server with the same name already exists in the account. A ResourceNotFoundException is thrown when you specify a backup ID that is not valid or is for a backup that does not exist. A ValidationException is thrown when parameters of the request are not valid.

If you do not specify a security group by adding the SecurityGroupIds parameter, AWS OpsWorks creates a new security group. The default security group opens the Chef server to the world on TCP port 443. If a KeyName is present, AWS OpsWorks enables SSH access. SSH is also open to the world on TCP port 22.

By default, the Chef Server is accessible from any IP address. We recommend that you update your security group rules to allow access from known IP addresses and address ranges only. To edit security group rules, open Security Groups in the navigation pane of the EC2 management console.

" + "documentation":"

Creates and immediately starts a new server. The server is ready to use when it is in the HEALTHY state. By default, you can create a maximum of 10 servers.

This operation is asynchronous.

A LimitExceededException is thrown when you have created the maximum number of servers (10). A ResourceAlreadyExistsException is thrown when a server with the same name already exists in the account. A ResourceNotFoundException is thrown when you specify a backup ID that is not valid or is for a backup that does not exist. A ValidationException is thrown when parameters of the request are not valid.

If you do not specify a security group by adding the SecurityGroupIds parameter, AWS OpsWorks creates a new security group.

Chef Automate: The default security group opens the Chef server to the world on TCP port 443. If a KeyName is present, AWS OpsWorks enables SSH access. SSH is also open to the world on TCP port 22.

Puppet Enterprise: The default security group opens TCP ports 22, 443, 4433, 8140, 8142, 8143, and 8170. If a KeyName is present, AWS OpsWorks enables SSH access. SSH is also open to the world on TCP port 22.

By default, your server is accessible from any IP address. We recommend that you update your security group rules to allow access from known IP addresses and address ranges only. To edit security group rules, open Security Groups in the navigation pane of the EC2 management console.

" }, "DeleteBackup":{ "name":"DeleteBackup", @@ -88,7 +89,7 @@ {"shape":"ResourceNotFoundException"}, {"shape":"ValidationException"} ], - "documentation":"

Deletes the server and the underlying AWS CloudFormation stack (including the server's EC2 instance). When you run this command, the server state is updated to DELETING. After the server is deleted, it is no longer returned by DescribeServer requests. If the AWS CloudFormation stack cannot be deleted, the server cannot be deleted.

This operation is asynchronous.

An InvalidStateException is thrown when a server deletion is already in progress. A ResourceNotFoundException is thrown when the server does not exist. A ValidationException is raised when parameters of the request are not valid.

" + "documentation":"

Deletes the server and the underlying AWS CloudFormation stacks (including the server's EC2 instance). When you run this command, the server state is updated to DELETING. After the server is deleted, it is no longer returned by DescribeServer requests. If the AWS CloudFormation stack cannot be deleted, the server cannot be deleted.

This operation is asynchronous.

An InvalidStateException is thrown when a server deletion is already in progress. A ResourceNotFoundException is thrown when the server does not exist. A ValidationException is raised when parameters of the request are not valid.

" }, "DescribeAccountAttributes":{ "name":"DescribeAccountAttributes", @@ -157,7 +158,7 @@ {"shape":"ResourceNotFoundException"}, {"shape":"InvalidNextTokenException"} ], - "documentation":"

Lists all configuration management servers that are identified with your account. Only the stored results from Amazon DynamoDB are returned. AWS OpsWorks for Chef Automate does not query other services.

This operation is synchronous.

A ResourceNotFoundException is thrown when the server does not exist. A ValidationException is raised when parameters of the request are not valid.

" + "documentation":"

Lists all configuration management servers that are identified with your account. Only the stored results from Amazon DynamoDB are returned. AWS OpsWorks CM does not query other services.

This operation is synchronous.

A ResourceNotFoundException is thrown when the server does not exist. A ValidationException is raised when parameters of the request are not valid.

" }, "DisassociateNode":{ "name":"DisassociateNode", @@ -172,7 +173,7 @@ {"shape":"ResourceNotFoundException"}, {"shape":"ValidationException"} ], - "documentation":"

Disassociates a node from a Chef server, and removes the node from the Chef server's managed nodes. After a node is disassociated, the node key pair is no longer valid for accessing the Chef API. For more information about how to associate a node, see AssociateNode.

A node can can only be disassociated from a server that is in a HEALTHY state. Otherwise, an InvalidStateException is thrown. A ResourceNotFoundException is thrown when the server does not exist. A ValidationException is raised when parameters of the request are not valid.

" + "documentation":"

Disassociates a node from an AWS OpsWorks CM server, and removes the node from the server's managed nodes. After a node is disassociated, the node key pair is no longer valid for accessing the configuration manager's API. For more information about how to associate a node, see AssociateNode.

A node can only be disassociated from a server that is in a HEALTHY state. Otherwise, an InvalidStateException is thrown. A ResourceNotFoundException is thrown when the server does not exist. A ValidationException is raised when parameters of the request are not valid.

" }, "RestoreServer":{ "name":"RestoreServer", @@ -232,7 +233,7 @@ {"shape":"ResourceNotFoundException"}, {"shape":"ValidationException"} ], - "documentation":"

Updates engine-specific attributes on a specified server. The server enters the MODIFYING state when this operation is in progress. Only one update can occur at a time. You can use this command to reset the Chef server's private key (CHEF_PIVOTAL_KEY).

This operation is asynchronous.

This operation can only be called for servers in HEALTHY or UNHEALTHY states. Otherwise, an InvalidStateException is raised. A ResourceNotFoundException is thrown when the server does not exist. A ValidationException is raised when parameters of the request are not valid.

" + "documentation":"

Updates engine-specific attributes on a specified server. The server enters the MODIFYING state when this operation is in progress. Only one update can occur at a time. You can use this command to reset a Chef server's private key (CHEF_PIVOTAL_KEY), a Chef server's admin password (CHEF_DELIVERY_ADMIN_PASSWORD), or a Puppet server's admin password (PUPPET_ADMIN_PASSWORD).

This operation is asynchronous.

This operation can only be called for servers in HEALTHY or UNHEALTHY states. Otherwise, an InvalidStateException is raised. A ResourceNotFoundException is thrown when the server does not exist. A ValidationException is raised when parameters of the request are not valid.

" } }, "shapes":{ @@ -273,11 +274,11 @@ }, "NodeName":{ "shape":"NodeName", - "documentation":"

The name of the Chef client node.

" + "documentation":"

The name of the node.

" }, "EngineAttributes":{ "shape":"EngineAttributes", - "documentation":"

Engine attributes used for associating the node.

Attributes accepted in a AssociateNode request:

" + "documentation":"

Engine attributes used for associating the node.

Attributes accepted in an AssociateNode request for Chef

Attributes accepted in an AssociateNode request for Puppet

" } } }, @@ -392,7 +393,7 @@ }, "ToolsVersion":{ "shape":"String", - "documentation":"

The version of AWS OpsWorks for Chef Automate-specific tools that is obtained from the server when the backup is created.

" + "documentation":"

The version of AWS OpsWorks CM-specific tools that is obtained from the server when the backup is created.

" }, "UserArn":{ "shape":"String", @@ -472,23 +473,23 @@ }, "Engine":{ "shape":"String", - "documentation":"

The configuration management engine to use. Valid values include Chef.

" + "documentation":"

The configuration management engine to use. Valid values include Chef and Puppet.

" }, "EngineModel":{ "shape":"String", - "documentation":"

The engine model, or option. Valid values include Single.

" + "documentation":"

The engine model of the server. Valid values in this release include Monolithic for Puppet and Single for Chef.

" }, "EngineVersion":{ "shape":"String", - "documentation":"

The major release version of the engine that you want to use. Values depend on the engine that you choose.

" + "documentation":"

The major release version of the engine that you want to use. For a Chef server, the valid value for EngineVersion is currently 12. For a Puppet server, the valid value is 2017.

" }, "EngineAttributes":{ "shape":"EngineAttributes", - "documentation":"

Optional engine attributes on a specified server.

Attributes accepted in a createServer request:

" + "documentation":"

Optional engine attributes on a specified server.

Attributes accepted in a Chef createServer request:

Attributes accepted in a Puppet createServer request:

" }, "BackupRetentionCount":{ "shape":"BackupRetentionCountDefinition", - "documentation":"

The number of automated backups that you want to keep. Whenever a new backup is created, AWS OpsWorks for Chef Automate deletes the oldest backups if this number is exceeded. The default value is 1.

" + "documentation":"

The number of automated backups that you want to keep. Whenever a new backup is created, AWS OpsWorks CM deletes the oldest backups if this number is exceeded. The default value is 1.

" }, "ServerName":{ "shape":"ServerName", @@ -500,7 +501,7 @@ }, "InstanceType":{ "shape":"String", - "documentation":"

The Amazon EC2 instance type to use. Valid values must be specified in the following format: ^([cm][34]|t2).* For example, m4.large. Valid values are t2.medium, m4.large, or m4.2xlarge.

" + "documentation":"

The Amazon EC2 instance type to use. For example, m4.large. Recommended instance types include t2.medium and greater, m4.*, or c4.xlarge and greater.

" }, "KeyPair":{ "shape":"KeyPair", @@ -508,27 +509,27 @@ }, "PreferredMaintenanceWindow":{ "shape":"TimeWindowDefinition", - "documentation":"

The start time for a one-hour period each week during which AWS OpsWorks for Chef Automate performs maintenance on the instance. Valid values must be specified in the following format: DDD:HH:MM. The specified time is in coordinated universal time (UTC). The default value is a random one-hour period on Tuesday, Wednesday, or Friday. See TimeWindowDefinition for more information.

Example: Mon:08:00, which represents a start time of every Monday at 08:00 UTC. (8:00 a.m.)

" + "documentation":"

The start time for a one-hour period each week during which AWS OpsWorks CM performs maintenance on the instance. Valid values must be specified in the following format: DDD:HH:MM. The specified time is in coordinated universal time (UTC). The default value is a random one-hour period on Tuesday, Wednesday, or Friday. See TimeWindowDefinition for more information.

Example: Mon:08:00, which represents a start time of every Monday at 08:00 UTC. (8:00 a.m.)

" }, "PreferredBackupWindow":{ "shape":"TimeWindowDefinition", - "documentation":"

The start time for a one-hour period during which AWS OpsWorks for Chef Automate backs up application-level data on your server if automated backups are enabled. Valid values must be specified in one of the following formats:

The specified time is in coordinated universal time (UTC). The default value is a random, daily start time.

Example: 08:00, which represents a daily start time of 08:00 UTC.

Example: Mon:08:00, which represents a start time of every Monday at 08:00 UTC. (8:00 a.m.)

" + "documentation":"

The start time for a one-hour period during which AWS OpsWorks CM backs up application-level data on your server if automated backups are enabled. Valid values must be specified in one of the following formats:

The specified time is in coordinated universal time (UTC). The default value is a random, daily start time.

Example: 08:00, which represents a daily start time of 08:00 UTC.

Example: Mon:08:00, which represents a start time of every Monday at 08:00 UTC. (8:00 a.m.)

" }, "SecurityGroupIds":{ "shape":"Strings", - "documentation":"

A list of security group IDs to attach to the Amazon EC2 instance. If you add this parameter, the specified security groups must be within the VPC that is specified by SubnetIds.

If you do not specify this parameter, AWS OpsWorks for Chef Automate creates one new security group that uses TCP ports 22 and 443, open to 0.0.0.0/0 (everyone).

" + "documentation":"

A list of security group IDs to attach to the Amazon EC2 instance. If you add this parameter, the specified security groups must be within the VPC that is specified by SubnetIds.

If you do not specify this parameter, AWS OpsWorks CM creates one new security group that uses TCP ports 22 and 443, open to 0.0.0.0/0 (everyone).

" }, "ServiceRoleArn":{ "shape":"ServiceRoleArn", - "documentation":"

The service role that the AWS OpsWorks for Chef Automate service backend uses to work with your account. Although the AWS OpsWorks management console typically creates the service role for you, if you are using the AWS CLI or API commands, run the service-role-creation.yaml AWS CloudFormation template, located at https://s3.amazonaws.com/opsworks-stuff/latest/service-role-creation.yaml. This template creates a CloudFormation stack that includes the service role that you need.

" + "documentation":"

The service role that the AWS OpsWorks CM service backend uses to work with your account. Although the AWS OpsWorks management console typically creates the service role for you, if you are using the AWS CLI or API commands, run the service-role-creation.yaml AWS CloudFormation template, located at https://s3.amazonaws.com/opsworks-cm-us-east-1-prod-default-assets/misc/opsworks-cm-roles.yaml. This template creates a CloudFormation stack that includes the service role and instance profile that you need.

" }, "SubnetIds":{ "shape":"Strings", - "documentation":"

The IDs of subnets in which to launch the server EC2 instance.

Amazon EC2-Classic customers: This field is required. All servers must run within a VPC. The VPC must have \"Auto Assign Public IP\" enabled.

EC2-VPC customers: This field is optional. If you do not specify subnet IDs, your EC2 instances are created in a default subnet that is selected by Amazon EC2. If you specify subnet IDs, the VPC must have \"Auto Assign Public IP\" enabled.

For more information about supported Amazon EC2 platforms, see Supported Platforms.

" + "documentation":"

The IDs of subnets in which to launch the server EC2 instance.

Amazon EC2-Classic customers: This field is required. All servers must run within a VPC. The VPC must have \"Auto Assign Public IP\" enabled.

EC2-VPC customers: This field is optional. If you do not specify subnet IDs, your EC2 instances are created in a default subnet that is selected by Amazon EC2. If you specify subnet IDs, the VPC must have \"Auto Assign Public IP\" enabled.

For more information about supported Amazon EC2 platforms, see Supported Platforms.

" }, "BackupId":{ "shape":"BackupId", - "documentation":"

If you specify this field, AWS OpsWorks for Chef Automate creates the server by using the backup represented by BackupId.

" + "documentation":"

If you specify this field, AWS OpsWorks CM creates the server by using the backup represented by BackupId.

" } } }, @@ -657,7 +658,10 @@ "ServerName" ], "members":{ - "NodeAssociationStatusToken":{"shape":"NodeAssociationStatusToken"}, + "NodeAssociationStatusToken":{ + "shape":"NodeAssociationStatusToken", + "documentation":"

The token returned in either the AssociateNodeResponse or the DisassociateNodeResponse.
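A hand-rolled polling loop equivalent to the NodeAssociated waiter added in waiters-2.json (poll every 15 seconds, up to 15 attempts). The OpsWorksCmClient and model class names are assumed from the v2 naming conventions; the server name and token value are placeholders.

```java
import software.amazon.awssdk.services.opsworkscm.OpsWorksCmClient;
import software.amazon.awssdk.services.opsworkscm.model.DescribeNodeAssociationStatusRequest;
import software.amazon.awssdk.services.opsworkscm.model.DescribeNodeAssociationStatusResponse;

public class NodeAssociatedPoller {
    public static void main(String[] args) throws InterruptedException {
        try (OpsWorksCmClient opsWorksCm = OpsWorksCmClient.create()) {
            DescribeNodeAssociationStatusRequest request = DescribeNodeAssociationStatusRequest.builder()
                    .serverName("MyServer")                              // placeholder
                    .nodeAssociationStatusToken("token-from-associate")  // token returned by AssociateNode or DisassociateNode
                    .build();

            // Same schedule as the NodeAssociated waiter: 15-second delay, at most 15 attempts.
            for (int attempt = 0; attempt < 15; attempt++) {
                DescribeNodeAssociationStatusResponse response = opsWorksCm.describeNodeAssociationStatus(request);
                String status = response.nodeAssociationStatusAsString();
                if ("SUCCESS".equals(status) || "FAILED".equals(status)) {
                    System.out.println("Node association finished with status " + status);
                    return;
                }
                Thread.sleep(15_000);
            }
            System.out.println("Node association still in progress after 15 attempts");
        }
    }
}
```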

" + }, "ServerName":{ "shape":"ServerName", "documentation":"

The name of the server from which to disassociate the node.

" @@ -670,6 +674,10 @@ "NodeAssociationStatus":{ "shape":"NodeAssociationStatus", "documentation":"

The status of the association or disassociation request.

Possible values:

" + }, + "EngineAttributes":{ + "shape":"EngineAttributes", + "documentation":"

Attributes specific to the node association. In Puppet, the attribute PUPPET_NODE_CERT contains the signed certificate (the result of the CSR).

" } } }, @@ -695,7 +703,7 @@ "members":{ "Servers":{ "shape":"Servers", - "documentation":"

Contains the response to a DescribeServers request.

" + "documentation":"

Contains the response to a DescribeServers request.

For Puppet Server: DescribeServersResponse$Servers$EngineAttributes contains PUPPET_API_CA_CERT. This is the PEM-encoded CA certificate that is used by the Puppet API over TCP port number 8140. The CA certificate is also used to sign node certificates.

" }, "NextToken":{ "shape":"String", @@ -716,11 +724,11 @@ }, "NodeName":{ "shape":"NodeName", - "documentation":"

The name of the Chef client node.

" + "documentation":"

The name of the client node.

" }, "EngineAttributes":{ "shape":"EngineAttributes", - "documentation":"

Engine attributes used for disassociating the node.

Attributes accepted in a DisassociateNode request:

" + "documentation":"

Engine attributes that are used for disassociating the node. No attributes are required for Puppet.

Attributes required in a DisassociateNode request for Chef

" } } }, @@ -819,7 +827,7 @@ "NodeAssociationStatusToken":{"type":"string"}, "NodeName":{ "type":"string", - "documentation":"

The node name that is used by chef-client for a new node. For more information, see the Chef Documentation.

", + "documentation":"

The node name that is used by chef-client or puppet-agent for a new node. We recommend that you use a unique FQDN as the hostname. For more information, see the Chef or Puppet documentation.

", "pattern":"^[\\-\\p{Alnum}_:.]+$" }, "ResourceAlreadyExistsException":{ @@ -907,19 +915,19 @@ }, "Engine":{ "shape":"String", - "documentation":"

The engine type of the server. The valid value in this release is Chef.

" + "documentation":"

The engine type of the server. Valid values in this release include Chef and Puppet.

" }, "EngineModel":{ "shape":"String", - "documentation":"

The engine model of the server. The valid value in this release is Single.

" + "documentation":"

The engine model of the server. Valid values in this release include Monolithic for Puppet and Single for Chef.

" }, "EngineAttributes":{ "shape":"EngineAttributes", - "documentation":"

The response of a createServer() request returns the master credential to access the server in EngineAttributes. These credentials are not stored by AWS OpsWorks for Chef Automate; they are returned only as part of the result of createServer().

Attributes returned in a createServer response:

" + "documentation":"

The response of a createServer() request returns the master credential to access the server in EngineAttributes. These credentials are not stored by AWS OpsWorks CM; they are returned only as part of the result of createServer().

Attributes returned in a createServer response for Chef

Attributes returned in a createServer response for Puppet

" }, "EngineVersion":{ "shape":"String", - "documentation":"

The engine version of the server. Because Chef is the engine available in this release, the valid value for EngineVersion is 12.

" + "documentation":"

The engine version of the server. For a Chef server, the valid value for EngineVersion is currently 12. For a Puppet server, the valid value is 2017.

" }, "InstanceProfileArn":{ "shape":"String", @@ -1037,6 +1045,10 @@ "ServerName":{ "shape":"ServerName", "documentation":"

The name of the server on which to run maintenance.

" + }, + "EngineAttributes":{ + "shape":"EngineAttributes", + "documentation":"

Engine attributes that are specific to the server on which you want to run maintenance.

" } } }, @@ -1131,5 +1143,5 @@ "exception":true } }, - "documentation":"AWS OpsWorks for Chef Automate

AWS OpsWorks for Chef Automate is a service that runs and manages configuration management servers.

Glossary of terms

Endpoints

AWS OpsWorks for Chef Automate supports the following endpoints, all HTTPS. You must connect to one of the following endpoints. Chef servers can only be accessed or managed within the endpoint in which they are created.

Throttling limits

All API operations allow for five requests per second with a burst of 10 requests per second.

" + "documentation":"AWS OpsWorks CM

AWS OpsWorks for configuration management (CM) is a service that runs and manages configuration management servers.

Glossary of terms

Endpoints

AWS OpsWorks CM supports the following endpoints, all HTTPS. You must connect to one of the following endpoints. Your servers can only be accessed or managed within the endpoint in which they are created.

Throttling limits

All API operations allow for five requests per second with a burst of 10 requests per second.

" } diff --git a/services/opsworkscm/src/main/resources/codegen-resources/waiters-2.json b/services/opsworkscm/src/main/resources/codegen-resources/waiters-2.json new file mode 100644 index 000000000000..f37dd040b81e --- /dev/null +++ b/services/opsworkscm/src/main/resources/codegen-resources/waiters-2.json @@ -0,0 +1,25 @@ +{ + "version": 2, + "waiters": { + "NodeAssociated": { + "delay": 15, + "maxAttempts": 15, + "operation": "DescribeNodeAssociationStatus", + "description": "Wait until node is associated or disassociated.", + "acceptors": [ + { + "expected": "SUCCESS", + "state": "success", + "matcher": "path", + "argument": "NodeAssociationStatus" + }, + { + "expected": "FAILED", + "state": "failure", + "matcher": "path", + "argument": "NodeAssociationStatus" + } + ] + } + } +} diff --git a/services/organizations/src/main/resources/codegen-resources/service-2.json b/services/organizations/src/main/resources/codegen-resources/service-2.json index 7f3ffe6cdfc6..8d36be89fc18 100644 --- a/services/organizations/src/main/resources/codegen-resources/service-2.json +++ b/services/organizations/src/main/resources/codegen-resources/service-2.json @@ -7,6 +7,7 @@ "protocol":"json", "serviceAbbreviation":"Organizations", "serviceFullName":"AWS Organizations", + "serviceId":"Organizations", "signatureVersion":"v4", "targetPrefix":"AWSOrganizationsV20161128", "timestampFormat":"unixTimestamp", @@ -31,9 +32,10 @@ {"shape":"InvalidInputException"}, {"shape":"ConcurrentModificationException"}, {"shape":"ServiceException"}, - {"shape":"TooManyRequestsException"} + {"shape":"TooManyRequestsException"}, + {"shape":"AccessDeniedForDependencyException"} ], - "documentation":"

Sends a response to the originator of a handshake agreeing to the action proposed by the handshake request.

This operation can be called only by the following principals when they also have the relevant IAM permissions:

After you accept a handshake, it continues to appear in the results of relevant APIs for only 30 days. After that it is deleted.

" + "documentation":"

Sends a response to the originator of a handshake agreeing to the action proposed by the handshake request.

This operation can be called only by the following principals when they also have the relevant IAM permissions:

After you accept a handshake, it continues to appear in the results of relevant APIs for only 30 days. After that it is deleted.

" }, "AttachPolicy":{ "name":"AttachPolicy", @@ -95,7 +97,7 @@ {"shape":"ServiceException"}, {"shape":"TooManyRequestsException"} ], - "documentation":"

Creates an AWS account that is automatically a member of the organization whose credentials made the request. This is an asynchronous request that AWS performs in the background. If you want to check the status of the request later, you need the OperationId response element from this operation to provide as a parameter to the DescribeCreateAccountStatus operation.

AWS Organizations preconfigures the new member account with a role (named OrganizationAccountAccessRole by default) that grants administrator permissions to the new account. Principals in the master account can assume the role. AWS Organizations clones the company name and address information for the new account from the organization's master account.

For more information about creating accounts, see Creating an AWS Account in Your Organization in the AWS Organizations User Guide.

You cannot remove accounts that are created with this operation from an organization. That also means that you cannot delete an organization that contains an account that is created with this operation.

When you create a member account with this operation, you can choose whether to create the account with the IAM User and Role Access to Billing Information switch enabled. If you enable it, IAM users and roles that have appropriate permissions can view billing information for the account. If you disable this, then only the account root user can access billing information. For information about how to disable this for an account, see Granting Access to Your Billing Information and Tools.

This operation can be called only from the organization's master account.

" + "documentation":"

Creates an AWS account that is automatically a member of the organization whose credentials made the request. This is an asynchronous request that AWS performs in the background. If you want to check the status of the request later, you need the OperationId response element from this operation to provide as a parameter to the DescribeCreateAccountStatus operation.

The user who calls the API for an invitation to join must have the organizations:CreateAccount permission. If you enabled all features in the organization, then the user must also have the iam:CreateServiceLinkedRole permission so that Organizations can create the required service-linked role named OrgsServiceLinkedRoleName. For more information, see AWS Organizations and Service-Linked Roles in the AWS Organizations User Guide.

The user in the master account who calls this API must also have the iam:CreateRole permission because AWS Organizations preconfigures the new member account with a role (named OrganizationAccountAccessRole by default) that grants users in the master account administrator permissions in the new member account. Principals in the master account can assume the role. AWS Organizations clones the company name and address information for the new account from the organization's master account.

This operation can be called only from the organization's master account.

For more information about creating accounts, see Creating an AWS Account in Your Organization in the AWS Organizations User Guide.

When you create an account in an organization using the AWS Organizations console, API, or CLI commands, the information required for the account to operate as a standalone account, such as a payment method and signing the End User License Agreement (EULA), is not automatically collected. If you must remove an account from your organization later, you can do so only after you provide the missing information. Follow the steps at To leave an organization when all required account information has not yet been provided in the AWS Organizations User Guide.

When you create a member account with this operation, you can choose whether to create the account with the IAM User and Role Access to Billing Information switch enabled. If you enable it, IAM users and roles that have appropriate permissions can view billing information for the account. If you disable this, then only the account root user can access billing information. For information about how to disable this for an account, see Granting Access to Your Billing Information and Tools.

This operation can be called only from the organization's master account.

If you get an exception that indicates that you exceeded your account limits for the organization or that you can't add an account because your organization is still initializing, please contact AWS Customer Support.

" }, "CreateOrganization":{ "name":"CreateOrganization", @@ -112,7 +114,8 @@ {"shape":"ConstraintViolationException"}, {"shape":"InvalidInputException"}, {"shape":"ServiceException"}, - {"shape":"TooManyRequestsException"} + {"shape":"TooManyRequestsException"}, + {"shape":"AccessDeniedForDependencyException"} ], "documentation":"

Creates an AWS organization. The account whose user is calling the CreateOrganization operation automatically becomes the master account of the new organization.

This operation must be called using credentials from the account that is to become the new organization's master account. The principal must also have the relevant IAM permissions.

By default (or if you set the FeatureSet parameter to ALL), the new organization is created with all features enabled and service control policies automatically enabled in the root. If you instead choose to create the organization supporting only the consolidated billing features by setting the FeatureSet parameter to CONSOLIDATED_BILLING, then no policy types are enabled by default and you cannot use organization policies.

" }, @@ -194,7 +197,7 @@ {"shape":"ServiceException"}, {"shape":"TooManyRequestsException"} ], - "documentation":"

Deletes the organization. You can delete an organization only by using credentials from the master account. The organization must be empty of member accounts, OUs, and policies.

If you create any accounts using Organizations operations or the Organizations console, you can't remove those accounts from the organization, which means that you can't delete the organization.

" + "documentation":"

Deletes the organization. You can delete an organization only by using credentials from the master account. The organization must be empty of member accounts, OUs, and policies.

" }, "DeleteOrganizationalUnit":{ "name":"DeleteOrganizationalUnit", @@ -361,6 +364,24 @@ ], "documentation":"

Detaches a policy from a target root, organizational unit, or account. If the policy being detached is a service control policy (SCP), the changes to permissions for IAM users and roles in affected accounts are immediate.

Note: Every root, OU, and account must have at least one SCP attached. If you want to replace the default FullAWSAccess policy with one that limits the permissions that can be delegated, then you must attach the replacement policy before you can remove the default one. This is the authorization strategy of whitelisting. If you instead attach a second SCP and leave the FullAWSAccess SCP still attached, and specify \"Effect\": \"Deny\" in the second SCP to override the \"Effect\": \"Allow\" in the FullAWSAccess policy (or any other attached SCP), then you are using the authorization strategy of blacklisting.

This operation can be called only from the organization's master account.

" }, + "DisableAWSServiceAccess":{ + "name":"DisableAWSServiceAccess", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"DisableAWSServiceAccessRequest"}, + "errors":[ + {"shape":"AccessDeniedException"}, + {"shape":"AWSOrganizationsNotInUseException"}, + {"shape":"ConcurrentModificationException"}, + {"shape":"ConstraintViolationException"}, + {"shape":"InvalidInputException"}, + {"shape":"ServiceException"}, + {"shape":"TooManyRequestsException"} + ], + "documentation":"

Disables the integration of an AWS service (the service that is specified by ServicePrincipal) with AWS Organizations. When you disable integration, the specified service can no longer create a service-linked role in new accounts in your organization. This means the service can't perform operations on your behalf on any new accounts in your organization. The service can still perform operations in older accounts until the service completes its clean-up from AWS Organizations.

We recommend that you disable integration between AWS Organizations and the specified AWS service by using the console or commands that are provided by the specified service. Doing so ensures that the other service is aware that it can clean up any resources that are required only for the integration. How the service cleans up its resources in the organization's accounts depends on that service. For more information, see the documentation for the other AWS service.

After you perform the DisableAWSServiceAccess operation, the specified service can no longer perform operations in your organization's accounts unless the operations are explicitly permitted by the IAM policies that are attached to your roles.

For more information about integrating other services with AWS Organizations, including the list of services that work with Organizations, see Integrating AWS Organizations with Other AWS Services in the AWS Organizations User Guide.

This operation can be called only from the organization's master account.
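A minimal boto3 sketch of the call; the service principal shown is the placeholder form used elsewhere in this model.

```python
import boto3

client = boto3.client("organizations", region_name="us-east-1")

# Stop the service behind this (placeholder) principal from creating
# service-linked roles in new member accounts.
client.disable_aws_service_access(ServicePrincipal="servicename.amazonaws.com")
```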

" + }, "DisablePolicyType":{ "name":"DisablePolicyType", "http":{ @@ -380,7 +401,25 @@ {"shape":"ServiceException"}, {"shape":"TooManyRequestsException"} ], - "documentation":"

Disables an organizational control policy type in a root. A poicy of a certain type can be attached to entities in a root only if that type is enabled in the root. After you perform this operation, you no longer can attach policies of the specified type to that root or to any OU or account in that root. You can undo this by using the EnablePolicyType operation.

This operation can be called only from the organization's master account.

" + "documentation":"

Disables an organizational control policy type in a root. A policy of a certain type can be attached to entities in a root only if that type is enabled in the root. After you perform this operation, you no longer can attach policies of the specified type to that root or to any OU or account in that root. You can undo this by using the EnablePolicyType operation.

This operation can be called only from the organization's master account.

" + }, + "EnableAWSServiceAccess":{ + "name":"EnableAWSServiceAccess", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"EnableAWSServiceAccessRequest"}, + "errors":[ + {"shape":"AccessDeniedException"}, + {"shape":"AWSOrganizationsNotInUseException"}, + {"shape":"ConcurrentModificationException"}, + {"shape":"ConstraintViolationException"}, + {"shape":"InvalidInputException"}, + {"shape":"ServiceException"}, + {"shape":"TooManyRequestsException"} + ], + "documentation":"

Enables the integration of an AWS service (the service that is specified by ServicePrincipal) with AWS Organizations. When you enable integration, you allow the specified service to create a service-linked role in all the accounts in your organization. This allows the service to perform operations on your behalf in your organization and its accounts.

We recommend that you enable integration between AWS Organizations and the specified AWS service by using the console or commands that are provided by the specified service. Doing so ensures that the service is aware that it can create the resources that are required for the integration. How the service creates those resources in the organization's accounts depends on that service. For more information, see the documentation for the other AWS service.

For more information about enabling services to integrate with AWS Organizations, see Integrating AWS Organizations with Other AWS Services in the AWS Organizations User Guide.

This operation can be called only from the organization's master account and only if the organization has enabled all features.
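The corresponding enable call, again sketched with boto3 and a placeholder service principal:

```python
import boto3

client = boto3.client("organizations", region_name="us-east-1")

# Allow the service behind this (placeholder) principal to create
# service-linked roles in the organization's accounts.
client.enable_aws_service_access(ServicePrincipal="servicename.amazonaws.com")
```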

" }, "EnableAllFeatures":{ "name":"EnableAllFeatures", @@ -442,7 +481,7 @@ {"shape":"ServiceException"}, {"shape":"TooManyRequestsException"} ], - "documentation":"

Sends an invitation to another account to join your organization as a member account. Organizations sends email on your behalf to the email address that is associated with the other account's owner. The invitation is implemented as a Handshake whose details are in the response.

You can invite AWS accounts only from the same reseller as the master account. For example, if your organization's master account was created by Amazon Internet Services Pvt. Ltd (AISPL), an AWS reseller in India, then you can only invite other AISPL accounts to your organization. You can't combine accounts from AISPL and AWS. For more information, see Consolidated Billing in India.

This operation can be called only from the organization's master account.

" + "documentation":"

Sends an invitation to another account to join your organization as a member account. Organizations sends email on your behalf to the email address that is associated with the other account's owner. The invitation is implemented as a Handshake whose details are in the response.

You can invite AWS accounts only from the same seller as the master account. For example, if your organization's master account was created by Amazon Internet Services Pvt. Ltd (AISPL), an AWS seller in India, then you can only invite other AISPL accounts to your organization. You can't combine accounts from AISPL and AWS, or any other AWS seller. For more information, see Consolidated Billing in India.

This operation can be called only from the organization's master account.

If you get an exception that indicates that you exceeded your account limits for the organization, or that you can't add an account because your organization is still initializing, please contact AWS Customer Support.

" }, "LeaveOrganization":{ "name":"LeaveOrganization", @@ -461,7 +500,25 @@ {"shape":"ServiceException"}, {"shape":"TooManyRequestsException"} ], - "documentation":"

Removes a member account from its parent organization. This version of the operation is performed by the account that wants to leave. To remove a member account as a user in the master account, use RemoveAccountFromOrganization instead.

This operation can be called only from a member account in the organization.

  • The master account in an organization with all features enabled can set service control policies (SCPs) that can restrict what administrators of member accounts can do, including preventing them from successfully calling LeaveOrganization and leaving the organization.

  • If you created the account using the AWS Organizations console, the Organizations API, or the Organizations CLI commands, then you cannot remove the account.

  • You can leave an organization only after you enable IAM user access to billing in your account. For more information, see Activating Access to the Billing and Cost Management Console in the AWS Billing and Cost Management User Guide.

" + "documentation":"

Removes a member account from its parent organization. This version of the operation is performed by the account that wants to leave. To remove a member account as a user in the master account, use RemoveAccountFromOrganization instead.

This operation can be called only from a member account in the organization.

  • The master account in an organization with all features enabled can set service control policies (SCPs) that can restrict what administrators of member accounts can do, including preventing them from successfully calling LeaveOrganization and leaving the organization.

  • You can leave an organization as a member account only if the account is configured with the information required to operate as a standalone account. When you create an account in an organization using the AWS Organizations console, API, or CLI commands, the information required of standalone accounts is not automatically collected. For each account that you want to make standalone, you must accept the End User License Agreement (EULA), choose a support plan, provide and verify the required contact information, and provide a current payment method. AWS uses the payment method to charge for any billable (not free tier) AWS activity that occurs while the account is not attached to an organization. Follow the steps at To leave an organization when all required account information has not yet been provided in the AWS Organizations User Guide.

  • You can leave an organization only after you enable IAM user access to billing in your account. For more information, see Activating Access to the Billing and Cost Management Console in the AWS Billing and Cost Management User Guide.

" + }, + "ListAWSServiceAccessForOrganization":{ + "name":"ListAWSServiceAccessForOrganization", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"ListAWSServiceAccessForOrganizationRequest"}, + "output":{"shape":"ListAWSServiceAccessForOrganizationResponse"}, + "errors":[ + {"shape":"AccessDeniedException"}, + {"shape":"AWSOrganizationsNotInUseException"}, + {"shape":"ConstraintViolationException"}, + {"shape":"InvalidInputException"}, + {"shape":"ServiceException"}, + {"shape":"TooManyRequestsException"} + ], + "documentation":"

Returns a list of the AWS services that you enabled to integrate with your organization. After a service on this list creates the resources that it requires for the integration, it can perform operations on your organization and its accounts.

For more information about integrating other services with AWS Organizations, including the list of services that currently work with Organizations, see Integrating AWS Organizations with Other AWS Services in the AWS Organizations User Guide.

This operation can be called only from the organization's master account.

" }, "ListAccounts":{ "name":"ListAccounts", @@ -496,7 +553,7 @@ {"shape":"ServiceException"}, {"shape":"TooManyRequestsException"} ], - "documentation":"

Lists the accounts in an organization that are contained by the specified target root or organizational unit (OU). If you specify the root, you get a list of all the accounts that are not in any OU. If you specify an OU, you get a list of all the accounts in only that OU, and not in any child OUs. To get a list of all accounts in the organization, use the ListAccounts operation.

" + "documentation":"

Lists the accounts in an organization that are contained by the specified target root or organizational unit (OU). If you specify the root, you get a list of all the accounts that are not in any OU. If you specify an OU, you get a list of all the accounts in only that OU, and not in any child OUs. To get a list of all accounts in the organization, use the ListAccounts operation.

This operation can be called only from the organization's master account.
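A short boto3 sketch of listing the accounts that sit directly under the root (pagination omitted for brevity):

```python
import boto3

client = boto3.client("organizations", region_name="us-east-1")

root_id = client.list_roots()["Roots"][0]["Id"]

# Accounts directly under the root, that is, accounts not placed in any OU.
for account in client.list_accounts_for_parent(ParentId=root_id)["Accounts"]:
    print(account["Id"], account["Name"])
```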

" }, "ListChildren":{ "name":"ListChildren", @@ -514,7 +571,7 @@ {"shape":"ServiceException"}, {"shape":"TooManyRequestsException"} ], - "documentation":"

Lists all of the OUs or accounts that are contained in the specified parent OU or root. This operation, along with ListParents enables you to traverse the tree structure that makes up this root.

" + "documentation":"

Lists all of the OUs or accounts that are contained in the specified parent OU or root. This operation, along with ListParents, enables you to traverse the tree structure that makes up this root.

This operation can be called only from the organization's master account.
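A rough boto3 sketch of the traversal this enables, walking OUs and accounts down from the root (NextToken paging omitted for brevity):

```python
import boto3

client = boto3.client("organizations", region_name="us-east-1")

def walk(parent_id, depth=0):
    """Recursively print the OU/account tree under parent_id."""
    for child_type in ("ORGANIZATIONAL_UNIT", "ACCOUNT"):
        children = client.list_children(ParentId=parent_id,
                                        ChildType=child_type)["Children"]
        for child in children:
            print("  " * depth + child_type + " " + child["Id"])
            if child_type == "ORGANIZATIONAL_UNIT":
                walk(child["Id"], depth + 1)

walk(client.list_roots()["Roots"][0]["Id"])
```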

" }, "ListCreateAccountStatus":{ "name":"ListCreateAccountStatus", @@ -713,7 +770,7 @@ {"shape":"ServiceException"}, {"shape":"TooManyRequestsException"} ], - "documentation":"

Removes the specified account from the organization.

The removed account becomes a stand-alone account that is not a member of any organization. It is no longer subject to any policies and is responsible for its own bill payments. The organization's master account is no longer charged for any expenses accrued by the member account after it is removed from the organization.

This operation can be called only from the organization's master account. Member accounts can remove themselves with LeaveOrganization instead.

  • You can remove only accounts that were created outside your organization and invited to join. If you created the account using the AWS Organizations console, the Organizations API, or the Organizations CLI commands, then you cannot remove the account.

  • You can remove a member account only after you enable IAM user access to billing in the member account. For more information, see Activating Access to the Billing and Cost Management Console in the AWS Billing and Cost Management User Guide.

" + "documentation":"

Removes the specified account from the organization.

The removed account becomes a standalone account that is not a member of any organization. It is no longer subject to any policies and is responsible for its own bill payments. The organization's master account is no longer charged for any expenses accrued by the member account after it is removed from the organization.

This operation can be called only from the organization's master account. Member accounts can remove themselves with LeaveOrganization instead.

  • You can remove an account from your organization only if the account is configured with the information required to operate as a standalone account. When you create an account in an organization using the AWS Organizations console, API, or CLI commands, the information required of standalone accounts is not automatically collected. For an account that you want to make standalone, you must accept the End User License Agreement (EULA), choose a support plan, provide and verify the required contact information, and provide a current payment method. AWS uses the payment method to charge for any billable (not free tier) AWS activity that occurs while the account is not attached to an organization. To remove an account that does not yet have this information, you must sign in as the member account and follow the steps at To leave an organization when all required account information has not yet been provided in the AWS Organizations User Guide.

  • You can remove a member account only after you enable IAM user access to billing in the member account. For more information, see Activating Access to the Billing and Cost Management Console in the AWS Billing and Cost Management User Guide.
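Tying the two removal paths above together in a boto3 sketch (the account ID is a placeholder, and the standalone-account prerequisites described above must already be met):

```python
import boto3

client = boto3.client("organizations", region_name="us-east-1")

# Called from the master account to remove a member account.
client.remove_account_from_organization(AccountId="111111111111")

# Alternatively, called from within the member account itself:
# client.leave_organization()
```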

" }, "UpdateOrganizationalUnit":{ "name":"UpdateOrganizationalUnit", @@ -794,6 +851,19 @@ "documentation":"

You don't have permissions to perform the requested operation. The user or role that is making the request must have at least one IAM permissions policy attached that grants the required permissions. For more information, see Access Management in the IAM User Guide.

", "exception":true }, + "AccessDeniedForDependencyException":{ + "type":"structure", + "members":{ + "Message":{"shape":"ExceptionMessage"}, + "Reason":{"shape":"AccessDeniedForDependencyExceptionReason"} + }, + "documentation":"

The operation that you attempted requires you to have the iam:CreateServiceLinkedRole permission so that Organizations can create the required service-linked role. You do not have that permission.

", + "exception":true + }, + "AccessDeniedForDependencyExceptionReason":{ + "type":"string", + "enum":["ACCESS_DENIED_DURING_CREATE_SERVICE_LINKED_ROLE"] + }, "Account":{ "type":"structure", "members":{ @@ -873,7 +943,8 @@ "enum":[ "INVITE", "ENABLE_ALL_FEATURES", - "APPROVE_ALL_FEATURES" + "APPROVE_ALL_FEATURES", + "ADD_ORGANIZATIONS_SERVICE_LINKED_ROLE" ] }, "AlreadyInOrganizationException":{ @@ -972,7 +1043,7 @@ "Message":{"shape":"ExceptionMessage"}, "Reason":{"shape":"ConstraintViolationExceptionReason"} }, - "documentation":"

Performing this operation violates a minimum or maximum value limit. For example, attempting to removing the last SCP from an OU or root, inviting or creating too many accounts to the organization, or attaching too many policies to an account, OU, or root. This exception includes a reason that contains additional information about the violated limit:

", + "documentation":"

Performing this operation violates a minimum or maximum value limit. For example, attempting to remove the last SCP from an OU or root, inviting or creating too many accounts in the organization, or attaching too many policies to an account, OU, or root. This exception includes a reason that contains additional information about the violated limit:

Some of the reasons in the following list might not be applicable to this specific API or operation:

", "exception":true }, "ConstraintViolationExceptionReason":{ @@ -991,7 +1062,9 @@ "MASTER_ACCOUNT_PAYMENT_INSTRUMENT_REQUIRED", "MEMBER_ACCOUNT_PAYMENT_INSTRUMENT_REQUIRED", "ACCOUNT_CREATION_RATE_LIMIT_EXCEEDED", - "MASTER_ACCOUNT_ADDRESS_DOES_NOT_MATCH_MARKETPLACE" + "MASTER_ACCOUNT_ADDRESS_DOES_NOT_MATCH_MARKETPLACE", + "MASTER_ACCOUNT_MISSING_CONTACT_INFO", + "ORGANIZATION_NOT_IN_ALL_FEATURES_MODE" ] }, "CreateAccountFailureReason":{ @@ -1001,6 +1074,7 @@ "EMAIL_ALREADY_EXISTS", "INVALID_ADDRESS", "INVALID_EMAIL", + "CONCURRENT_ACCOUNT_MODIFICATION", "INTERNAL_FAILURE" ] }, @@ -1013,7 +1087,7 @@ "members":{ "Email":{ "shape":"Email", - "documentation":"

The email address of the owner to assign to the new member account. This email address must not already be associated with another AWS account.

" + "documentation":"

The email address of the owner to assign to the new member account. This email address must not already be associated with another AWS account. You must use a valid email address to complete account creation. You cannot access the root user of the account or remove an account that was created with an invalid email address.

" }, "AccountName":{ "shape":"AccountName", @@ -1348,6 +1422,16 @@ } } }, + "DisableAWSServiceAccessRequest":{ + "type":"structure", + "required":["ServicePrincipal"], + "members":{ + "ServicePrincipal":{ + "shape":"ServicePrincipal", + "documentation":"

The service principal name of the AWS service for which you want to disable integration with your organization. This is typically in the form of a URL, such as service-abbreviation.amazonaws.com.

" + } + } + }, "DisablePolicyTypeRequest":{ "type":"structure", "required":[ @@ -1357,7 +1441,7 @@ "members":{ "RootId":{ "shape":"RootId", - "documentation":"

The unique identifier (ID) of the root in which you want to disable a policy type. You can get the ID from the ListPolicies operation.

The regex pattern for a root ID string requires \"r-\" followed by from 4 to 32 lower-case letters or digits.

" + "documentation":"

The unique identifier (ID) of the root in which you want to disable a policy type. You can get the ID from the ListRoots operation.

The regex pattern for a root ID string requires \"r-\" followed by from 4 to 32 lower-case letters or digits.
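For example, a boto3 sketch that looks up the root ID with ListRoots and then disables SCPs in that root:

```python
import boto3

client = boto3.client("organizations", region_name="us-east-1")

# Root IDs follow the "r-" pattern described above and come from ListRoots.
root_id = client.list_roots()["Roots"][0]["Id"]

client.disable_policy_type(RootId=root_id,
                           PolicyType="SERVICE_CONTROL_POLICY")
```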

" }, "PolicyType":{ "shape":"PolicyType", @@ -1421,6 +1505,16 @@ "pattern":"[^\\s@]+@[^\\s@]+\\.[^\\s@]+", "sensitive":true }, + "EnableAWSServiceAccessRequest":{ + "type":"structure", + "required":["ServicePrincipal"], + "members":{ + "ServicePrincipal":{ + "shape":"ServicePrincipal", + "documentation":"

The service principal name of the AWS service for which you want to enable integration with your organization. This is typically in the form of a URL, such as service-abbreviation.amazonaws.com.

" + } + } + }, "EnableAllFeaturesRequest":{ "type":"structure", "members":{ @@ -1461,6 +1555,24 @@ } } }, + "EnabledServicePrincipal":{ + "type":"structure", + "members":{ + "ServicePrincipal":{ + "shape":"ServicePrincipal", + "documentation":"

The name of the service principal. This is typically in the form of a URL, such as servicename.amazonaws.com.

" + }, + "DateEnabled":{ + "shape":"Timestamp", + "documentation":"

The date that the service principal was enabled for integration with AWS Organizations.

" + } + }, + "documentation":"

A structure that contains details of a service principal that is enabled to integrate with AWS Organizations.

" + }, + "EnabledServicePrincipals":{ + "type":"list", + "member":{"shape":"EnabledServicePrincipal"} + }, "ExceptionMessage":{"type":"string"}, "ExceptionType":{"type":"string"}, "FinalizingOrganizationException":{ @@ -1504,7 +1616,7 @@ }, "Action":{ "shape":"ActionType", - "documentation":"

The type of handshake, indicating what action occurs when the recipient accepts the handshake.

" + "documentation":"

The type of handshake, indicating what action occurs when the recipient accepts the handshake. The following handshake types are supported:

" }, "Resources":{ "shape":"HandshakeResources", @@ -1531,7 +1643,7 @@ "Message":{"shape":"ExceptionMessage"}, "Reason":{"shape":"HandshakeConstraintViolationExceptionReason"} }, - "documentation":"

The requested operation would violate the constraint identified in the reason code.

", + "documentation":"

The requested operation would violate the constraint identified in the reason code.

Some of the reasons in the following list might not be applicable to this specific API or operation:

", "exception":true }, "HandshakeConstraintViolationExceptionReason":{ @@ -1584,6 +1696,10 @@ }, "HandshakeParty":{ "type":"structure", + "required":[ + "Id", + "Type" + ], "members":{ "Id":{ "shape":"HandshakePartyId", @@ -1685,7 +1801,7 @@ "Message":{"shape":"ExceptionMessage"}, "Reason":{"shape":"InvalidInputExceptionReason"} }, - "documentation":"

The requested operation failed because you provided invalid values for one or more of the request parameters. This exception includes a reason that contains additional information about the violated limit:

", + "documentation":"

The requested operation failed because you provided invalid values for one or more of the request parameters. This exception includes a reason that contains additional information about the violated limit:

Some of the reasons in the following list might not be applicable to this specific API or operation:

", "exception":true }, "InvalidInputExceptionReason":{ @@ -1707,7 +1823,8 @@ "INVALID_NEXT_TOKEN", "MAX_LIMIT_EXCEEDED_FILTER", "MOVING_ACCOUNT_BETWEEN_DIFFERENT_ROOTS", - "INVALID_FULL_NAME_TARGET" + "INVALID_FULL_NAME_TARGET", + "UNRECOGNIZED_SERVICE_PRINCIPAL" ] }, "InviteAccountToOrganizationRequest":{ @@ -1716,7 +1833,7 @@ "members":{ "Target":{ "shape":"HandshakeParty", - "documentation":"

The identifier (ID) of the AWS account that you want to invite to join your organization. This is a JSON object that contains the following elements:

{ \"Type\": \"ACCOUNT\", \"Id\": \"< account id number >\" }

If you use the AWS CLI, you can submit this as a single string, similar to the following example:

--target id=123456789012,type=ACCOUNT

If you specify \"Type\": \"ACCOUNT\", then you must provide the AWS account ID number as the Id. If you specify \"Type\": \"EMAIL\", then you must specify the email address that is associated with the account.

--target id=bill@example.com,type=EMAIL

" + "documentation":"

The identifier (ID) of the AWS account that you want to invite to join your organization. This is a JSON object that contains the following elements:

{ \"Type\": \"ACCOUNT\", \"Id\": \"< account id number >\" }

If you use the AWS CLI, you can submit this as a single string, similar to the following example:

--target Id=123456789012,Type=ACCOUNT

If you specify \"Type\": \"ACCOUNT\", then you must provide the AWS account ID number as the Id. If you specify \"Type\": \"EMAIL\", then you must specify the email address that is associated with the account.

--target Id=bill@example.com,Type=EMAIL
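The same Target structure expressed as a boto3 call (the account ID matches the placeholder used above):

```python
import boto3

client = boto3.client("organizations", region_name="us-east-1")

handshake = client.invite_account_to_organization(
    Target={"Id": "123456789012", "Type": "ACCOUNT"},
    Notes="Please join our organization.",
)["Handshake"]
print(handshake["Id"], handshake["State"])
```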

" }, "Notes":{ "shape":"HandshakeNotes", @@ -1733,6 +1850,32 @@ } } }, + "ListAWSServiceAccessForOrganizationRequest":{ + "type":"structure", + "members":{ + "NextToken":{ + "shape":"NextToken", + "documentation":"

Use this parameter if you receive a NextToken response in a previous request that indicates that there is more output available. Set it to the value of the previous call's NextToken response to indicate where the output should continue from.

" + }, + "MaxResults":{ + "shape":"MaxResults", + "documentation":"

(Optional) Use this to limit the number of results you want included in the response. If you do not include this parameter, it defaults to a value that is specific to the operation. If additional items exist beyond the maximum you specify, the NextToken response element is present and has a value (is not null). Include that value as the NextToken request parameter in the next call to the operation to get the next part of the results. Note that Organizations might return fewer results than the maximum even when there are more results available. You should check NextToken after every operation to ensure that you receive all of the results.

" + } + } + }, + "ListAWSServiceAccessForOrganizationResponse":{ + "type":"structure", + "members":{ + "EnabledServicePrincipals":{ + "shape":"EnabledServicePrincipals", + "documentation":"

A list of the service principals for the services that are enabled to integrate with your organization. Each principal is a structure that includes the name and the date that it was enabled for integration with AWS Organizations.

" + }, + "NextToken":{ + "shape":"NextToken", + "documentation":"

If present, this value indicates that there is more output available than is included in the current response. Use this value in the NextToken request parameter in a subsequent call to the operation to get the next part of the output. You should repeat this until the NextToken response element comes back as null.
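A boto3 sketch of the paging loop described by these two fields, repeating until NextToken is no longer returned:

```python
import boto3

client = boto3.client("organizations", region_name="us-east-1")

kwargs = {"MaxResults": 10}
while True:
    page = client.list_aws_service_access_for_organization(**kwargs)
    for principal in page["EnabledServicePrincipals"]:
        print(principal["ServicePrincipal"], principal["DateEnabled"])
    token = page.get("NextToken")
    if not token:
        break
    kwargs["NextToken"] = token
```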

" + } + } + }, "ListAccountsForParentRequest":{ "type":"structure", "required":["ParentId"], @@ -2543,6 +2686,11 @@ "documentation":"

AWS Organizations can't complete your request because of an internal service error. Try again later.

", "exception":true }, + "ServicePrincipal":{ + "type":"string", + "max":1000, + "min":1 + }, "SourceParentNotFoundException":{ "type":"structure", "members":{ @@ -2637,5 +2785,5 @@ } } }, - "documentation":"AWS Organizations API Reference

AWS Organizations is a web service that enables you to consolidate your multiple AWS accounts into an organization and centrally manage your accounts and their resources.

This guide provides descriptions of the Organizations API. For more information about using this service, see the AWS Organizations User Guide.

API Version

This version of the Organizations API Reference documents the Organizations API version 2016-11-28.

As an alternative to using the API directly, you can use one of the AWS SDKs, which consist of libraries and sample code for various programming languages and platforms (Java, Ruby, .NET, iOS, Android, and more). The SDKs provide a convenient way to create programmatic access to AWS Organizations. For example, the SDKs take care of cryptographically signing requests, managing errors, and retrying requests automatically. For more information about the AWS SDKs, including how to download and install them, see Tools for Amazon Web Services.

We recommend that you use the AWS SDKs to make programmatic API calls to Organizations. However, you also can use the Organizations Query API to make direct calls to the Organizations web service. To learn more about the Organizations Query API, see Making Query Requests in the AWS Organizations User Guide. Organizations supports GET and POST requests for all actions. That is, the API does not require you to use GET for some actions and POST for others. However, GET requests are subject to the limitation size of a URL. Therefore, for operations that require larger sizes, use a POST request.

Signing Requests

When you send HTTP requests to AWS, you must sign the requests so that AWS can identify who sent them. You sign requests with your AWS access key, which consists of an access key ID and a secret access key. We strongly recommend that you do not create an access key for your root account. Anyone who has the access key for your root account has unrestricted access to all the resources in your account. Instead, create an access key for an IAM user account that has administrative privileges. As another option, use AWS Security Token Service to generate temporary security credentials, and use those credentials to sign requests.

To sign requests, we recommend that you use Signature Version 4. If you have an existing application that uses Signature Version 2, you do not have to update it to use Signature Version 4. However, some operations now require Signature Version 4. The documentation for operations that require version 4 indicate this requirement.

When you use the AWS Command Line Interface (AWS CLI) or one of the AWS SDKs to make requests to AWS, these tools automatically sign the requests for you with the access key that you specify when you configure the tools.

In this release, each organization can have only one root. In a future release, a single organization will support multiple roots.

Support and Feedback for AWS Organizations

We welcome your feedback. Send your comments to feedback-awsorganizations@amazon.com or post your feedback and questions in our private AWS Organizations support forum. If you don't have access to the forum, send a request for access to the email address, along with your forum user ID. For more information about the AWS support forums, see Forums Help.

Endpoint to Call When Using the CLI or the AWS API

For the current release of Organizations, you must specify the us-east-1 region for all AWS API and CLI calls. You can do this in the CLI by using these parameters and commands:

For the various SDKs used to call the APIs, see the documentation for the SDK of interest to learn how to direct the requests to a specific endpoint. For more information, see Regions and Endpoints in the AWS General Reference.

How examples are presented

The JSON returned by the AWS Organizations service as response to your requests is returned as a single long string without line breaks or formatting whitespace. Both line breaks and whitespace are included in the examples in this guide to improve readability. When example input parameters also would result in long strings that would extend beyond the screen, we insert line breaks to enhance readability. You should always submit the input as a single JSON text string.

Recording API Requests

AWS Organizations supports AWS CloudTrail, a service that records AWS API calls for your AWS account and delivers log files to an Amazon S3 bucket. By using information collected by AWS CloudTrail, you can determine which requests were successfully made to Organizations, who made the request, when it was made, and so on. For more about AWS Organizations and its support for AWS CloudTrail, see Logging AWS Organizations Events with AWS CloudTrail in the AWS Organizations User Guide. To learn more about CloudTrail, including how to turn it on and find your log files, see the AWS CloudTrail User Guide.

" + "documentation":"AWS Organizations API Reference

AWS Organizations is a web service that enables you to consolidate your multiple AWS accounts into an organization and centrally manage your accounts and their resources.

This guide provides descriptions of the Organizations API. For more information about using this service, see the AWS Organizations User Guide.

API Version

This version of the Organizations API Reference documents the Organizations API version 2016-11-28.

As an alternative to using the API directly, you can use one of the AWS SDKs, which consist of libraries and sample code for various programming languages and platforms (Java, Ruby, .NET, iOS, Android, and more). The SDKs provide a convenient way to create programmatic access to AWS Organizations. For example, the SDKs take care of cryptographically signing requests, managing errors, and retrying requests automatically. For more information about the AWS SDKs, including how to download and install them, see Tools for Amazon Web Services.

We recommend that you use the AWS SDKs to make programmatic API calls to Organizations. However, you also can use the Organizations Query API to make direct calls to the Organizations web service. To learn more about the Organizations Query API, see Making Query Requests in the AWS Organizations User Guide. Organizations supports GET and POST requests for all actions. That is, the API does not require you to use GET for some actions and POST for others. However, GET requests are subject to the size limitation of a URL. Therefore, for operations that require larger request sizes, use a POST request.

Signing Requests

When you send HTTP requests to AWS, you must sign the requests so that AWS can identify who sent them. You sign requests with your AWS access key, which consists of an access key ID and a secret access key. We strongly recommend that you do not create an access key for your root account. Anyone who has the access key for your root account has unrestricted access to all the resources in your account. Instead, create an access key for an IAM user account that has administrative privileges. As another option, use AWS Security Token Service to generate temporary security credentials, and use those credentials to sign requests.

To sign requests, we recommend that you use Signature Version 4. If you have an existing application that uses Signature Version 2, you do not have to update it to use Signature Version 4. However, some operations now require Signature Version 4. The documentation for operations that require version 4 indicates this requirement.

When you use the AWS Command Line Interface (AWS CLI) or one of the AWS SDKs to make requests to AWS, these tools automatically sign the requests for you with the access key that you specify when you configure the tools.

In this release, each organization can have only one root. In a future release, a single organization will support multiple roots.

Support and Feedback for AWS Organizations

We welcome your feedback. Send your comments to feedback-awsorganizations@amazon.com or post your feedback and questions in the AWS Organizations support forum. For more information about the AWS support forums, see Forums Help.

Endpoint to Call When Using the CLI or the AWS API

For the current release of Organizations, you must specify the us-east-1 region for all AWS API and CLI calls. You can do this in the CLI by using these parameters and commands:

For the various SDKs used to call the APIs, see the documentation for the SDK of interest to learn how to direct the requests to a specific endpoint. For more information, see Regions and Endpoints in the AWS General Reference.
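For instance, with boto3 the endpoint requirement amounts to pinning the client region, roughly as follows:

```python
import boto3

# All Organizations API calls must use the us-east-1 endpoint, so pin the
# region explicitly when constructing the client.
client = boto3.client("organizations", region_name="us-east-1")
print(client.describe_organization()["Organization"]["MasterAccountEmail"])
```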

How examples are presented

The JSON that the AWS Organizations service returns in response to your requests is a single long string without line breaks or formatting whitespace. Both line breaks and whitespace are included in the examples in this guide to improve readability. When example input parameters would also result in long strings that extend beyond the screen, we insert line breaks to enhance readability. You should always submit the input as a single JSON text string.

Recording API Requests

AWS Organizations supports AWS CloudTrail, a service that records AWS API calls for your AWS account and delivers log files to an Amazon S3 bucket. By using information collected by AWS CloudTrail, you can determine which requests were successfully made to Organizations, who made the request, when it was made, and so on. For more information about AWS Organizations and its support for AWS CloudTrail, see Logging AWS Organizations Events with AWS CloudTrail in the AWS Organizations User Guide. To learn more about CloudTrail, including how to turn it on and find your log files, see the AWS CloudTrail User Guide.

" } diff --git a/services/pinpoint/src/main/resources/codegen-resources/api-2.json b/services/pinpoint/src/main/resources/codegen-resources/api-2.json index 18c2d07d8278..0976eea66bc4 100644 --- a/services/pinpoint/src/main/resources/codegen-resources/api-2.json +++ b/services/pinpoint/src/main/resources/codegen-resources/api-2.json @@ -9,6 +9,33 @@ "jsonVersion" : "1.1" }, "operations" : { + "CreateApp" : { + "name" : "CreateApp", + "http" : { + "method" : "POST", + "requestUri" : "/v1/apps", + "responseCode" : 201 + }, + "input" : { + "shape" : "CreateAppRequest" + }, + "output" : { + "shape" : "CreateAppResponse" + }, + "errors" : [ { + "shape" : "BadRequestException" + }, { + "shape" : "InternalServerErrorException" + }, { + "shape" : "ForbiddenException" + }, { + "shape" : "NotFoundException" + }, { + "shape" : "MethodNotAllowedException" + }, { + "shape" : "TooManyRequestsException" + } ] + }, "CreateCampaign" : { "name" : "CreateCampaign", "http" : { @@ -144,6 +171,33 @@ "shape" : "TooManyRequestsException" } ] }, + "DeleteApp" : { + "name" : "DeleteApp", + "http" : { + "method" : "DELETE", + "requestUri" : "/v1/apps/{application-id}", + "responseCode" : 200 + }, + "input" : { + "shape" : "DeleteAppRequest" + }, + "output" : { + "shape" : "DeleteAppResponse" + }, + "errors" : [ { + "shape" : "BadRequestException" + }, { + "shape" : "InternalServerErrorException" + }, { + "shape" : "ForbiddenException" + }, { + "shape" : "NotFoundException" + }, { + "shape" : "MethodNotAllowedException" + }, { + "shape" : "TooManyRequestsException" + } ] + }, "DeleteCampaign" : { "name" : "DeleteCampaign", "http" : { @@ -360,6 +414,33 @@ "shape" : "TooManyRequestsException" } ] }, + "GetApp" : { + "name" : "GetApp", + "http" : { + "method" : "GET", + "requestUri" : "/v1/apps/{application-id}", + "responseCode" : 200 + }, + "input" : { + "shape" : "GetAppRequest" + }, + "output" : { + "shape" : "GetAppResponse" + }, + "errors" : [ { + "shape" : "BadRequestException" + }, { + "shape" : "InternalServerErrorException" + }, { + "shape" : "ForbiddenException" + }, { + "shape" : "NotFoundException" + }, { + "shape" : "MethodNotAllowedException" + }, { + "shape" : "TooManyRequestsException" + } ] + }, "GetApplicationSettings" : { "name" : "GetApplicationSettings", "http" : { @@ -387,6 +468,33 @@ "shape" : "TooManyRequestsException" } ] }, + "GetApps" : { + "name" : "GetApps", + "http" : { + "method" : "GET", + "requestUri" : "/v1/apps", + "responseCode" : 200 + }, + "input" : { + "shape" : "GetAppsRequest" + }, + "output" : { + "shape" : "GetAppsResponse" + }, + "errors" : [ { + "shape" : "BadRequestException" + }, { + "shape" : "InternalServerErrorException" + }, { + "shape" : "ForbiddenException" + }, { + "shape" : "NotFoundException" + }, { + "shape" : "MethodNotAllowedException" + }, { + "shape" : "TooManyRequestsException" + } ] + }, "GetCampaign" : { "name" : "GetCampaign", "http" : { @@ -1387,6 +1495,17 @@ } } }, + "ApplicationResponse" : { + "type" : "structure", + "members" : { + "Id" : { + "shape" : "__string" + }, + "Name" : { + "shape" : "__string" + } + } + }, "ApplicationSettingsResource" : { "type" : "structure", "members" : { @@ -1404,6 +1523,17 @@ } } }, + "ApplicationsResponse" : { + "type" : "structure", + "members" : { + "Item" : { + "shape" : "ListOfApplicationResponse" + }, + "NextToken" : { + "shape" : "__string" + } + } + }, "AttributeDimension" : { "type" : "structure", "members" : { @@ -1440,6 +1570,9 @@ "Body" : { "shape" : "__string" }, + "FromAddress" : { + "shape" : "__string" + 
}, "HtmlBody" : { "shape" : "__string" }, @@ -1562,6 +1695,34 @@ "type" : "string", "enum" : [ "GCM", "APNS", "APNS_SANDBOX", "ADM", "SMS", "EMAIL" ] }, + "CreateAppRequest" : { + "type" : "structure", + "members" : { + "CreateApplicationRequest" : { + "shape" : "CreateApplicationRequest" + } + }, + "required" : [ "CreateApplicationRequest" ], + "payload" : "CreateApplicationRequest" + }, + "CreateAppResponse" : { + "type" : "structure", + "members" : { + "ApplicationResponse" : { + "shape" : "ApplicationResponse" + } + }, + "required" : [ "ApplicationResponse" ], + "payload" : "ApplicationResponse" + }, + "CreateApplicationRequest" : { + "type" : "structure", + "members" : { + "Name" : { + "shape" : "__string" + } + } + }, "CreateCampaignRequest" : { "type" : "structure", "members" : { @@ -1716,6 +1877,27 @@ "required" : [ "APNSSandboxChannelResponse" ], "payload" : "APNSSandboxChannelResponse" }, + "DeleteAppRequest" : { + "type" : "structure", + "members" : { + "ApplicationId" : { + "shape" : "__string", + "location" : "uri", + "locationName" : "application-id" + } + }, + "required" : [ "ApplicationId" ] + }, + "DeleteAppResponse" : { + "type" : "structure", + "members" : { + "ApplicationResponse" : { + "shape" : "ApplicationResponse" + } + }, + "required" : [ "ApplicationResponse" ], + "payload" : "ApplicationResponse" + }, "DeleteCampaignRequest" : { "type" : "structure", "members" : { @@ -2329,6 +2511,27 @@ "required" : [ "APNSSandboxChannelResponse" ], "payload" : "APNSSandboxChannelResponse" }, + "GetAppRequest" : { + "type" : "structure", + "members" : { + "ApplicationId" : { + "shape" : "__string", + "location" : "uri", + "locationName" : "application-id" + } + }, + "required" : [ "ApplicationId" ] + }, + "GetAppResponse" : { + "type" : "structure", + "members" : { + "ApplicationResponse" : { + "shape" : "ApplicationResponse" + } + }, + "required" : [ "ApplicationResponse" ], + "payload" : "ApplicationResponse" + }, "GetApplicationSettingsRequest" : { "type" : "structure", "members" : { @@ -2350,6 +2553,31 @@ "required" : [ "ApplicationSettingsResource" ], "payload" : "ApplicationSettingsResource" }, + "GetAppsRequest" : { + "type" : "structure", + "members" : { + "PageSize" : { + "shape" : "__string", + "location" : "querystring", + "locationName" : "page-size" + }, + "Token" : { + "shape" : "__string", + "location" : "querystring", + "locationName" : "token" + } + } + }, + "GetAppsResponse" : { + "type" : "structure", + "members" : { + "ApplicationsResponse" : { + "shape" : "ApplicationsResponse" + } + }, + "required" : [ "ApplicationsResponse" ], + "payload" : "ApplicationsResponse" + }, "GetCampaignActivitiesRequest" : { "type" : "structure", "members" : { @@ -2975,6 +3203,12 @@ "shape" : "ActivityResponse" } }, + "ListOfApplicationResponse" : { + "type" : "list", + "member" : { + "shape" : "ApplicationResponse" + } + }, "ListOfCampaignResponse" : { "type" : "list", "member" : { @@ -3104,6 +3338,9 @@ "MediaUrl" : { "shape" : "__string" }, + "RawContent" : { + "shape" : "__string" + }, "SilentPush" : { "shape" : "__boolean" }, @@ -3946,4 +4183,4 @@ "type" : "timestamp" } } -} \ No newline at end of file +} diff --git a/services/pinpoint/src/main/resources/codegen-resources/docs-2.json b/services/pinpoint/src/main/resources/codegen-resources/docs-2.json index 269407585a94..f2df42c2ae9e 100644 --- a/services/pinpoint/src/main/resources/codegen-resources/docs-2.json +++ b/services/pinpoint/src/main/resources/codegen-resources/docs-2.json @@ -2,11 +2,13 @@ "version" : "2.0", 
"service" : null, "operations" : { + "CreateApp" : "Used to create an app.", "CreateCampaign" : "Creates or updates a campaign.", "CreateImportJob" : "Creates or updates an import job.", "CreateSegment" : "Used to create or update a segment.", "DeleteApnsChannel" : "Deletes the APNs channel for an app.", "DeleteApnsSandboxChannel" : "Delete an APNS sandbox channel", + "DeleteApp" : "Deletes an app.", "DeleteCampaign" : "Deletes a campaign.", "DeleteEmailChannel" : "Delete an email channel", "DeleteEventStream" : "Deletes the event stream for an app.", @@ -15,7 +17,9 @@ "DeleteSmsChannel" : "Delete an SMS channel", "GetApnsChannel" : "Returns information about the APNs channel for an app.", "GetApnsSandboxChannel" : "Get an APNS sandbox channel", + "GetApp" : "Returns information about an app.", "GetApplicationSettings" : "Used to request the settings for an app.", + "GetApps" : "Returns information about your apps.", "GetCampaign" : "Returns information about a campaign.", "GetCampaignActivities" : "Returns information about the activity performed by a campaign.", "GetCampaignVersion" : "Returns information about a specific version of a campaign.", @@ -94,10 +98,20 @@ "MessageRequest$Addresses" : "A map of destination addresses, with the address as the key(Email address, phone number or push token) and the Address Configuration as the value." } }, + "ApplicationResponse" : { + "base" : "Application Response.", + "refs" : { + "ApplicationsResponse$Item" : "List of applications returned in this page." + } + }, "ApplicationSettingsResource" : { "base" : "Application settings.", "refs" : { } }, + "ApplicationsResponse" : { + "base" : "Get Applications Result.", + "refs" : { } + }, "AttributeDimension" : { "base" : "Custom attibute dimension", "refs" : { @@ -163,12 +177,16 @@ "ChannelType" : { "base" : null, "refs" : { - "AddressConfiguration$ChannelType" : "Type of channel of this address", - "EndpointBatchItem$ChannelType" : "The channel type.\n\nValid values: APNS, GCM", - "EndpointRequest$ChannelType" : "The channel type.\n\nValid values: APNS, GCM", - "EndpointResponse$ChannelType" : "The channel type.\n\nValid values: APNS, GCM" + "AddressConfiguration$ChannelType" : "The channel type.\n\nValid values: GCM | APNS | SMS | EMAIL", + "EndpointBatchItem$ChannelType" : "The channel type.\n\nValid values: GCM | APNS | SMS | EMAIL", + "EndpointRequest$ChannelType" : "The channel type.\n\nValid values: GCM | APNS | SMS | EMAIL", + "EndpointResponse$ChannelType" : "The channel type.\n\nValid values: GCM | APNS | SMS | EMAIL" } }, + "CreateApplicationRequest" : { + "base" : "Application Request.", + "refs" : { } + }, "DefaultMessage" : { "base" : "Default Message across push notification, email, and sms.", "refs" : { @@ -325,6 +343,10 @@ "base" : null, "refs" : { } }, + "ListOfApplicationResponse" : { + "base" : null, + "refs" : { } + }, "ListOfCampaignResponse" : { "base" : null, "refs" : { } @@ -642,7 +664,7 @@ "APNSChannelRequest$PrivateKey" : "The certificate private key.", "APNSChannelResponse$ApplicationId" : "The ID of the application to which the channel applies.", "APNSChannelResponse$CreationDate" : "When was this segment created", - "APNSChannelResponse$Id" : "Channel ID. Not used, only for backwards compatibility.", + "APNSChannelResponse$Id" : "Channel ID. Not used. 
Present only for backwards compatibility.", "APNSChannelResponse$LastModifiedBy" : "Who last updated this entry", "APNSChannelResponse$LastModifiedDate" : "Last date this was updated", "APNSChannelResponse$Platform" : "The platform type. Will be APNS.", @@ -661,7 +683,7 @@ "APNSSandboxChannelResponse$Id" : "Channel ID. Not used, only for backwards compatibility.", "APNSSandboxChannelResponse$LastModifiedBy" : "Who last updated this entry", "APNSSandboxChannelResponse$LastModifiedDate" : "Last date this was updated", - "APNSSandboxChannelResponse$Platform" : "The platform type. Will be APNS.", + "APNSSandboxChannelResponse$Platform" : "The platform type. Will be APNS_SANDBOX.", "ActivityResponse$ApplicationId" : "The ID of the application to which the campaign applies.", "ActivityResponse$CampaignId" : "The ID of the campaign to which the activity applies.", "ActivityResponse$End" : "The actual time the activity was marked CANCELLED or COMPLETED. Provided in ISO 8601 format.", @@ -674,9 +696,13 @@ "AddressConfiguration$BodyOverride" : "Body override. If specified will override default body.", "AddressConfiguration$RawContent" : "The Raw JSON formatted string to be used as the payload. This value overrides the message.", "AddressConfiguration$TitleOverride" : "Title override. If specified will override default title if applicable.", + "ApplicationResponse$Id" : "The unique application ID.", + "ApplicationResponse$Name" : "The display name of the application.", "ApplicationSettingsResource$ApplicationId" : "The unique ID for the application.", "ApplicationSettingsResource$LastModifiedDate" : "The date that the settings were last updated in ISO 8601 format.", + "ApplicationsResponse$NextToken" : "The string that you use in a subsequent request to get the next page of results in a paginated response.", "CampaignEmailMessage$Body" : "The email text body.", + "CampaignEmailMessage$FromAddress" : "The email address used to send the email from. Defaults to use FromAddress specified in the Email Channel.", "CampaignEmailMessage$HtmlBody" : "The email html body.", "CampaignEmailMessage$Title" : "The email title (Or subject).", "CampaignResponse$ApplicationId" : "The ID of the application to which the campaign applies.", @@ -691,6 +717,7 @@ "CampaignSmsMessage$Body" : "The SMS text body.", "CampaignSmsMessage$SenderId" : "Sender ID of sent message.", "CampaignsResponse$NextToken" : "The string that you use in a subsequent request to get the next page of results in a paginated response.", + "CreateApplicationRequest$Name" : "The display name of the application. 
Used in the Amazon Pinpoint console.", "DefaultMessage$Body" : "The message body of the notification, the email body or the text message.", "DefaultPushNotificationMessage$Body" : "The message body of the notification, the email body or the text message.", "DefaultPushNotificationMessage$Title" : "The message title that displays above the message on the user's device.", @@ -698,7 +725,7 @@ "EmailChannelRequest$FromAddress" : "The email address used to send emails from.", "EmailChannelRequest$Identity" : "The ARN of an identity verified with SES.", "EmailChannelRequest$RoleArn" : "The ARN of an IAM Role used to submit events to Mobile Analytics' event ingestion service", - "EmailChannelResponse$ApplicationId" : "Application id", + "EmailChannelResponse$ApplicationId" : "The unique ID of the application to which the email channel belongs.", "EmailChannelResponse$CreationDate" : "The date that the settings were last updated in ISO 8601 format.", "EmailChannelResponse$FromAddress" : "The email address used to send emails from.", "EmailChannelResponse$Id" : "Channel ID. Not used, only for backwards compatibility.", @@ -711,7 +738,7 @@ "EndpointBatchItem$EffectiveDate" : "The last time the endpoint was updated. Provided in ISO 8601 format.", "EndpointBatchItem$EndpointStatus" : "The endpoint status. Can be either ACTIVE or INACTIVE. Will be set to INACTIVE if a delivery fails. Will be set to ACTIVE if the address is updated.", "EndpointBatchItem$Id" : "The unique Id for the Endpoint in the batch.", - "EndpointBatchItem$OptOut" : "Indicates whether a user has opted out of receiving messages with one of the following values:\n\nALL – User receives all messages.\nNONE – User receives no messages.", + "EndpointBatchItem$OptOut" : "Indicates whether a user has opted out of receiving messages with one of the following values:\n\nALL - User has opted out of all messages.\n\nNONE - Users has not opted out and receives all messages.", "EndpointBatchItem$RequestId" : "The unique ID for the most recent request to update the endpoint.", "EndpointDemographic$AppVersion" : "The version of the application associated with the endpoint.", "EndpointDemographic$Locale" : "The endpoint locale in the following format: The ISO 639-1 alpha-2 code, followed by an underscore, followed by an ISO 3166-1 alpha-2 value.\n", @@ -728,7 +755,7 @@ "EndpointRequest$Address" : "The address or token of the endpoint as provided by your push provider (e.g. DeviceToken or RegistrationId).", "EndpointRequest$EffectiveDate" : "The last time the endpoint was updated. Provided in ISO 8601 format.", "EndpointRequest$EndpointStatus" : "The endpoint status. Can be either ACTIVE or INACTIVE. Will be set to INACTIVE if a delivery fails. Will be set to ACTIVE if the address is updated.", - "EndpointRequest$OptOut" : "Indicates whether a user has opted out of receiving messages with one of the following values:\n\nALL – User receives all messages.\nNONE – User receives no messages.", + "EndpointRequest$OptOut" : "Indicates whether a user has opted out of receiving messages with one of the following values:\n\nALL - User has opted out of all messages.\n\nNONE - Users has not opted out and receives all messages.", "EndpointRequest$RequestId" : "The unique ID for the most recent request to update the endpoint.", "EndpointResponse$Address" : "The address or token of the endpoint as provided by your push provider (e.g. 
DeviceToken or RegistrationId).", "EndpointResponse$ApplicationId" : "The ID of the application associated with the endpoint.", @@ -737,7 +764,7 @@ "EndpointResponse$EffectiveDate" : "The last time the endpoint was updated. Provided in ISO 8601 format.", "EndpointResponse$EndpointStatus" : "The endpoint status. Can be either ACTIVE or INACTIVE. Will be set to INACTIVE if a delivery fails. Will be set to ACTIVE if the address is updated.", "EndpointResponse$Id" : "The unique ID that you assigned to the endpoint. The ID should be a globally unique identifier (GUID) to ensure that it is unique compared to all other endpoints for the application.", - "EndpointResponse$OptOut" : "Indicates whether a user has opted out of receiving messages with one of the following values:\n\nALL – User receives all messages.\nNONE – User receives no messages.", + "EndpointResponse$OptOut" : "Indicates whether a user has opted out of receiving messages with one of the following values:\n\nALL - User has opted out of all messages.\n\nNONE - Users has not opted out and receives all messages.", "EndpointResponse$RequestId" : "The unique ID for the most recent request to update the endpoint.", "EndpointUser$UserId" : "The unique ID of the user.", "EventStream$ApplicationId" : "The ID of the application from which events should be published.", @@ -750,7 +777,7 @@ "GCMChannelResponse$ApplicationId" : "The ID of the application to which the channel applies.", "GCMChannelResponse$CreationDate" : "When was this segment created", "GCMChannelResponse$Credential" : "The GCM API key from Google.", - "GCMChannelResponse$Id" : "Channel ID. Not used, only for backwards compatibility.", + "GCMChannelResponse$Id" : "Channel ID. Not used. Present only for backwards compatibility.", "GCMChannelResponse$LastModifiedBy" : "Who last updated this entry", "GCMChannelResponse$LastModifiedDate" : "Last date this was updated", "GCMChannelResponse$Platform" : "The platform type. Will be GCM", @@ -787,6 +814,7 @@ "Message$ImageUrl" : "The URL that points to an image used in the push notification.", "Message$JsonBody" : "The JSON payload used for a silent push.", "Message$MediaUrl" : "The URL that points to the media resource, for example a .mp4 or .gif file.", + "Message$RawContent" : "The Raw JSON formatted string to be used as the payload. This value overrides the message.", "Message$Title" : "The message title that displays above the message on the user's device.", "Message$Url" : "The URL to open in the user's mobile browser. Used if the value for Action is URL.", "MessageBody$Message" : "The error message returned from the API.", @@ -798,7 +826,7 @@ "QuietTime$End" : "The default end time for quiet time in ISO 8601 format.", "QuietTime$Start" : "The default start time for quiet time in ISO 8601 format.", "SMSChannelRequest$SenderId" : "Sender identifier of your messages.", - "SMSChannelResponse$ApplicationId" : "Application id", + "SMSChannelResponse$ApplicationId" : "The unique ID of the application to which the SMS channel belongs.", "SMSChannelResponse$CreationDate" : "The date that the settings were last updated in ISO 8601 format.", "SMSChannelResponse$Id" : "Channel ID. 
Not used, only for backwards compatibility.", "SMSChannelResponse$LastModifiedBy" : "Who last updated this entry", @@ -840,4 +868,4 @@ } } } -} \ No newline at end of file +} diff --git a/services/pinpoint/src/main/resources/codegen-resources/service-2.json b/services/pinpoint/src/main/resources/codegen-resources/service-2.json index fa7ab4c523fc..62f2f85f9755 100644 --- a/services/pinpoint/src/main/resources/codegen-resources/service-2.json +++ b/services/pinpoint/src/main/resources/codegen-resources/service-2.json @@ -3,12 +3,41 @@ "apiVersion" : "2016-12-01", "endpointPrefix" : "pinpoint", "signingName" : "mobiletargeting", - "signatureVersion":"v4", "serviceFullName" : "Amazon Pinpoint", "protocol" : "rest-json", - "jsonVersion" : "1.1" + "jsonVersion" : "1.1", + "uid" : "pinpoint-2016-12-01", + "signatureVersion" : "v4" }, "operations" : { + "CreateApp" : { + "name" : "CreateApp", + "http" : { + "method" : "POST", + "requestUri" : "/v1/apps", + "responseCode" : 201 + }, + "input" : { + "shape" : "CreateAppRequest" + }, + "output" : { + "shape" : "CreateAppResponse" + }, + "errors" : [ { + "shape" : "BadRequestException" + }, { + "shape" : "InternalServerErrorException" + }, { + "shape" : "ForbiddenException" + }, { + "shape" : "NotFoundException" + }, { + "shape" : "MethodNotAllowedException" + }, { + "shape" : "TooManyRequestsException" + } ], + "documentation" : "Creates or updates an app." + }, "CreateCampaign" : { "name" : "CreateCampaign", "http" : { @@ -93,6 +122,34 @@ } ], "documentation" : "Used to create or update a segment." }, + "DeleteAdmChannel" : { + "name" : "DeleteAdmChannel", + "http" : { + "method" : "DELETE", + "requestUri" : "/v1/apps/{application-id}/channels/adm", + "responseCode" : 200 + }, + "input" : { + "shape" : "DeleteAdmChannelRequest" + }, + "output" : { + "shape" : "DeleteAdmChannelResponse" + }, + "errors" : [ { + "shape" : "BadRequestException" + }, { + "shape" : "InternalServerErrorException" + }, { + "shape" : "ForbiddenException" + }, { + "shape" : "NotFoundException" + }, { + "shape" : "MethodNotAllowedException" + }, { + "shape" : "TooManyRequestsException" + } ], + "documentation" : "Delete an ADM channel" + }, "DeleteApnsChannel" : { "name" : "DeleteApnsChannel", "http" : { @@ -149,18 +206,18 @@ } ], "documentation" : "Delete an APNS sandbox channel" }, - "DeleteCampaign" : { - "name" : "DeleteCampaign", + "DeleteApnsVoipChannel" : { + "name" : "DeleteApnsVoipChannel", "http" : { "method" : "DELETE", - "requestUri" : "/v1/apps/{application-id}/campaigns/{campaign-id}", + "requestUri" : "/v1/apps/{application-id}/channels/apns_voip", "responseCode" : 200 }, "input" : { - "shape" : "DeleteCampaignRequest" + "shape" : "DeleteApnsVoipChannelRequest" }, "output" : { - "shape" : "DeleteCampaignResponse" + "shape" : "DeleteApnsVoipChannelResponse" }, "errors" : [ { "shape" : "BadRequestException" @@ -175,20 +232,20 @@ }, { "shape" : "TooManyRequestsException" } ], - "documentation" : "Deletes a campaign." 
+ "documentation" : "Delete an APNS VoIP channel" }, - "DeleteEmailChannel" : { - "name" : "DeleteEmailChannel", + "DeleteApnsVoipSandboxChannel" : { + "name" : "DeleteApnsVoipSandboxChannel", "http" : { "method" : "DELETE", - "requestUri" : "/v1/apps/{application-id}/channels/email", + "requestUri" : "/v1/apps/{application-id}/channels/apns_voip_sandbox", "responseCode" : 200 }, "input" : { - "shape" : "DeleteEmailChannelRequest" + "shape" : "DeleteApnsVoipSandboxChannelRequest" }, "output" : { - "shape" : "DeleteEmailChannelResponse" + "shape" : "DeleteApnsVoipSandboxChannelResponse" }, "errors" : [ { "shape" : "BadRequestException" @@ -203,20 +260,20 @@ }, { "shape" : "TooManyRequestsException" } ], - "documentation" : "Delete an email channel" + "documentation" : "Delete an APNS VoIP sandbox channel" }, - "DeleteEventStream" : { - "name" : "DeleteEventStream", + "DeleteApp" : { + "name" : "DeleteApp", "http" : { "method" : "DELETE", - "requestUri" : "/v1/apps/{application-id}/eventstream", + "requestUri" : "/v1/apps/{application-id}", "responseCode" : 200 }, "input" : { - "shape" : "DeleteEventStreamRequest" + "shape" : "DeleteAppRequest" }, "output" : { - "shape" : "DeleteEventStreamResponse" + "shape" : "DeleteAppResponse" }, "errors" : [ { "shape" : "BadRequestException" @@ -231,20 +288,20 @@ }, { "shape" : "TooManyRequestsException" } ], - "documentation" : "Deletes the event stream for an app." + "documentation" : "Deletes an app." }, - "DeleteGcmChannel" : { - "name" : "DeleteGcmChannel", + "DeleteBaiduChannel" : { + "name" : "DeleteBaiduChannel", "http" : { "method" : "DELETE", - "requestUri" : "/v1/apps/{application-id}/channels/gcm", + "requestUri" : "/v1/apps/{application-id}/channels/baidu", "responseCode" : 200 }, "input" : { - "shape" : "DeleteGcmChannelRequest" + "shape" : "DeleteBaiduChannelRequest" }, "output" : { - "shape" : "DeleteGcmChannelResponse" + "shape" : "DeleteBaiduChannelResponse" }, "errors" : [ { "shape" : "BadRequestException" @@ -259,20 +316,20 @@ }, { "shape" : "TooManyRequestsException" } ], - "documentation" : "Deletes the GCM channel for an app." + "documentation" : "Delete a BAIDU GCM channel" }, - "DeleteSegment" : { - "name" : "DeleteSegment", + "DeleteCampaign" : { + "name" : "DeleteCampaign", "http" : { "method" : "DELETE", - "requestUri" : "/v1/apps/{application-id}/segments/{segment-id}", + "requestUri" : "/v1/apps/{application-id}/campaigns/{campaign-id}", "responseCode" : 200 }, "input" : { - "shape" : "DeleteSegmentRequest" + "shape" : "DeleteCampaignRequest" }, "output" : { - "shape" : "DeleteSegmentResponse" + "shape" : "DeleteCampaignResponse" }, "errors" : [ { "shape" : "BadRequestException" @@ -287,20 +344,20 @@ }, { "shape" : "TooManyRequestsException" } ], - "documentation" : "Deletes a segment." + "documentation" : "Deletes a campaign." 
}, - "DeleteSmsChannel" : { - "name" : "DeleteSmsChannel", + "DeleteEmailChannel" : { + "name" : "DeleteEmailChannel", "http" : { "method" : "DELETE", - "requestUri" : "/v1/apps/{application-id}/channels/sms", + "requestUri" : "/v1/apps/{application-id}/channels/email", "responseCode" : 200 }, "input" : { - "shape" : "DeleteSmsChannelRequest" + "shape" : "DeleteEmailChannelRequest" }, "output" : { - "shape" : "DeleteSmsChannelResponse" + "shape" : "DeleteEmailChannelResponse" }, "errors" : [ { "shape" : "BadRequestException" @@ -315,20 +372,20 @@ }, { "shape" : "TooManyRequestsException" } ], - "documentation" : "Delete an SMS channel" + "documentation" : "Delete an email channel" }, - "GetApnsChannel" : { - "name" : "GetApnsChannel", + "DeleteEventStream" : { + "name" : "DeleteEventStream", "http" : { - "method" : "GET", - "requestUri" : "/v1/apps/{application-id}/channels/apns", + "method" : "DELETE", + "requestUri" : "/v1/apps/{application-id}/eventstream", "responseCode" : 200 }, "input" : { - "shape" : "GetApnsChannelRequest" + "shape" : "DeleteEventStreamRequest" }, "output" : { - "shape" : "GetApnsChannelResponse" + "shape" : "DeleteEventStreamResponse" }, "errors" : [ { "shape" : "BadRequestException" @@ -343,20 +400,20 @@ }, { "shape" : "TooManyRequestsException" } ], - "documentation" : "Returns information about the APNs channel for an app." + "documentation" : "Deletes the event stream for an app." }, - "GetApnsSandboxChannel" : { - "name" : "GetApnsSandboxChannel", + "DeleteGcmChannel" : { + "name" : "DeleteGcmChannel", "http" : { - "method" : "GET", - "requestUri" : "/v1/apps/{application-id}/channels/apns_sandbox", + "method" : "DELETE", + "requestUri" : "/v1/apps/{application-id}/channels/gcm", "responseCode" : 200 }, "input" : { - "shape" : "GetApnsSandboxChannelRequest" + "shape" : "DeleteGcmChannelRequest" }, "output" : { - "shape" : "GetApnsSandboxChannelResponse" + "shape" : "DeleteGcmChannelResponse" }, "errors" : [ { "shape" : "BadRequestException" @@ -371,20 +428,20 @@ }, { "shape" : "TooManyRequestsException" } ], - "documentation" : "Get an APNS sandbox channel" + "documentation" : "Deletes the GCM channel for an app." }, - "GetApplicationSettings" : { - "name" : "GetApplicationSettings", + "DeleteSegment" : { + "name" : "DeleteSegment", "http" : { - "method" : "GET", - "requestUri" : "/v1/apps/{application-id}/settings", + "method" : "DELETE", + "requestUri" : "/v1/apps/{application-id}/segments/{segment-id}", "responseCode" : 200 }, "input" : { - "shape" : "GetApplicationSettingsRequest" + "shape" : "DeleteSegmentRequest" }, "output" : { - "shape" : "GetApplicationSettingsResponse" + "shape" : "DeleteSegmentResponse" }, "errors" : [ { "shape" : "BadRequestException" @@ -399,20 +456,20 @@ }, { "shape" : "TooManyRequestsException" } ], - "documentation" : "Used to request the settings for an app." + "documentation" : "Deletes a segment." 
}, - "GetCampaign" : { - "name" : "GetCampaign", + "DeleteSmsChannel" : { + "name" : "DeleteSmsChannel", "http" : { - "method" : "GET", - "requestUri" : "/v1/apps/{application-id}/campaigns/{campaign-id}", + "method" : "DELETE", + "requestUri" : "/v1/apps/{application-id}/channels/sms", "responseCode" : 200 }, "input" : { - "shape" : "GetCampaignRequest" + "shape" : "DeleteSmsChannelRequest" }, "output" : { - "shape" : "GetCampaignResponse" + "shape" : "DeleteSmsChannelResponse" }, "errors" : [ { "shape" : "BadRequestException" @@ -427,20 +484,20 @@ }, { "shape" : "TooManyRequestsException" } ], - "documentation" : "Returns information about a campaign." + "documentation" : "Delete an SMS channel" }, - "GetCampaignActivities" : { - "name" : "GetCampaignActivities", + "GetAdmChannel" : { + "name" : "GetAdmChannel", "http" : { "method" : "GET", - "requestUri" : "/v1/apps/{application-id}/campaigns/{campaign-id}/activities", + "requestUri" : "/v1/apps/{application-id}/channels/adm", "responseCode" : 200 }, "input" : { - "shape" : "GetCampaignActivitiesRequest" + "shape" : "GetAdmChannelRequest" }, "output" : { - "shape" : "GetCampaignActivitiesResponse" + "shape" : "GetAdmChannelResponse" }, "errors" : [ { "shape" : "BadRequestException" @@ -455,20 +512,20 @@ }, { "shape" : "TooManyRequestsException" } ], - "documentation" : "Returns information about the activity performed by a campaign." + "documentation" : "Get an ADM channel" }, - "GetCampaignVersion" : { - "name" : "GetCampaignVersion", + "GetApnsChannel" : { + "name" : "GetApnsChannel", "http" : { "method" : "GET", - "requestUri" : "/v1/apps/{application-id}/campaigns/{campaign-id}/versions/{version}", + "requestUri" : "/v1/apps/{application-id}/channels/apns", "responseCode" : 200 }, "input" : { - "shape" : "GetCampaignVersionRequest" + "shape" : "GetApnsChannelRequest" }, "output" : { - "shape" : "GetCampaignVersionResponse" + "shape" : "GetApnsChannelResponse" }, "errors" : [ { "shape" : "BadRequestException" @@ -483,20 +540,20 @@ }, { "shape" : "TooManyRequestsException" } ], - "documentation" : "Returns information about a specific version of a campaign." + "documentation" : "Returns information about the APNs channel for an app." }, - "GetCampaignVersions" : { - "name" : "GetCampaignVersions", + "GetApnsSandboxChannel" : { + "name" : "GetApnsSandboxChannel", "http" : { "method" : "GET", - "requestUri" : "/v1/apps/{application-id}/campaigns/{campaign-id}/versions", + "requestUri" : "/v1/apps/{application-id}/channels/apns_sandbox", "responseCode" : 200 }, "input" : { - "shape" : "GetCampaignVersionsRequest" + "shape" : "GetApnsSandboxChannelRequest" }, "output" : { - "shape" : "GetCampaignVersionsResponse" + "shape" : "GetApnsSandboxChannelResponse" }, "errors" : [ { "shape" : "BadRequestException" @@ -511,20 +568,20 @@ }, { "shape" : "TooManyRequestsException" } ], - "documentation" : "Returns information about your campaign versions." 
+ "documentation" : "Get an APNS sandbox channel" }, - "GetCampaigns" : { - "name" : "GetCampaigns", + "GetApnsVoipChannel" : { + "name" : "GetApnsVoipChannel", "http" : { "method" : "GET", - "requestUri" : "/v1/apps/{application-id}/campaigns", + "requestUri" : "/v1/apps/{application-id}/channels/apns_voip", "responseCode" : 200 }, "input" : { - "shape" : "GetCampaignsRequest" + "shape" : "GetApnsVoipChannelRequest" }, "output" : { - "shape" : "GetCampaignsResponse" + "shape" : "GetApnsVoipChannelResponse" }, "errors" : [ { "shape" : "BadRequestException" @@ -539,20 +596,20 @@ }, { "shape" : "TooManyRequestsException" } ], - "documentation" : "Returns information about your campaigns." + "documentation" : "Get an APNS VoIP channel" }, - "GetEmailChannel" : { - "name" : "GetEmailChannel", + "GetApnsVoipSandboxChannel" : { + "name" : "GetApnsVoipSandboxChannel", "http" : { "method" : "GET", - "requestUri" : "/v1/apps/{application-id}/channels/email", + "requestUri" : "/v1/apps/{application-id}/channels/apns_voip_sandbox", "responseCode" : 200 }, "input" : { - "shape" : "GetEmailChannelRequest" + "shape" : "GetApnsVoipSandboxChannelRequest" }, "output" : { - "shape" : "GetEmailChannelResponse" + "shape" : "GetApnsVoipSandboxChannelResponse" }, "errors" : [ { "shape" : "BadRequestException" @@ -567,20 +624,20 @@ }, { "shape" : "TooManyRequestsException" } ], - "documentation" : "Get an email channel" + "documentation" : "Get an APNS VoipSandbox channel" }, - "GetEndpoint" : { - "name" : "GetEndpoint", + "GetApp" : { + "name" : "GetApp", "http" : { "method" : "GET", - "requestUri" : "/v1/apps/{application-id}/endpoints/{endpoint-id}", + "requestUri" : "/v1/apps/{application-id}", "responseCode" : 200 }, "input" : { - "shape" : "GetEndpointRequest" + "shape" : "GetAppRequest" }, "output" : { - "shape" : "GetEndpointResponse" + "shape" : "GetAppResponse" }, "errors" : [ { "shape" : "BadRequestException" @@ -595,20 +652,20 @@ }, { "shape" : "TooManyRequestsException" } ], - "documentation" : "Returns information about an endpoint." + "documentation" : "Returns information about an app." }, - "GetEventStream" : { - "name" : "GetEventStream", + "GetApplicationSettings" : { + "name" : "GetApplicationSettings", "http" : { "method" : "GET", - "requestUri" : "/v1/apps/{application-id}/eventstream", + "requestUri" : "/v1/apps/{application-id}/settings", "responseCode" : 200 }, "input" : { - "shape" : "GetEventStreamRequest" + "shape" : "GetApplicationSettingsRequest" }, "output" : { - "shape" : "GetEventStreamResponse" + "shape" : "GetApplicationSettingsResponse" }, "errors" : [ { "shape" : "BadRequestException" @@ -623,20 +680,20 @@ }, { "shape" : "TooManyRequestsException" } ], - "documentation" : "Returns the event stream for an app." + "documentation" : "Used to request the settings for an app." }, - "GetGcmChannel" : { - "name" : "GetGcmChannel", + "GetApps" : { + "name" : "GetApps", "http" : { "method" : "GET", - "requestUri" : "/v1/apps/{application-id}/channels/gcm", + "requestUri" : "/v1/apps", "responseCode" : 200 }, "input" : { - "shape" : "GetGcmChannelRequest" + "shape" : "GetAppsRequest" }, "output" : { - "shape" : "GetGcmChannelResponse" + "shape" : "GetAppsResponse" }, "errors" : [ { "shape" : "BadRequestException" @@ -651,20 +708,20 @@ }, { "shape" : "TooManyRequestsException" } ], - "documentation" : "Returns information about the GCM channel for an app." + "documentation" : "Returns information about your apps." 
}, - "GetImportJob" : { - "name" : "GetImportJob", + "GetBaiduChannel" : { + "name" : "GetBaiduChannel", "http" : { "method" : "GET", - "requestUri" : "/v1/apps/{application-id}/jobs/import/{job-id}", + "requestUri" : "/v1/apps/{application-id}/channels/baidu", "responseCode" : 200 }, "input" : { - "shape" : "GetImportJobRequest" + "shape" : "GetBaiduChannelRequest" }, "output" : { - "shape" : "GetImportJobResponse" + "shape" : "GetBaiduChannelResponse" }, "errors" : [ { "shape" : "BadRequestException" @@ -679,20 +736,20 @@ }, { "shape" : "TooManyRequestsException" } ], - "documentation" : "Returns information about an import job." + "documentation" : "Get a BAIDU GCM channel" }, - "GetImportJobs" : { - "name" : "GetImportJobs", + "GetCampaign" : { + "name" : "GetCampaign", "http" : { "method" : "GET", - "requestUri" : "/v1/apps/{application-id}/jobs/import", + "requestUri" : "/v1/apps/{application-id}/campaigns/{campaign-id}", "responseCode" : 200 }, "input" : { - "shape" : "GetImportJobsRequest" + "shape" : "GetCampaignRequest" }, "output" : { - "shape" : "GetImportJobsResponse" + "shape" : "GetCampaignResponse" }, "errors" : [ { "shape" : "BadRequestException" @@ -707,20 +764,20 @@ }, { "shape" : "TooManyRequestsException" } ], - "documentation" : "Returns information about your import jobs." + "documentation" : "Returns information about a campaign." }, - "GetSegment" : { - "name" : "GetSegment", + "GetCampaignActivities" : { + "name" : "GetCampaignActivities", "http" : { "method" : "GET", - "requestUri" : "/v1/apps/{application-id}/segments/{segment-id}", + "requestUri" : "/v1/apps/{application-id}/campaigns/{campaign-id}/activities", "responseCode" : 200 }, "input" : { - "shape" : "GetSegmentRequest" + "shape" : "GetCampaignActivitiesRequest" }, "output" : { - "shape" : "GetSegmentResponse" + "shape" : "GetCampaignActivitiesResponse" }, "errors" : [ { "shape" : "BadRequestException" @@ -735,20 +792,20 @@ }, { "shape" : "TooManyRequestsException" } ], - "documentation" : "Returns information about a segment." + "documentation" : "Returns information about the activity performed by a campaign." }, - "GetSegmentImportJobs" : { - "name" : "GetSegmentImportJobs", + "GetCampaignVersion" : { + "name" : "GetCampaignVersion", "http" : { "method" : "GET", - "requestUri" : "/v1/apps/{application-id}/segments/{segment-id}/jobs/import", + "requestUri" : "/v1/apps/{application-id}/campaigns/{campaign-id}/versions/{version}", "responseCode" : 200 }, "input" : { - "shape" : "GetSegmentImportJobsRequest" + "shape" : "GetCampaignVersionRequest" }, "output" : { - "shape" : "GetSegmentImportJobsResponse" + "shape" : "GetCampaignVersionResponse" }, "errors" : [ { "shape" : "BadRequestException" @@ -763,20 +820,20 @@ }, { "shape" : "TooManyRequestsException" } ], - "documentation" : "Returns a list of import jobs for a specific segment." + "documentation" : "Returns information about a specific version of a campaign." 
}, - "GetSegmentVersion" : { - "name" : "GetSegmentVersion", + "GetCampaignVersions" : { + "name" : "GetCampaignVersions", "http" : { "method" : "GET", - "requestUri" : "/v1/apps/{application-id}/segments/{segment-id}/versions/{version}", + "requestUri" : "/v1/apps/{application-id}/campaigns/{campaign-id}/versions", "responseCode" : 200 }, "input" : { - "shape" : "GetSegmentVersionRequest" + "shape" : "GetCampaignVersionsRequest" }, "output" : { - "shape" : "GetSegmentVersionResponse" + "shape" : "GetCampaignVersionsResponse" }, "errors" : [ { "shape" : "BadRequestException" @@ -791,20 +848,20 @@ }, { "shape" : "TooManyRequestsException" } ], - "documentation" : "Returns information about a segment version." + "documentation" : "Returns information about your campaign versions." }, - "GetSegmentVersions" : { - "name" : "GetSegmentVersions", + "GetCampaigns" : { + "name" : "GetCampaigns", "http" : { "method" : "GET", - "requestUri" : "/v1/apps/{application-id}/segments/{segment-id}/versions", + "requestUri" : "/v1/apps/{application-id}/campaigns", "responseCode" : 200 }, "input" : { - "shape" : "GetSegmentVersionsRequest" + "shape" : "GetCampaignsRequest" }, "output" : { - "shape" : "GetSegmentVersionsResponse" + "shape" : "GetCampaignsResponse" }, "errors" : [ { "shape" : "BadRequestException" @@ -819,20 +876,20 @@ }, { "shape" : "TooManyRequestsException" } ], - "documentation" : "Returns information about your segment versions." + "documentation" : "Returns information about your campaigns." }, - "GetSegments" : { - "name" : "GetSegments", + "GetEmailChannel" : { + "name" : "GetEmailChannel", "http" : { "method" : "GET", - "requestUri" : "/v1/apps/{application-id}/segments", + "requestUri" : "/v1/apps/{application-id}/channels/email", "responseCode" : 200 }, "input" : { - "shape" : "GetSegmentsRequest" + "shape" : "GetEmailChannelRequest" }, "output" : { - "shape" : "GetSegmentsResponse" + "shape" : "GetEmailChannelResponse" }, "errors" : [ { "shape" : "BadRequestException" @@ -847,20 +904,20 @@ }, { "shape" : "TooManyRequestsException" } ], - "documentation" : "Used to get information about your segments." + "documentation" : "Get an email channel" }, - "GetSmsChannel" : { - "name" : "GetSmsChannel", + "GetEndpoint" : { + "name" : "GetEndpoint", "http" : { "method" : "GET", - "requestUri" : "/v1/apps/{application-id}/channels/sms", + "requestUri" : "/v1/apps/{application-id}/endpoints/{endpoint-id}", "responseCode" : 200 }, "input" : { - "shape" : "GetSmsChannelRequest" + "shape" : "GetEndpointRequest" }, "output" : { - "shape" : "GetSmsChannelResponse" + "shape" : "GetEndpointResponse" }, "errors" : [ { "shape" : "BadRequestException" @@ -875,20 +932,20 @@ }, { "shape" : "TooManyRequestsException" } ], - "documentation" : "Get an SMS channel" + "documentation" : "Returns information about an endpoint." }, - "PutEventStream" : { - "name" : "PutEventStream", + "GetEventStream" : { + "name" : "GetEventStream", "http" : { - "method" : "POST", + "method" : "GET", "requestUri" : "/v1/apps/{application-id}/eventstream", "responseCode" : 200 }, "input" : { - "shape" : "PutEventStreamRequest" + "shape" : "GetEventStreamRequest" }, "output" : { - "shape" : "PutEventStreamResponse" + "shape" : "GetEventStreamResponse" }, "errors" : [ { "shape" : "BadRequestException" @@ -903,20 +960,20 @@ }, { "shape" : "TooManyRequestsException" } ], - "documentation" : "Use to create or update the event stream for an app." + "documentation" : "Returns the event stream for an app." 
}, - "SendMessages" : { - "name" : "SendMessages", + "GetGcmChannel" : { + "name" : "GetGcmChannel", "http" : { - "method" : "POST", - "requestUri" : "/v1/apps/{application-id}/messages", + "method" : "GET", + "requestUri" : "/v1/apps/{application-id}/channels/gcm", "responseCode" : 200 }, "input" : { - "shape" : "SendMessagesRequest" + "shape" : "GetGcmChannelRequest" }, "output" : { - "shape" : "SendMessagesResponse" + "shape" : "GetGcmChannelResponse" }, "errors" : [ { "shape" : "BadRequestException" @@ -931,20 +988,20 @@ }, { "shape" : "TooManyRequestsException" } ], - "documentation" : "Send a batch of messages" + "documentation" : "Returns information about the GCM channel for an app." }, - "UpdateApnsChannel" : { - "name" : "UpdateApnsChannel", + "GetImportJob" : { + "name" : "GetImportJob", "http" : { - "method" : "PUT", - "requestUri" : "/v1/apps/{application-id}/channels/apns", + "method" : "GET", + "requestUri" : "/v1/apps/{application-id}/jobs/import/{job-id}", "responseCode" : 200 }, "input" : { - "shape" : "UpdateApnsChannelRequest" + "shape" : "GetImportJobRequest" }, "output" : { - "shape" : "UpdateApnsChannelResponse" + "shape" : "GetImportJobResponse" }, "errors" : [ { "shape" : "BadRequestException" @@ -959,20 +1016,20 @@ }, { "shape" : "TooManyRequestsException" } ], - "documentation" : "Use to update the APNs channel for an app." + "documentation" : "Returns information about an import job." }, - "UpdateApnsSandboxChannel" : { - "name" : "UpdateApnsSandboxChannel", + "GetImportJobs" : { + "name" : "GetImportJobs", "http" : { - "method" : "PUT", - "requestUri" : "/v1/apps/{application-id}/channels/apns_sandbox", + "method" : "GET", + "requestUri" : "/v1/apps/{application-id}/jobs/import", "responseCode" : 200 }, "input" : { - "shape" : "UpdateApnsSandboxChannelRequest" + "shape" : "GetImportJobsRequest" }, "output" : { - "shape" : "UpdateApnsSandboxChannelResponse" + "shape" : "GetImportJobsResponse" }, "errors" : [ { "shape" : "BadRequestException" @@ -987,20 +1044,20 @@ }, { "shape" : "TooManyRequestsException" } ], - "documentation" : "Update an APNS sandbox channel" + "documentation" : "Returns information about your import jobs." }, - "UpdateApplicationSettings" : { - "name" : "UpdateApplicationSettings", + "GetSegment" : { + "name" : "GetSegment", "http" : { - "method" : "PUT", - "requestUri" : "/v1/apps/{application-id}/settings", + "method" : "GET", + "requestUri" : "/v1/apps/{application-id}/segments/{segment-id}", "responseCode" : 200 }, "input" : { - "shape" : "UpdateApplicationSettingsRequest" + "shape" : "GetSegmentRequest" }, "output" : { - "shape" : "UpdateApplicationSettingsResponse" + "shape" : "GetSegmentResponse" }, "errors" : [ { "shape" : "BadRequestException" @@ -1015,20 +1072,20 @@ }, { "shape" : "TooManyRequestsException" } ], - "documentation" : "Used to update the settings for an app." + "documentation" : "Returns information about a segment." 
}, - "UpdateCampaign" : { - "name" : "UpdateCampaign", + "GetSegmentImportJobs" : { + "name" : "GetSegmentImportJobs", "http" : { - "method" : "PUT", - "requestUri" : "/v1/apps/{application-id}/campaigns/{campaign-id}", + "method" : "GET", + "requestUri" : "/v1/apps/{application-id}/segments/{segment-id}/jobs/import", "responseCode" : 200 }, "input" : { - "shape" : "UpdateCampaignRequest" + "shape" : "GetSegmentImportJobsRequest" }, "output" : { - "shape" : "UpdateCampaignResponse" + "shape" : "GetSegmentImportJobsResponse" }, "errors" : [ { "shape" : "BadRequestException" @@ -1043,20 +1100,20 @@ }, { "shape" : "TooManyRequestsException" } ], - "documentation" : "Use to update a campaign." + "documentation" : "Returns a list of import jobs for a specific segment." }, - "UpdateEmailChannel" : { - "name" : "UpdateEmailChannel", + "GetSegmentVersion" : { + "name" : "GetSegmentVersion", "http" : { - "method" : "PUT", - "requestUri" : "/v1/apps/{application-id}/channels/email", + "method" : "GET", + "requestUri" : "/v1/apps/{application-id}/segments/{segment-id}/versions/{version}", "responseCode" : 200 }, "input" : { - "shape" : "UpdateEmailChannelRequest" + "shape" : "GetSegmentVersionRequest" }, "output" : { - "shape" : "UpdateEmailChannelResponse" + "shape" : "GetSegmentVersionResponse" }, "errors" : [ { "shape" : "BadRequestException" @@ -1071,20 +1128,20 @@ }, { "shape" : "TooManyRequestsException" } ], - "documentation" : "Update an email channel" + "documentation" : "Returns information about a segment version." }, - "UpdateEndpoint" : { - "name" : "UpdateEndpoint", + "GetSegmentVersions" : { + "name" : "GetSegmentVersions", "http" : { - "method" : "PUT", - "requestUri" : "/v1/apps/{application-id}/endpoints/{endpoint-id}", - "responseCode" : 202 + "method" : "GET", + "requestUri" : "/v1/apps/{application-id}/segments/{segment-id}/versions", + "responseCode" : 200 }, "input" : { - "shape" : "UpdateEndpointRequest" + "shape" : "GetSegmentVersionsRequest" }, "output" : { - "shape" : "UpdateEndpointResponse" + "shape" : "GetSegmentVersionsResponse" }, "errors" : [ { "shape" : "BadRequestException" @@ -1099,20 +1156,20 @@ }, { "shape" : "TooManyRequestsException" } ], - "documentation" : "Use to update an endpoint." + "documentation" : "Returns information about your segment versions." }, - "UpdateEndpointsBatch" : { - "name" : "UpdateEndpointsBatch", + "GetSegments" : { + "name" : "GetSegments", "http" : { - "method" : "PUT", - "requestUri" : "/v1/apps/{application-id}/endpoints", - "responseCode" : 202 + "method" : "GET", + "requestUri" : "/v1/apps/{application-id}/segments", + "responseCode" : 200 }, "input" : { - "shape" : "UpdateEndpointsBatchRequest" + "shape" : "GetSegmentsRequest" }, "output" : { - "shape" : "UpdateEndpointsBatchResponse" + "shape" : "GetSegmentsResponse" }, "errors" : [ { "shape" : "BadRequestException" @@ -1127,20 +1184,20 @@ }, { "shape" : "TooManyRequestsException" } ], - "documentation" : "Use to update a batch of endpoints." + "documentation" : "Used to get information about your segments." 
}, - "UpdateGcmChannel" : { - "name" : "UpdateGcmChannel", + "GetSmsChannel" : { + "name" : "GetSmsChannel", "http" : { - "method" : "PUT", - "requestUri" : "/v1/apps/{application-id}/channels/gcm", + "method" : "GET", + "requestUri" : "/v1/apps/{application-id}/channels/sms", "responseCode" : 200 }, "input" : { - "shape" : "UpdateGcmChannelRequest" + "shape" : "GetSmsChannelRequest" }, "output" : { - "shape" : "UpdateGcmChannelResponse" + "shape" : "GetSmsChannelResponse" }, "errors" : [ { "shape" : "BadRequestException" @@ -1155,20 +1212,20 @@ }, { "shape" : "TooManyRequestsException" } ], - "documentation" : "Use to update the GCM channel for an app." + "documentation" : "Get an SMS channel" }, - "UpdateSegment" : { - "name" : "UpdateSegment", + "PutEventStream" : { + "name" : "PutEventStream", "http" : { - "method" : "PUT", - "requestUri" : "/v1/apps/{application-id}/segments/{segment-id}", + "method" : "POST", + "requestUri" : "/v1/apps/{application-id}/eventstream", "responseCode" : 200 }, "input" : { - "shape" : "UpdateSegmentRequest" + "shape" : "PutEventStreamRequest" }, "output" : { - "shape" : "UpdateSegmentResponse" + "shape" : "PutEventStreamResponse" }, "errors" : [ { "shape" : "BadRequestException" @@ -1183,20 +1240,20 @@ }, { "shape" : "TooManyRequestsException" } ], - "documentation" : "Use to update a segment." + "documentation" : "Use to create or update the event stream for an app." }, - "UpdateSmsChannel" : { - "name" : "UpdateSmsChannel", + "SendMessages" : { + "name" : "SendMessages", "http" : { - "method" : "PUT", - "requestUri" : "/v1/apps/{application-id}/channels/sms", + "method" : "POST", + "requestUri" : "/v1/apps/{application-id}/messages", "responseCode" : 200 }, "input" : { - "shape" : "UpdateSmsChannelRequest" + "shape" : "SendMessagesRequest" }, "output" : { - "shape" : "UpdateSmsChannelResponse" + "shape" : "SendMessagesResponse" }, "errors" : [ { "shape" : "BadRequestException" @@ -1211,135 +1268,929 @@ }, { "shape" : "TooManyRequestsException" } ], - "documentation" : "Update an SMS channel" - } - }, - "shapes" : { - "APNSChannelRequest" : { - "type" : "structure", - "members" : { - "Certificate" : { - "shape" : "__string", - "documentation" : "The distribution certificate from Apple." - }, - "Enabled" : { - "shape" : "__boolean", - "documentation" : "If the channel is enabled for sending messages." - }, - "PrivateKey" : { - "shape" : "__string", - "documentation" : "The certificate private key." - } - }, - "documentation" : "Apple Push Notification Service channel definition." + "documentation" : "Send a batch of messages" }, - "APNSChannelResponse" : { - "type" : "structure", - "members" : { - "ApplicationId" : { - "shape" : "__string", - "documentation" : "The ID of the application to which the channel applies." - }, - "CreationDate" : { - "shape" : "__string", - "documentation" : "When was this segment created" - }, - "Enabled" : { - "shape" : "__boolean", - "documentation" : "If the channel is enabled for sending messages." - }, - "Id" : { - "shape" : "__string", - "documentation" : "Channel ID. Not used, only for backwards compatibility." 
+ "SendUsersMessages" : { + "name" : "SendUsersMessages", + "http" : { + "method" : "POST", + "requestUri" : "/v1/apps/{application-id}/users-messages", + "responseCode" : 200 + }, + "input" : { + "shape" : "SendUsersMessagesRequest" + }, + "output" : { + "shape" : "SendUsersMessagesResponse" + }, + "errors" : [ { + "shape" : "BadRequestException" + }, { + "shape" : "InternalServerErrorException" + }, { + "shape" : "ForbiddenException" + }, { + "shape" : "NotFoundException" + }, { + "shape" : "MethodNotAllowedException" + }, { + "shape" : "TooManyRequestsException" + } ], + "documentation" : "Send a batch of messages to users" + }, + "UpdateAdmChannel" : { + "name" : "UpdateAdmChannel", + "http" : { + "method" : "PUT", + "requestUri" : "/v1/apps/{application-id}/channels/adm", + "responseCode" : 200 + }, + "input" : { + "shape" : "UpdateAdmChannelRequest" + }, + "output" : { + "shape" : "UpdateAdmChannelResponse" + }, + "errors" : [ { + "shape" : "BadRequestException" + }, { + "shape" : "InternalServerErrorException" + }, { + "shape" : "ForbiddenException" + }, { + "shape" : "NotFoundException" + }, { + "shape" : "MethodNotAllowedException" + }, { + "shape" : "TooManyRequestsException" + } ], + "documentation" : "Update an ADM channel" + }, + "UpdateApnsChannel" : { + "name" : "UpdateApnsChannel", + "http" : { + "method" : "PUT", + "requestUri" : "/v1/apps/{application-id}/channels/apns", + "responseCode" : 200 + }, + "input" : { + "shape" : "UpdateApnsChannelRequest" + }, + "output" : { + "shape" : "UpdateApnsChannelResponse" + }, + "errors" : [ { + "shape" : "BadRequestException" + }, { + "shape" : "InternalServerErrorException" + }, { + "shape" : "ForbiddenException" + }, { + "shape" : "NotFoundException" + }, { + "shape" : "MethodNotAllowedException" + }, { + "shape" : "TooManyRequestsException" + } ], + "documentation" : "Use to update the APNs channel for an app." 
+ }, + "UpdateApnsSandboxChannel" : { + "name" : "UpdateApnsSandboxChannel", + "http" : { + "method" : "PUT", + "requestUri" : "/v1/apps/{application-id}/channels/apns_sandbox", + "responseCode" : 200 + }, + "input" : { + "shape" : "UpdateApnsSandboxChannelRequest" + }, + "output" : { + "shape" : "UpdateApnsSandboxChannelResponse" + }, + "errors" : [ { + "shape" : "BadRequestException" + }, { + "shape" : "InternalServerErrorException" + }, { + "shape" : "ForbiddenException" + }, { + "shape" : "NotFoundException" + }, { + "shape" : "MethodNotAllowedException" + }, { + "shape" : "TooManyRequestsException" + } ], + "documentation" : "Update an APNS sandbox channel" + }, + "UpdateApnsVoipChannel" : { + "name" : "UpdateApnsVoipChannel", + "http" : { + "method" : "PUT", + "requestUri" : "/v1/apps/{application-id}/channels/apns_voip", + "responseCode" : 200 + }, + "input" : { + "shape" : "UpdateApnsVoipChannelRequest" + }, + "output" : { + "shape" : "UpdateApnsVoipChannelResponse" + }, + "errors" : [ { + "shape" : "BadRequestException" + }, { + "shape" : "InternalServerErrorException" + }, { + "shape" : "ForbiddenException" + }, { + "shape" : "NotFoundException" + }, { + "shape" : "MethodNotAllowedException" + }, { + "shape" : "TooManyRequestsException" + } ], + "documentation" : "Update an APNS VoIP channel" + }, + "UpdateApnsVoipSandboxChannel" : { + "name" : "UpdateApnsVoipSandboxChannel", + "http" : { + "method" : "PUT", + "requestUri" : "/v1/apps/{application-id}/channels/apns_voip_sandbox", + "responseCode" : 200 + }, + "input" : { + "shape" : "UpdateApnsVoipSandboxChannelRequest" + }, + "output" : { + "shape" : "UpdateApnsVoipSandboxChannelResponse" + }, + "errors" : [ { + "shape" : "BadRequestException" + }, { + "shape" : "InternalServerErrorException" + }, { + "shape" : "ForbiddenException" + }, { + "shape" : "NotFoundException" + }, { + "shape" : "MethodNotAllowedException" + }, { + "shape" : "TooManyRequestsException" + } ], + "documentation" : "Update an APNS VoIP sandbox channel" + }, + "UpdateApplicationSettings" : { + "name" : "UpdateApplicationSettings", + "http" : { + "method" : "PUT", + "requestUri" : "/v1/apps/{application-id}/settings", + "responseCode" : 200 + }, + "input" : { + "shape" : "UpdateApplicationSettingsRequest" + }, + "output" : { + "shape" : "UpdateApplicationSettingsResponse" + }, + "errors" : [ { + "shape" : "BadRequestException" + }, { + "shape" : "InternalServerErrorException" + }, { + "shape" : "ForbiddenException" + }, { + "shape" : "NotFoundException" + }, { + "shape" : "MethodNotAllowedException" + }, { + "shape" : "TooManyRequestsException" + } ], + "documentation" : "Used to update the settings for an app." 
+ }, + "UpdateBaiduChannel" : { + "name" : "UpdateBaiduChannel", + "http" : { + "method" : "PUT", + "requestUri" : "/v1/apps/{application-id}/channels/baidu", + "responseCode" : 200 + }, + "input" : { + "shape" : "UpdateBaiduChannelRequest" + }, + "output" : { + "shape" : "UpdateBaiduChannelResponse" + }, + "errors" : [ { + "shape" : "BadRequestException" + }, { + "shape" : "InternalServerErrorException" + }, { + "shape" : "ForbiddenException" + }, { + "shape" : "NotFoundException" + }, { + "shape" : "MethodNotAllowedException" + }, { + "shape" : "TooManyRequestsException" + } ], + "documentation" : "Update a BAIDU GCM channel" + }, + "UpdateCampaign" : { + "name" : "UpdateCampaign", + "http" : { + "method" : "PUT", + "requestUri" : "/v1/apps/{application-id}/campaigns/{campaign-id}", + "responseCode" : 200 + }, + "input" : { + "shape" : "UpdateCampaignRequest" + }, + "output" : { + "shape" : "UpdateCampaignResponse" + }, + "errors" : [ { + "shape" : "BadRequestException" + }, { + "shape" : "InternalServerErrorException" + }, { + "shape" : "ForbiddenException" + }, { + "shape" : "NotFoundException" + }, { + "shape" : "MethodNotAllowedException" + }, { + "shape" : "TooManyRequestsException" + } ], + "documentation" : "Use to update a campaign." + }, + "UpdateEmailChannel" : { + "name" : "UpdateEmailChannel", + "http" : { + "method" : "PUT", + "requestUri" : "/v1/apps/{application-id}/channels/email", + "responseCode" : 200 + }, + "input" : { + "shape" : "UpdateEmailChannelRequest" + }, + "output" : { + "shape" : "UpdateEmailChannelResponse" + }, + "errors" : [ { + "shape" : "BadRequestException" + }, { + "shape" : "InternalServerErrorException" + }, { + "shape" : "ForbiddenException" + }, { + "shape" : "NotFoundException" + }, { + "shape" : "MethodNotAllowedException" + }, { + "shape" : "TooManyRequestsException" + } ], + "documentation" : "Update an email channel" + }, + "UpdateEndpoint" : { + "name" : "UpdateEndpoint", + "http" : { + "method" : "PUT", + "requestUri" : "/v1/apps/{application-id}/endpoints/{endpoint-id}", + "responseCode" : 202 + }, + "input" : { + "shape" : "UpdateEndpointRequest" + }, + "output" : { + "shape" : "UpdateEndpointResponse" + }, + "errors" : [ { + "shape" : "BadRequestException" + }, { + "shape" : "InternalServerErrorException" + }, { + "shape" : "ForbiddenException" + }, { + "shape" : "NotFoundException" + }, { + "shape" : "MethodNotAllowedException" + }, { + "shape" : "TooManyRequestsException" + } ], + "documentation" : "Use to update an endpoint." + }, + "UpdateEndpointsBatch" : { + "name" : "UpdateEndpointsBatch", + "http" : { + "method" : "PUT", + "requestUri" : "/v1/apps/{application-id}/endpoints", + "responseCode" : 202 + }, + "input" : { + "shape" : "UpdateEndpointsBatchRequest" + }, + "output" : { + "shape" : "UpdateEndpointsBatchResponse" + }, + "errors" : [ { + "shape" : "BadRequestException" + }, { + "shape" : "InternalServerErrorException" + }, { + "shape" : "ForbiddenException" + }, { + "shape" : "NotFoundException" + }, { + "shape" : "MethodNotAllowedException" + }, { + "shape" : "TooManyRequestsException" + } ], + "documentation" : "Use to update a batch of endpoints." 
+ }, + "UpdateGcmChannel" : { + "name" : "UpdateGcmChannel", + "http" : { + "method" : "PUT", + "requestUri" : "/v1/apps/{application-id}/channels/gcm", + "responseCode" : 200 + }, + "input" : { + "shape" : "UpdateGcmChannelRequest" + }, + "output" : { + "shape" : "UpdateGcmChannelResponse" + }, + "errors" : [ { + "shape" : "BadRequestException" + }, { + "shape" : "InternalServerErrorException" + }, { + "shape" : "ForbiddenException" + }, { + "shape" : "NotFoundException" + }, { + "shape" : "MethodNotAllowedException" + }, { + "shape" : "TooManyRequestsException" + } ], + "documentation" : "Use to update the GCM channel for an app." + }, + "UpdateSegment" : { + "name" : "UpdateSegment", + "http" : { + "method" : "PUT", + "requestUri" : "/v1/apps/{application-id}/segments/{segment-id}", + "responseCode" : 200 + }, + "input" : { + "shape" : "UpdateSegmentRequest" + }, + "output" : { + "shape" : "UpdateSegmentResponse" + }, + "errors" : [ { + "shape" : "BadRequestException" + }, { + "shape" : "InternalServerErrorException" + }, { + "shape" : "ForbiddenException" + }, { + "shape" : "NotFoundException" + }, { + "shape" : "MethodNotAllowedException" + }, { + "shape" : "TooManyRequestsException" + } ], + "documentation" : "Use to update a segment." + }, + "UpdateSmsChannel" : { + "name" : "UpdateSmsChannel", + "http" : { + "method" : "PUT", + "requestUri" : "/v1/apps/{application-id}/channels/sms", + "responseCode" : 200 + }, + "input" : { + "shape" : "UpdateSmsChannelRequest" + }, + "output" : { + "shape" : "UpdateSmsChannelResponse" + }, + "errors" : [ { + "shape" : "BadRequestException" + }, { + "shape" : "InternalServerErrorException" + }, { + "shape" : "ForbiddenException" + }, { + "shape" : "NotFoundException" + }, { + "shape" : "MethodNotAllowedException" + }, { + "shape" : "TooManyRequestsException" + } ], + "documentation" : "Update an SMS channel" + } + }, + "shapes" : { + "ADMChannelRequest" : { + "type" : "structure", + "members" : { + "ClientId" : { + "shape" : "__string", + "documentation" : "Client ID as gotten from Amazon" + }, + "ClientSecret" : { + "shape" : "__string", + "documentation" : "Client secret as gotten from Amazon" + }, + "Enabled" : { + "shape" : "__boolean", + "documentation" : "If the channel is enabled for sending messages." + } + }, + "documentation" : "Amazon Device Messaging channel definition." + }, + "ADMChannelResponse" : { + "type" : "structure", + "members" : { + "ApplicationId" : { + "shape" : "__string", + "documentation" : "Application id" + }, + "CreationDate" : { + "shape" : "__string", + "documentation" : "When was this segment created" + }, + "Enabled" : { + "shape" : "__boolean", + "documentation" : "If the channel is enabled for sending messages." + }, + "HasCredential" : { + "shape" : "__boolean", + "documentation" : "If the channel is registered with a credential for authentication." + }, + "Id" : { + "shape" : "__string", + "documentation" : "Channel ID. Not used, only for backwards compatibility." + }, + "IsArchived" : { + "shape" : "__boolean", + "documentation" : "Is this channel archived" + }, + "LastModifiedBy" : { + "shape" : "__string", + "documentation" : "Who last updated this entry" + }, + "LastModifiedDate" : { + "shape" : "__string", + "documentation" : "Last date this was updated" + }, + "Platform" : { + "shape" : "__string", + "documentation" : "Platform type. 
Will be \"ADM\"" + }, + "Version" : { + "shape" : "__integer", + "documentation" : "Version of channel" + } + }, + "documentation" : "Amazon Device Messaging channel definition." + }, + "ADMMessage" : { + "type" : "structure", + "members" : { + "Action" : { + "shape" : "Action", + "documentation" : "The action that occurs if the user taps a push notification delivered by the campaign: OPEN_APP - Your app launches, or it becomes the foreground app if it has been sent to the background. This is the default action. DEEP_LINK - Uses deep linking features in iOS and Android to open your app and display a designated user interface within the app. URL - The default mobile browser on the user's device launches and opens a web page at the URL you specify. Possible values include: OPEN_APP | DEEP_LINK | URL" + }, + "Body" : { + "shape" : "__string", + "documentation" : "The message body of the notification, the email body or the text message." + }, + "ConsolidationKey" : { + "shape" : "__string", + "documentation" : "Optional. Arbitrary string used to indicate multiple messages are logically the same and that ADM is allowed to drop previously enqueued messages in favor of this one." + }, + "Data" : { + "shape" : "MapOf__string", + "documentation" : "The data payload used for a silent push. This payload is added to the notifications' data.pinpoint.jsonBody' object" + }, + "ExpiresAfter" : { + "shape" : "__string", + "documentation" : "Optional. Number of seconds ADM should retain the message if the device is offline" + }, + "IconReference" : { + "shape" : "__string", + "documentation" : "The icon image name of the asset saved in your application." + }, + "ImageIconUrl" : { + "shape" : "__string", + "documentation" : "The URL that points to an image used as the large icon to the notification content view." + }, + "ImageUrl" : { + "shape" : "__string", + "documentation" : "The URL that points to an image used in the push notification." + }, + "MD5" : { + "shape" : "__string", + "documentation" : "Optional. Base-64-encoded MD5 checksum of the data parameter. Used to verify data integrity" + }, + "RawContent" : { + "shape" : "__string", + "documentation" : "The Raw JSON formatted string to be used as the payload. This value overrides the message." + }, + "SilentPush" : { + "shape" : "__boolean", + "documentation" : "Indicates if the message should display on the users device. Silent pushes can be used for Remote Configuration and Phone Home use cases." + }, + "SmallImageIconUrl" : { + "shape" : "__string", + "documentation" : "The URL that points to an image used as the small icon for the notification which will be used to represent the notification in the status bar and content view" + }, + "Sound" : { + "shape" : "__string", + "documentation" : "Indicates a sound to play when the device receives the notification. Supports default, or the filename of a sound resource bundled in the app. Android sound files must reside in /res/raw/" + }, + "Substitutions" : { + "shape" : "MapOfListOf__string", + "documentation" : "Default message substitutions. Can be overridden by individual address substitutions." + }, + "Title" : { + "shape" : "__string", + "documentation" : "The message title that displays above the message on the user's device." + }, + "Url" : { + "shape" : "__string", + "documentation" : "The URL to open in the user's mobile browser. Used if the value for Action is URL." + } + }, + "documentation" : "ADM Message." 
+ }, + "APNSChannelRequest" : { + "type" : "structure", + "members" : { + "BundleId" : { + "shape" : "__string", + "documentation" : "The bundle id used for APNs Tokens." + }, + "Certificate" : { + "shape" : "__string", + "documentation" : "The distribution certificate from Apple." + }, + "DefaultAuthenticationMethod" : { + "shape" : "__string", + "documentation" : "The default authentication method used for APNs." + }, + "Enabled" : { + "shape" : "__boolean", + "documentation" : "If the channel is enabled for sending messages." + }, + "PrivateKey" : { + "shape" : "__string", + "documentation" : "The certificate private key." + }, + "TeamId" : { + "shape" : "__string", + "documentation" : "The team id used for APNs Tokens." + }, + "TokenKey" : { + "shape" : "__string", + "documentation" : "The token key used for APNs Tokens." + }, + "TokenKeyId" : { + "shape" : "__string", + "documentation" : "The token key used for APNs Tokens." + } + }, + "documentation" : "Apple Push Notification Service channel definition." + }, + "APNSChannelResponse" : { + "type" : "structure", + "members" : { + "ApplicationId" : { + "shape" : "__string", + "documentation" : "The ID of the application to which the channel applies." + }, + "CreationDate" : { + "shape" : "__string", + "documentation" : "When was this segment created" + }, + "DefaultAuthenticationMethod" : { + "shape" : "__string", + "documentation" : "The default authentication method used for APNs." + }, + "Enabled" : { + "shape" : "__boolean", + "documentation" : "If the channel is enabled for sending messages." + }, + "HasCredential" : { + "shape" : "__boolean", + "documentation" : "If the channel is registered with a credential for authentication." + }, + "HasTokenKey" : { + "shape" : "__boolean", + "documentation" : "If the channel is registered with a token key for authentication." + }, + "Id" : { + "shape" : "__string", + "documentation" : "Channel ID. Not used. Present only for backwards compatibility." + }, + "IsArchived" : { + "shape" : "__boolean", + "documentation" : "Is this channel archived" + }, + "LastModifiedBy" : { + "shape" : "__string", + "documentation" : "Who last updated this entry" + }, + "LastModifiedDate" : { + "shape" : "__string", + "documentation" : "Last date this was updated" + }, + "Platform" : { + "shape" : "__string", + "documentation" : "The platform type. Will be APNS." + }, + "Version" : { + "shape" : "__integer", + "documentation" : "Version of channel" + } + }, + "documentation" : "Apple Distribution Push Notification Service channel definition." + }, + "APNSMessage" : { + "type" : "structure", + "members" : { + "Action" : { + "shape" : "Action", + "documentation" : "The action that occurs if the user taps a push notification delivered by the campaign: OPEN_APP - Your app launches, or it becomes the foreground app if it has been sent to the background. This is the default action. DEEP_LINK - Uses deep linking features in iOS and Android to open your app and display a designated user interface within the app. URL - The default mobile browser on the user's device launches and opens a web page at the URL you specify. Possible values include: OPEN_APP | DEEP_LINK | URL" + }, + "Badge" : { + "shape" : "__integer", + "documentation" : "Include this key when you want the system to modify the badge of your app icon. If this key is not included in the dictionary, the badge is not changed. To remove the badge, set the value of this key to 0." 
+ }, + "Body" : { + "shape" : "__string", + "documentation" : "The message body of the notification, the email body or the text message." + }, + "Category" : { + "shape" : "__string", + "documentation" : "Provide this key with a string value that represents the notification's type. This value corresponds to the value in the identifier property of one of your app's registered categories." + }, + "CollapseId" : { + "shape" : "__string", + "documentation" : "Multiple notifications with the same collapse identifier are displayed to the user as a single notification. The value of this key must not exceed 64 bytes." + }, + "Data" : { + "shape" : "MapOf__string", + "documentation" : "The data payload used for a silent push. This payload is added to the notifications' data.pinpoint.jsonBody' object" + }, + "MediaUrl" : { + "shape" : "__string", + "documentation" : "The URL that points to a video used in the push notification." + }, + "PreferredAuthenticationMethod" : { + "shape" : "__string", + "documentation" : "The preferred authentication method, either \"CERTIFICATE\" or \"TOKEN\"" + }, + "Priority" : { + "shape" : "__string", + "documentation" : "Is this a transaction priority message or lower priority." + }, + "RawContent" : { + "shape" : "__string", + "documentation" : "The Raw JSON formatted string to be used as the payload. This value overrides the message." + }, + "SilentPush" : { + "shape" : "__boolean", + "documentation" : "Indicates if the message should display on the users device. Silent pushes can be used for Remote Configuration and Phone Home use cases." + }, + "Sound" : { + "shape" : "__string", + "documentation" : "Include this key when you want the system to play a sound. The value of this key is the name of a sound file in your app's main bundle or in the Library/Sounds folder of your app's data container. If the sound file cannot be found, or if you specify defaultfor the value, the system plays the default alert sound." + }, + "Substitutions" : { + "shape" : "MapOfListOf__string", + "documentation" : "Default message substitutions. Can be overridden by individual address substitutions." + }, + "ThreadId" : { + "shape" : "__string", + "documentation" : "Provide this key with a string value that represents the app-specific identifier for grouping notifications. If you provide a Notification Content app extension, you can use this value to group your notifications together." + }, + "TimeToLive" : { + "shape" : "__integer", + "documentation" : "This parameter specifies how long (in seconds) the message should be kept if APNS is unable to deliver the notification the first time. If the value is 0, APNS treats the notification as if it expires immediately and does not store the notification or attempt to redeliver it. This value is converted to the expiration field when sent to APNS" + }, + "Title" : { + "shape" : "__string", + "documentation" : "The message title that displays above the message on the user's device." + }, + "Url" : { + "shape" : "__string", + "documentation" : "The URL to open in the user's mobile browser. Used if the value for Action is URL." + } + }, + "documentation" : "APNS Message." + }, + "APNSSandboxChannelRequest" : { + "type" : "structure", + "members" : { + "BundleId" : { + "shape" : "__string", + "documentation" : "The bundle id used for APNs Tokens." + }, + "Certificate" : { + "shape" : "__string", + "documentation" : "The distribution certificate from Apple." 
+ }, + "DefaultAuthenticationMethod" : { + "shape" : "__string", + "documentation" : "The default authentication method used for APNs." + }, + "Enabled" : { + "shape" : "__boolean", + "documentation" : "If the channel is enabled for sending messages." + }, + "PrivateKey" : { + "shape" : "__string", + "documentation" : "The certificate private key." + }, + "TeamId" : { + "shape" : "__string", + "documentation" : "The team id used for APNs Tokens." + }, + "TokenKey" : { + "shape" : "__string", + "documentation" : "The token key used for APNs Tokens." + }, + "TokenKeyId" : { + "shape" : "__string", + "documentation" : "The token key used for APNs Tokens." + } + }, + "documentation" : "Apple Development Push Notification Service channel definition." + }, + "APNSSandboxChannelResponse" : { + "type" : "structure", + "members" : { + "ApplicationId" : { + "shape" : "__string", + "documentation" : "Application id" + }, + "CreationDate" : { + "shape" : "__string", + "documentation" : "When was this segment created" + }, + "DefaultAuthenticationMethod" : { + "shape" : "__string", + "documentation" : "The default authentication method used for APNs." + }, + "Enabled" : { + "shape" : "__boolean", + "documentation" : "If the channel is enabled for sending messages." + }, + "HasCredential" : { + "shape" : "__boolean", + "documentation" : "If the channel is registered with a credential for authentication." + }, + "HasTokenKey" : { + "shape" : "__boolean", + "documentation" : "If the channel is registered with a token key for authentication." + }, + "Id" : { + "shape" : "__string", + "documentation" : "Channel ID. Not used, only for backwards compatibility." }, "IsArchived" : { "shape" : "__boolean", "documentation" : "Is this channel archived" }, - "LastModifiedBy" : { + "LastModifiedBy" : { + "shape" : "__string", + "documentation" : "Who last updated this entry" + }, + "LastModifiedDate" : { + "shape" : "__string", + "documentation" : "Last date this was updated" + }, + "Platform" : { + "shape" : "__string", + "documentation" : "The platform type. Will be APNS_SANDBOX." + }, + "Version" : { + "shape" : "__integer", + "documentation" : "Version of channel" + } + }, + "documentation" : "Apple Development Push Notification Service channel definition." + }, + "APNSVoipChannelRequest" : { + "type" : "structure", + "members" : { + "BundleId" : { + "shape" : "__string", + "documentation" : "The bundle id used for APNs Tokens." + }, + "Certificate" : { + "shape" : "__string", + "documentation" : "The distribution certificate from Apple." + }, + "DefaultAuthenticationMethod" : { + "shape" : "__string", + "documentation" : "The default authentication method used for APNs." + }, + "Enabled" : { + "shape" : "__boolean", + "documentation" : "If the channel is enabled for sending messages." + }, + "PrivateKey" : { "shape" : "__string", - "documentation" : "Who last updated this entry" + "documentation" : "The certificate private key." }, - "LastModifiedDate" : { + "TeamId" : { "shape" : "__string", - "documentation" : "Last date this was updated" + "documentation" : "The team id used for APNs Tokens." }, - "Platform" : { + "TokenKey" : { "shape" : "__string", - "documentation" : "The platform type. Will be APNS." + "documentation" : "The token key used for APNs Tokens." }, - "Version" : { - "shape" : "__integer", - "documentation" : "Version of channel" + "TokenKeyId" : { + "shape" : "__string", + "documentation" : "The token key used for APNs Tokens." 
} }, - "documentation" : "Apple Distribution Push Notification Service channel definition." + "documentation" : "Apple VoIP Push Notification Service channel definition." }, - "APNSMessage" : { + "APNSVoipChannelResponse" : { "type" : "structure", "members" : { - "Action" : { - "shape" : "Action", - "documentation" : "The action that occurs if the user taps a push notification delivered by the campaign: OPEN_APP - Your app launches, or it becomes the foreground app if it has been sent to the background. This is the default action. DEEP_LINK - Uses deep linking features in iOS and Android to open your app and display a designated user interface within the app. URL - The default mobile browser on the user's device launches and opens a web page at the URL you specify. Possible values include: OPEN_APP | DEEP_LINK | URL" - }, - "Badge" : { - "shape" : "__integer", - "documentation" : "Include this key when you want the system to modify the badge of your app icon. If this key is not included in the dictionary, the badge is not changed. To remove the badge, set the value of this key to 0." - }, - "Body" : { + "ApplicationId" : { "shape" : "__string", - "documentation" : "The message body of the notification, the email body or the text message." + "documentation" : "Application id" }, - "Category" : { + "CreationDate" : { "shape" : "__string", - "documentation" : "Provide this key with a string value that represents the notification's type. This value corresponds to the value in the identifier property of one of your app's registered categories." - }, - "Data" : { - "shape" : "MapOf__string", - "documentation" : "The data payload used for a silent push. This payload is added to the notifications' data.pinpoint.jsonBody' object" + "documentation" : "When was this segment created" }, - "MediaUrl" : { + "DefaultAuthenticationMethod" : { "shape" : "__string", - "documentation" : "The URL that points to a video used in the push notification." + "documentation" : "The default authentication method used for APNs." }, - "RawContent" : { - "shape" : "__string", - "documentation" : "The Raw JSON formatted string to be used as the payload. This value overrides the message." + "Enabled" : { + "shape" : "__boolean", + "documentation" : "If the channel is enabled for sending messages." }, - "SilentPush" : { + "HasCredential" : { "shape" : "__boolean", - "documentation" : "Indicates if the message should display on the users device. Silent pushes can be used for Remote Configuration and Phone Home use cases." + "documentation" : "If the channel is registered with a credential for authentication." }, - "Sound" : { + "HasTokenKey" : { + "shape" : "__boolean", + "documentation" : "If the channel is registered with a token key for authentication." + }, + "Id" : { "shape" : "__string", - "documentation" : "Include this key when you want the system to play a sound. The value of this key is the name of a sound file in your app's main bundle or in the Library/Sounds folder of your app's data container. If the sound file cannot be found, or if you specify defaultfor the value, the system plays the default alert sound." + "documentation" : "Channel ID. Not used, only for backwards compatibility." }, - "Substitutions" : { - "shape" : "MapOfListOf__string", - "documentation" : "Default message substitutions. Can be overridden by individual address substitutions." 
+ "IsArchived" : { + "shape" : "__boolean", + "documentation" : "Is this channel archived" }, - "ThreadId" : { + "LastModifiedBy" : { "shape" : "__string", - "documentation" : "Provide this key with a string value that represents the app-specific identifier for grouping notifications. If you provide a Notification Content app extension, you can use this value to group your notifications together." + "documentation" : "Who made the last change" }, - "Title" : { + "LastModifiedDate" : { "shape" : "__string", - "documentation" : "The message title that displays above the message on the user's device." + "documentation" : "Last date this was updated" }, - "Url" : { + "Platform" : { "shape" : "__string", - "documentation" : "The URL to open in the user's mobile browser. Used if the value for Action is URL." + "documentation" : "The platform type. Will be APNS." + }, + "Version" : { + "shape" : "__integer", + "documentation" : "Version of channel" } }, - "documentation" : "APNS Message." + "documentation" : "Apple VoIP Push Notification Service channel definition." }, - "APNSSandboxChannelRequest" : { + "APNSVoipSandboxChannelRequest" : { "type" : "structure", "members" : { + "BundleId" : { + "shape" : "__string", + "documentation" : "The bundle id used for APNs Tokens." + }, "Certificate" : { "shape" : "__string", "documentation" : "The distribution certificate from Apple." }, + "DefaultAuthenticationMethod" : { + "shape" : "__string", + "documentation" : "The default authentication method used for APNs." + }, "Enabled" : { "shape" : "__boolean", "documentation" : "If the channel is enabled for sending messages." @@ -1347,11 +2198,23 @@ "PrivateKey" : { "shape" : "__string", "documentation" : "The certificate private key." + }, + "TeamId" : { + "shape" : "__string", + "documentation" : "The team id used for APNs Tokens." + }, + "TokenKey" : { + "shape" : "__string", + "documentation" : "The token key used for APNs Tokens." + }, + "TokenKeyId" : { + "shape" : "__string", + "documentation" : "The token key used for APNs Tokens." } }, - "documentation" : "Apple Development Push Notification Service channel definition." + "documentation" : "Apple VoIP Developer Push Notification Service channel definition." }, - "APNSSandboxChannelResponse" : { + "APNSVoipSandboxChannelResponse" : { "type" : "structure", "members" : { "ApplicationId" : { @@ -1362,10 +2225,22 @@ "shape" : "__string", "documentation" : "When was this segment created" }, + "DefaultAuthenticationMethod" : { + "shape" : "__string", + "documentation" : "The default authentication method used for APNs." + }, "Enabled" : { "shape" : "__boolean", "documentation" : "If the channel is enabled for sending messages." }, + "HasCredential" : { + "shape" : "__boolean", + "documentation" : "If the channel is registered with a credential for authentication." + }, + "HasTokenKey" : { + "shape" : "__boolean", + "documentation" : "If the channel is registered with a token key for authentication." + }, "Id" : { "shape" : "__string", "documentation" : "Channel ID. Not used, only for backwards compatibility." @@ -1376,7 +2251,7 @@ }, "LastModifiedBy" : { "shape" : "__string", - "documentation" : "Who last updated this entry" + "documentation" : "Who made the last change" }, "LastModifiedDate" : { "shape" : "__string", @@ -1391,7 +2266,7 @@ "documentation" : "Version of channel" } }, - "documentation" : "Apple Development Push Notification Service channel definition." + "documentation" : "Apple VoIP Developer Push Notification Service channel definition." 
}, "Action" : { "type" : "string", @@ -1472,86 +2347,240 @@ "shape" : "__string", "documentation" : "Body override. If specified will override default body." }, - "ChannelType" : { - "shape" : "ChannelType", - "documentation" : "Type of channel of this address" + "ChannelType" : { + "shape" : "ChannelType", + "documentation" : "The channel type.\n\nValid values: GCM | APNS | SMS | EMAIL" + }, + "Context" : { + "shape" : "MapOf__string", + "documentation" : "A map of custom attributes to attributes to be attached to the message for this address. This payload is added to the push notification's 'data.pinpoint' object or added to the email/sms delivery receipt event attributes." + }, + "RawContent" : { + "shape" : "__string", + "documentation" : "The Raw JSON formatted string to be used as the payload. This value overrides the message." + }, + "Substitutions" : { + "shape" : "MapOfListOf__string", + "documentation" : "A map of substitution values for the message to be merged with the DefaultMessage's substitutions. Substitutions on this map take precedence over the all other substitutions." + }, + "TitleOverride" : { + "shape" : "__string", + "documentation" : "Title override. If specified will override default title if applicable." + } + }, + "documentation" : "Address configuration." + }, + "ApplicationResponse" : { + "type" : "structure", + "members" : { + "Id" : { + "shape" : "__string", + "documentation" : "The unique application ID." + }, + "Name" : { + "shape" : "__string", + "documentation" : "The display name of the application." + } + }, + "documentation" : "Application Response." + }, + "ApplicationSettingsResource" : { + "type" : "structure", + "members" : { + "ApplicationId" : { + "shape" : "__string", + "documentation" : "The unique ID for the application." + }, + "LastModifiedDate" : { + "shape" : "__string", + "documentation" : "The date that the settings were last updated in ISO 8601 format." + }, + "Limits" : { + "shape" : "CampaignLimits", + "documentation" : "The default campaign limits for the app. These limits apply to each campaign for the app, unless the campaign overrides the default with limits of its own." + }, + "QuietTime" : { + "shape" : "QuietTime", + "documentation" : "The default quiet time for the app. Each campaign for this app sends no messages during this time unless the campaign overrides the default with a quiet time of its own." + } + }, + "documentation" : "Application settings." + }, + "ApplicationsResponse" : { + "type" : "structure", + "members" : { + "Item" : { + "shape" : "ListOfApplicationResponse", + "documentation" : "List of applications returned in this page." + }, + "NextToken" : { + "shape" : "__string", + "documentation" : "The string that you use in a subsequent request to get the next page of results in a paginated response." + } + }, + "documentation" : "Get Applications Result." + }, + "AttributeDimension" : { + "type" : "structure", + "members" : { + "AttributeType" : { + "shape" : "AttributeType", + "documentation" : "The type of dimension:\nINCLUSIVE - Endpoints that match the criteria are included in the segment.\nEXCLUSIVE - Endpoints that match the criteria are excluded from the segment." + }, + "Values" : { + "shape" : "ListOf__string", + "documentation" : "The criteria values for the segment dimension. Endpoints with matching attribute values are included or excluded from the segment, depending on the setting for Type." 
+ } + }, + "documentation" : "Custom attibute dimension" + }, + "AttributeType" : { + "type" : "string", + "enum" : [ "INCLUSIVE", "EXCLUSIVE" ] + }, + "BadRequestException" : { + "type" : "structure", + "members" : { + "Message" : { + "shape" : "__string", + "documentation" : "The error message returned from the API." + }, + "RequestID" : { + "shape" : "__string", + "documentation" : "The unique message body ID." + } + }, + "documentation" : "Simple message object.", + "exception" : true, + "error" : { + "httpStatusCode" : 400 + } + }, + "BaiduChannelRequest" : { + "type" : "structure", + "members" : { + "ApiKey" : { + "shape" : "__string", + "documentation" : "Platform credential API key from Baidu." + }, + "Enabled" : { + "shape" : "__boolean", + "documentation" : "If the channel is enabled for sending messages." + }, + "SecretKey" : { + "shape" : "__string", + "documentation" : "Platform credential Secret key from Baidu." + } + }, + "documentation" : "Baidu Cloud Push credentials" + }, + "BaiduChannelResponse" : { + "type" : "structure", + "members" : { + "ApplicationId" : { + "shape" : "__string", + "documentation" : "Application id" + }, + "CreationDate" : { + "shape" : "__string", + "documentation" : "When was this segment created" + }, + "Credential" : { + "shape" : "__string", + "documentation" : "The Baidu API key from Baidu." + }, + "Enabled" : { + "shape" : "__boolean", + "documentation" : "If the channel is enabled for sending messages." + }, + "HasCredential" : { + "shape" : "__boolean", + "documentation" : "If the channel is registered with a credential for authentication." + }, + "Id" : { + "shape" : "__string", + "documentation" : "Channel ID. Not used, only for backwards compatibility." + }, + "IsArchived" : { + "shape" : "__boolean", + "documentation" : "Is this channel archived" + }, + "LastModifiedBy" : { + "shape" : "__string", + "documentation" : "Who made the last change" + }, + "LastModifiedDate" : { + "shape" : "__string", + "documentation" : "Last date this was updated" + }, + "Platform" : { + "shape" : "__string", + "documentation" : "The platform type. Will be BAIDU" + }, + "Version" : { + "shape" : "__integer", + "documentation" : "Version of channel" + } + }, + "documentation" : "Baidu Cloud Messaging channel definition" + }, + "BaiduMessage" : { + "type" : "structure", + "members" : { + "Action" : { + "shape" : "Action", + "documentation" : "The action that occurs if the user taps a push notification delivered by the campaign: OPEN_APP - Your app launches, or it becomes the foreground app if it has been sent to the background. This is the default action. DEEP_LINK - Uses deep linking features in iOS and Android to open your app and display a designated user interface within the app. URL - The default mobile browser on the user's device launches and opens a web page at the URL you specify. Possible values include: OPEN_APP | DEEP_LINK | URL" + }, + "Body" : { + "shape" : "__string", + "documentation" : "The message body of the notification, the email body or the text message." }, - "Context" : { + "Data" : { "shape" : "MapOf__string", - "documentation" : "A map of custom attributes to attributes to be attached to the message for this address. This payload is added to the push notification's 'data.pinpoint' object or added to the email/sms delivery receipt event attributes." + "documentation" : "The data payload used for a silent push. 
This payload is added to the notifications' data.pinpoint.jsonBody' object" }, - "RawContent" : { + "IconReference" : { "shape" : "__string", - "documentation" : "The Raw JSON formatted string to be used as the payload. This value overrides the message." + "documentation" : "The icon image name of the asset saved in your application." }, - "Substitutions" : { - "shape" : "MapOfListOf__string", - "documentation" : "A map of substitution values for the message to be merged with the DefaultMessage's substitutions. Substitutions on this map take precedence over the all other substitutions." + "ImageIconUrl" : { + "shape" : "__string", + "documentation" : "The URL that points to an image used as the large icon to the notification content view." }, - "TitleOverride" : { + "ImageUrl" : { "shape" : "__string", - "documentation" : "Title override. If specified will override default title if applicable." - } - }, - "documentation" : "Address configuration." - }, - "ApplicationSettingsResource" : { - "type" : "structure", - "members" : { - "ApplicationId" : { + "documentation" : "The URL that points to an image used in the push notification." + }, + "RawContent" : { "shape" : "__string", - "documentation" : "The unique ID for the application." + "documentation" : "The Raw JSON formatted string to be used as the payload. This value overrides the message." }, - "LastModifiedDate" : { + "SilentPush" : { + "shape" : "__boolean", + "documentation" : "Indicates if the message should display on the users device. Silent pushes can be used for Remote Configuration and Phone Home use cases." + }, + "SmallImageIconUrl" : { "shape" : "__string", - "documentation" : "The date that the settings were last updated in ISO 8601 format." + "documentation" : "The URL that points to an image used as the small icon for the notification which will be used to represent the notification in the status bar and content view" }, - "Limits" : { - "shape" : "CampaignLimits", - "documentation" : "The default campaign limits for the app. These limits apply to each campaign for the app, unless the campaign overrides the default with limits of its own." + "Sound" : { + "shape" : "__string", + "documentation" : "Indicates a sound to play when the device receives the notification. Supports default, or the filename of a sound resource bundled in the app. Android sound files must reside in /res/raw/" }, - "QuietTime" : { - "shape" : "QuietTime", - "documentation" : "The default quiet time for the app. Each campaign for this app sends no messages during this time unless the campaign overrides the default with a quiet time of its own." - } - }, - "documentation" : "Application settings." - }, - "AttributeDimension" : { - "type" : "structure", - "members" : { - "AttributeType" : { - "shape" : "AttributeType", - "documentation" : "The type of dimension:\nINCLUSIVE - Endpoints that match the criteria are included in the segment.\nEXCLUSIVE - Endpoints that match the criteria are excluded from the segment." + "Substitutions" : { + "shape" : "MapOfListOf__string", + "documentation" : "Default message substitutions. Can be overridden by individual address substitutions." }, - "Values" : { - "shape" : "ListOf__string", - "documentation" : "The criteria values for the segment dimension. Endpoints with matching attribute values are included or excluded from the segment, depending on the setting for Type." 
- } - }, - "documentation" : "Custom attibute dimension" - }, - "AttributeType" : { - "type" : "string", - "enum" : [ "INCLUSIVE", "EXCLUSIVE" ] - }, - "BadRequestException" : { - "type" : "structure", - "members" : { - "Message" : { + "Title" : { "shape" : "__string", - "documentation" : "The error message returned from the API." + "documentation" : "The message title that displays above the message on the user's device." }, - "RequestID" : { + "Url" : { "shape" : "__string", - "documentation" : "The unique message body ID." + "documentation" : "The URL to open in the user's mobile browser. Used if the value for Action is URL." } }, - "documentation" : "Simple message object.", - "exception" : true, - "error" : { - "httpStatusCode" : 400 - } + "documentation" : "Baidu Message." }, "CampaignEmailMessage" : { "type" : "structure", @@ -1560,6 +2589,10 @@ "shape" : "__string", "documentation" : "The email text body." }, + "FromAddress" : { + "shape" : "__string", + "documentation" : "The email address used to send the email from. Defaults to use FromAddress specified in the Email Channel." + }, "HtmlBody" : { "shape" : "__string", "documentation" : "The email html body." @@ -1578,6 +2611,14 @@ "shape" : "__integer", "documentation" : "The maximum number of messages that the campaign can send daily." }, + "MaximumDuration" : { + "shape" : "__integer", + "documentation" : "The maximum duration of a campaign from the scheduled start. Must be a minimum of 60 seconds." + }, + "MessagesPerSecond" : { + "shape" : "__integer", + "documentation" : "The maximum number of messages per second that the campaign will send. This is a best effort maximum cap and can go as high as 20000 and as low as 50" + }, "Total" : { "shape" : "__integer", "documentation" : "The maximum total number of messages that the campaign can send." @@ -1715,7 +2756,37 @@ }, "ChannelType" : { "type" : "string", - "enum" : [ "GCM", "APNS", "APNS_SANDBOX", "ADM", "SMS", "EMAIL" ] + "enum" : [ "GCM", "APNS", "APNS_SANDBOX", "APNS_VOIP", "APNS_VOIP_SANDBOX", "ADM", "SMS", "EMAIL", "BAIDU"] + }, + "CreateAppRequest" : { + "type" : "structure", + "members" : { + "CreateApplicationRequest" : { + "shape" : "CreateApplicationRequest" + } + }, + "required" : [ "CreateApplicationRequest" ], + "payload" : "CreateApplicationRequest" + }, + "CreateAppResponse" : { + "type" : "structure", + "members" : { + "ApplicationResponse" : { + "shape" : "ApplicationResponse" + } + }, + "required" : [ "ApplicationResponse" ], + "payload" : "ApplicationResponse" + }, + "CreateApplicationRequest" : { + "type" : "structure", + "members" : { + "Name" : { + "shape" : "__string", + "documentation" : "The display name of the application. Used in the Amazon Pinpoint console." + } + }, + "documentation" : "Application Request." }, "CreateCampaignRequest" : { "type" : "structure", @@ -1840,6 +2911,27 @@ }, "documentation" : "Default Push Notification Message." 
}, + "DeleteAdmChannelRequest" : { + "type" : "structure", + "members" : { + "ApplicationId" : { + "shape" : "__string", + "location" : "uri", + "locationName" : "application-id" + } + }, + "required" : [ "ApplicationId" ] + }, + "DeleteAdmChannelResponse" : { + "type" : "structure", + "members" : { + "ADMChannelResponse" : { + "shape" : "ADMChannelResponse" + } + }, + "required" : [ "ADMChannelResponse" ], + "payload" : "ADMChannelResponse" + }, "DeleteApnsChannelRequest" : { "type" : "structure", "members" : { @@ -1882,6 +2974,90 @@ "required" : [ "APNSSandboxChannelResponse" ], "payload" : "APNSSandboxChannelResponse" }, + "DeleteApnsVoipChannelRequest" : { + "type" : "structure", + "members" : { + "ApplicationId" : { + "shape" : "__string", + "location" : "uri", + "locationName" : "application-id" + } + }, + "required" : [ "ApplicationId" ] + }, + "DeleteApnsVoipChannelResponse" : { + "type" : "structure", + "members" : { + "APNSVoipChannelResponse" : { + "shape" : "APNSVoipChannelResponse" + } + }, + "required" : [ "APNSVoipChannelResponse" ], + "payload" : "APNSVoipChannelResponse" + }, + "DeleteApnsVoipSandboxChannelRequest" : { + "type" : "structure", + "members" : { + "ApplicationId" : { + "shape" : "__string", + "location" : "uri", + "locationName" : "application-id" + } + }, + "required" : [ "ApplicationId" ] + }, + "DeleteApnsVoipSandboxChannelResponse" : { + "type" : "structure", + "members" : { + "APNSVoipSandboxChannelResponse" : { + "shape" : "APNSVoipSandboxChannelResponse" + } + }, + "required" : [ "APNSVoipSandboxChannelResponse" ], + "payload" : "APNSVoipSandboxChannelResponse" + }, + "DeleteAppRequest" : { + "type" : "structure", + "members" : { + "ApplicationId" : { + "shape" : "__string", + "location" : "uri", + "locationName" : "application-id" + } + }, + "required" : [ "ApplicationId" ] + }, + "DeleteAppResponse" : { + "type" : "structure", + "members" : { + "ApplicationResponse" : { + "shape" : "ApplicationResponse" + } + }, + "required" : [ "ApplicationResponse" ], + "payload" : "ApplicationResponse" + }, + "DeleteBaiduChannelRequest" : { + "type" : "structure", + "members" : { + "ApplicationId" : { + "shape" : "__string", + "location" : "uri", + "locationName" : "application-id" + } + }, + "required" : [ "ApplicationId" ] + }, + "DeleteBaiduChannelResponse" : { + "type" : "structure", + "members" : { + "BaiduChannelResponse" : { + "shape" : "BaiduChannelResponse" + } + }, + "required" : [ "BaiduChannelResponse" ], + "payload" : "BaiduChannelResponse" + }, "DeleteCampaignRequest" : { "type" : "structure", "members" : { @@ -1936,11 +3112,11 @@ "shape" : "__string", "location" : "uri", "locationName" : "application-id", - "documentation": "ApplicationId" + "documentation" : "ApplicationId" } }, "required" : [ "ApplicationId" ], - "documentation": "DeleteEventStream Request" + "documentation" : "DeleteEventStream Request" }, "DeleteEventStreamResponse" : { "type" : "structure", @@ -1950,8 +3126,7 @@ } }, "required" : [ "EventStream" ], - "payload" : "EventStream", - "documentation": "DeleteEventStream Response" + "payload" : "EventStream" }, "DeleteGcmChannelRequest" : { "type" : "structure", @@ -2023,7 +3198,7 @@ }, "DeliveryStatus" : { "type" : "string", - "enum" : [ "SUCCESSFUL", "THROTTLED", "TEMPORARY_FAILURE", "PERMANENT_FAILURE" ] + "enum" : [ "SUCCESSFUL", "THROTTLED", "TEMPORARY_FAILURE", "PERMANENT_FAILURE", "UNKNOWN_FAILURE", "OPT_OUT", "DUPLICATE" ] }, "DimensionType" : { "type" : "string", @@ -2032,10 +3207,18 @@ "DirectMessageConfiguration" : { "type" 
: "structure", "members" : { + "ADMMessage" : { + "shape" : "ADMMessage", + "documentation" : "The message to ADM channels. Overrides the default push notification message." + }, "APNSMessage" : { "shape" : "APNSMessage", "documentation" : "The message to APNS channels. Overrides the default push notification message." }, + "BaiduMessage" : { + "shape" : "BaiduMessage", + "documentation" : "The message to Baidu GCM channels. Overrides the default push notification message." + }, "DefaultMessage" : { "shape" : "DefaultMessage", "documentation" : "The default message for all channels." @@ -2086,7 +3269,7 @@ "members" : { "ApplicationId" : { "shape" : "__string", - "documentation" : "Application id" + "documentation" : "The unique ID of the application to which the email channel belongs." }, "CreationDate" : { "shape" : "__string", @@ -2100,6 +3283,10 @@ "shape" : "__string", "documentation" : "The email address used to send emails from." }, + "HasCredential" : { + "shape" : "__boolean", + "documentation" : "If the channel is registered with a credential for authentication." + }, "Id" : { "shape" : "__string", "documentation" : "Channel ID. Not used, only for backwards compatibility." @@ -2148,7 +3335,7 @@ }, "ChannelType" : { "shape" : "ChannelType", - "documentation" : "The channel type.\n\nValid values: APNS, GCM" + "documentation" : "The channel type.\n\nValid values: GCM | APNS | SMS | EMAIL" }, "Demographic" : { "shape" : "EndpointDemographic", @@ -2176,7 +3363,7 @@ }, "OptOut" : { "shape" : "__string", - "documentation" : "Indicates whether a user has opted out of receiving messages with one of the following values:\n\nALL – User receives all messages.\nNONE – User receives no messages." + "documentation" : "Indicates whether a user has opted out of receiving messages with one of the following values:\n\nALL - User has opted out of all messages.\n\nNONE - Users has not opted out and receives all messages." }, "RequestId" : { "shape" : "__string", @@ -2267,6 +3454,32 @@ }, "documentation" : "Endpoint location data" }, + "EndpointMessageResult" : { + "type" : "structure", + "members" : { + "Address" : { + "shape" : "__string", + "documentation" : "Address that endpoint message was delivered to." + }, + "DeliveryStatus" : { + "shape" : "DeliveryStatus", + "documentation" : "Delivery status of message." + }, + "StatusCode" : { + "shape" : "__integer", + "documentation" : "Downstream service status code." + }, + "StatusMessage" : { + "shape" : "__string", + "documentation" : "Status message for message delivery." + }, + "UpdatedToken" : { + "shape" : "__string", + "documentation" : "If token was updated as part of delivery. (This is GCM Specific)" + } + }, + "documentation" : "The result from sending a message to an endpoint." + }, "EndpointRequest" : { "type" : "structure", "members" : { @@ -2280,7 +3493,7 @@ }, "ChannelType" : { "shape" : "ChannelType", - "documentation" : "The channel type.\n\nValid values: APNS, GCM" + "documentation" : "The channel type.\n\nValid values: GCM | APNS | SMS | EMAIL" }, "Demographic" : { "shape" : "EndpointDemographic", @@ -2304,7 +3517,7 @@ }, "OptOut" : { "shape" : "__string", - "documentation" : "Indicates whether a user has opted out of receiving messages with one of the following values:\n\nALL – User receives all messages.\nNONE – User receives no messages." 
+ "documentation" : "Indicates whether a user has opted out of receiving messages with one of the following values:\n\nALL - User has opted out of all messages.\n\nNONE - Users has not opted out and receives all messages." }, "RequestId" : { "shape" : "__string", @@ -2334,7 +3547,7 @@ }, "ChannelType" : { "shape" : "ChannelType", - "documentation" : "The channel type.\n\nValid values: APNS, GCM" + "documentation" : "The channel type.\n\nValid values: GCM | APNS | SMS | EMAIL" }, "CohortId" : { "shape" : "__string", @@ -2370,7 +3583,7 @@ }, "OptOut" : { "shape" : "__string", - "documentation" : "Indicates whether a user has opted out of receiving messages with one of the following values:\n\nALL – User receives all messages.\nNONE – User receives no messages." + "documentation" : "Indicates whether a user has opted out of receiving messages with one of the following values:\n\nALL - User has opted out of all messages.\n\nNONE - Users has not opted out and receives all messages." }, "RequestId" : { "shape" : "__string", @@ -2379,13 +3592,35 @@ "User" : { "shape" : "EndpointUser", "documentation" : "Custom user-specific attributes that your app reports to Amazon Pinpoint." + } + }, + "documentation" : "Endpoint response" + }, + "EndpointSendConfiguration" : { + "type" : "structure", + "members" : { + "BodyOverride" : { + "shape" : "__string", + "documentation" : "Body override. If specified will override default body." + }, + "Context" : { + "shape" : "MapOf__string", + "documentation" : "A map of custom attributes to attributes to be attached to the message for this address. This payload is added to the push notification's 'data.pinpoint' object or added to the email/sms delivery receipt event attributes." + }, + "RawContent" : { + "shape" : "__string", + "documentation" : "The Raw JSON formatted string to be used as the payload. This value overrides the message." + }, + "Substitutions" : { + "shape" : "MapOfListOf__string", + "documentation" : "A map of substitution values for the message to be merged with the DefaultMessage's substitutions. Substitutions on this map take precedence over the all other substitutions." }, - "ShardId" : { + "TitleOverride" : { "shape" : "__string", - "documentation" : "The ShardId of endpoint" + "documentation" : "Title override. If specified will override default title if applicable." } }, - "documentation" : "Endpoint response" + "documentation" : "Endpoint send configuration." }, "EndpointUser" : { "type" : "structure", @@ -2490,9 +3725,13 @@ "shape" : "__boolean", "documentation" : "If the channel is enabled for sending messages." }, + "HasCredential" : { + "shape" : "__boolean", + "documentation" : "If the channel is registered with a credential for authentication." + }, "Id" : { "shape" : "__string", - "documentation" : "Channel ID. Not used, only for backwards compatibility." + "documentation" : "Channel ID. Not used. Present only for backwards compatibility." }, "IsArchived" : { "shape" : "__boolean", @@ -2548,6 +3787,10 @@ "shape" : "__string", "documentation" : "The URL that points to an image used in the push notification." }, + "Priority" : { + "shape" : "__string", + "documentation" : "Is this a transaction priority message or lower priority." + }, "RawContent" : { "shape" : "__string", "documentation" : "The Raw JSON formatted string to be used as the payload. This value overrides the message." @@ -2572,16 +3815,41 @@ "shape" : "MapOfListOf__string", "documentation" : "Default message substitutions. 
Can be overridden by individual address substitutions." }, + "TimeToLive" : { + "shape" : "__integer", + "documentation" : "This parameter specifies how long (in seconds) the message should be kept in GCM storage if the device is offline. The maximum time to live supported is 4 weeks, and the default value is 4 weeks." + }, "Title" : { "shape" : "__string", "documentation" : "The message title that displays above the message on the user's device." }, "Url" : { "shape" : "__string", - "documentation" : "The URL to open in the user's mobile browser. Used if the value for Action is URL." + "documentation" : "The URL to open in the user's mobile browser. Used if the value for Action is URL." + } + }, + "documentation" : "GCM Message." + }, + "GetAdmChannelRequest" : { + "type" : "structure", + "members" : { + "ApplicationId" : { + "shape" : "__string", + "location" : "uri", + "locationName" : "application-id" } }, - "documentation" : "GCM Message." + "required" : [ "ApplicationId" ] + }, + "GetAdmChannelResponse" : { + "type" : "structure", + "members" : { + "ADMChannelResponse" : { + "shape" : "ADMChannelResponse" + } + }, + "required" : [ "ADMChannelResponse" ], + "payload" : "ADMChannelResponse" }, "GetApnsChannelRequest" : { "type" : "structure", @@ -2625,6 +3893,69 @@ "required" : [ "APNSSandboxChannelResponse" ], "payload" : "APNSSandboxChannelResponse" }, + "GetApnsVoipChannelRequest" : { + "type" : "structure", + "members" : { + "ApplicationId" : { + "shape" : "__string", + "location" : "uri", + "locationName" : "application-id" + } + }, + "required" : [ "ApplicationId" ] + }, + "GetApnsVoipChannelResponse" : { + "type" : "structure", + "members" : { + "APNSVoipChannelResponse" : { + "shape" : "APNSVoipChannelResponse" + } + }, + "required" : [ "APNSVoipChannelResponse" ], + "payload" : "APNSVoipChannelResponse" + }, + "GetApnsVoipSandboxChannelRequest" : { + "type" : "structure", + "members" : { + "ApplicationId" : { + "shape" : "__string", + "location" : "uri", + "locationName" : "application-id" + } + }, + "required" : [ "ApplicationId" ] + }, + "GetApnsVoipSandboxChannelResponse" : { + "type" : "structure", + "members" : { + "APNSVoipSandboxChannelResponse" : { + "shape" : "APNSVoipSandboxChannelResponse" + } + }, + "required" : [ "APNSVoipSandboxChannelResponse" ], + "payload" : "APNSVoipSandboxChannelResponse" + }, + "GetAppRequest" : { + "type" : "structure", + "members" : { + "ApplicationId" : { + "shape" : "__string", + "location" : "uri", + "locationName" : "application-id" + } + }, + "required" : [ "ApplicationId" ] + }, + "GetAppResponse" : { + "type" : "structure", + "members" : { + "ApplicationResponse" : { + "shape" : "ApplicationResponse" + } + }, + "required" : [ "ApplicationResponse" ], + "payload" : "ApplicationResponse" + }, "GetApplicationSettingsRequest" : { "type" : "structure", "members" : { @@ -2646,6 +3977,52 @@ "required" : [ "ApplicationSettingsResource" ], "payload" : "ApplicationSettingsResource" }, + "GetAppsRequest" : { + "type" : "structure", + "members" : { + "PageSize" : { + "shape" : "__string", + "location" : "querystring", + "locationName" : "page-size" + }, + "Token" : { + "shape" : "__string", + "location" : "querystring", + "locationName" : "token" + } + } + }, + "GetAppsResponse" : { + "type" : "structure", + "members" : { + "ApplicationsResponse" : { + "shape" : "ApplicationsResponse" + } + }, + "required" : [ "ApplicationsResponse" ], + "payload" : "ApplicationsResponse" + }, + "GetBaiduChannelRequest" : { + "type" : "structure", + "members" 
: { + "ApplicationId" : { + "shape" : "__string", + "location" : "uri", + "locationName" : "application-id" + } + }, + "required" : [ "ApplicationId" ] + }, + "GetBaiduChannelResponse" : { + "type" : "structure", + "members" : { + "BaiduChannelResponse" : { + "shape" : "BaiduChannelResponse" + } + }, + "required" : [ "BaiduChannelResponse" ], + "payload" : "BaiduChannelResponse" + }, "GetCampaignActivitiesRequest" : { "type" : "structure", "members" : { @@ -2662,12 +4039,14 @@ "PageSize" : { "shape" : "__string", "location" : "querystring", - "locationName" : "page-size" + "locationName" : "page-size", + "documentation" : "The number of entries you want on each page in the response." }, "Token" : { "shape" : "__string", "location" : "querystring", - "locationName" : "token" + "locationName" : "token", + "documentation" : "The NextToken string returned on a previous page that you use to get the next page of results in a paginated response." } }, "required" : [ "ApplicationId", "CampaignId" ] @@ -2755,12 +4134,14 @@ "PageSize" : { "shape" : "__string", "location" : "querystring", - "locationName" : "page-size" + "locationName" : "page-size", + "documentation" : "The number of entries you want on each page in the response." }, "Token" : { "shape" : "__string", "location" : "querystring", - "locationName" : "token" + "locationName" : "token", + "documentation" : "The NextToken string returned on a previous page that you use to get the next page of results in a paginated response." } }, "required" : [ "ApplicationId", "CampaignId" ] @@ -2786,12 +4167,14 @@ "PageSize" : { "shape" : "__string", "location" : "querystring", - "locationName" : "page-size" + "locationName" : "page-size", + "documentation" : "The number of entries you want on each page in the response." }, "Token" : { "shape" : "__string", "location" : "querystring", - "locationName" : "token" + "locationName" : "token", + "documentation" : "The NextToken string returned on a previous page that you use to get the next page of results in a paginated response." } }, "required" : [ "ApplicationId" ] @@ -2860,11 +4243,11 @@ "shape" : "__string", "location" : "uri", "locationName" : "application-id", - "documentation": "ApplicationId" + "documentation" : "ApplicationId" } }, "required" : [ "ApplicationId" ], - "documentation": "GetEventStream Request" + "documentation" : "GetEventStreamRequest" }, "GetEventStreamResponse" : { "type" : "structure", @@ -2874,8 +4257,7 @@ } }, "required" : [ "EventStream" ], - "payload" : "EventStream", - "documentation": "GetEventStream Response" + "payload" : "EventStream" }, "GetGcmChannelRequest" : { "type" : "structure", @@ -2935,12 +4317,14 @@ "PageSize" : { "shape" : "__string", "location" : "querystring", - "locationName" : "page-size" + "locationName" : "page-size", + "documentation" : "The number of entries you want on each page in the response." }, "Token" : { "shape" : "__string", "location" : "querystring", - "locationName" : "token" + "locationName" : "token", + "documentation" : "The NextToken string returned on a previous page that you use to get the next page of results in a paginated response." } }, "required" : [ "ApplicationId" ] @@ -2966,7 +4350,8 @@ "PageSize" : { "shape" : "__string", "location" : "querystring", - "locationName" : "page-size" + "locationName" : "page-size", + "documentation" : "The number of entries you want on each page in the response." 
}, "SegmentId" : { "shape" : "__string", @@ -2976,7 +4361,8 @@ "Token" : { "shape" : "__string", "location" : "querystring", - "locationName" : "token" + "locationName" : "token", + "documentation" : "The NextToken string returned on a previous page that you use to get the next page of results in a paginated response." } }, "required" : [ "SegmentId", "ApplicationId" ] @@ -3059,7 +4445,8 @@ "PageSize" : { "shape" : "__string", "location" : "querystring", - "locationName" : "page-size" + "locationName" : "page-size", + "documentation" : "The number of entries you want on each page in the response." }, "SegmentId" : { "shape" : "__string", @@ -3069,7 +4456,8 @@ "Token" : { "shape" : "__string", "location" : "querystring", - "locationName" : "token" + "locationName" : "token", + "documentation" : "The NextToken string returned on a previous page that you use to get the next page of results in a paginated response." } }, "required" : [ "SegmentId", "ApplicationId" ] @@ -3095,12 +4483,14 @@ "PageSize" : { "shape" : "__string", "location" : "querystring", - "locationName" : "page-size" + "locationName" : "page-size", + "documentation" : "The number of entries you want on each page in the response." }, "Token" : { "shape" : "__string", "location" : "querystring", - "locationName" : "token" + "locationName" : "token", + "documentation" : "The NextToken string returned on a previous page that you use to get the next page of results in a paginated response." } }, "required" : [ "ApplicationId" ] @@ -3309,6 +4699,12 @@ "shape" : "ActivityResponse" } }, + "ListOfApplicationResponse" : { + "type" : "list", + "member" : { + "shape" : "ApplicationResponse" + } + }, "ListOfCampaignResponse" : { "type" : "list", "member" : { @@ -3369,6 +4765,24 @@ "shape" : "AttributeDimension" } }, + "MapOfEndpointMessageResult" : { + "type" : "map", + "key" : { + "shape" : "__string" + }, + "value" : { + "shape" : "EndpointMessageResult" + } + }, + "MapOfEndpointSendConfiguration" : { + "type" : "map", + "key" : { + "shape" : "__string" + }, + "value" : { + "shape" : "EndpointSendConfiguration" + } + }, "MapOfListOf__string" : { "type" : "map", "key" : { @@ -3378,6 +4792,15 @@ "shape" : "ListOf__string" } }, + "MapOfMapOfEndpointMessageResult" : { + "type" : "map", + "key" : { + "shape" : "__string" + }, + "value" : { + "shape" : "MapOfEndpointMessageResult" + } + }, "MapOfMessageResult" : { "type" : "map", "key" : { @@ -3445,6 +4868,10 @@ "shape" : "__string", "documentation" : "The URL that points to the media resource, for example a .mp4 or .gif file." }, + "RawContent" : { + "shape" : "__string", + "documentation" : "The Raw JSON formatted string to be used as the payload. This value overrides the message." + }, "SilentPush" : { "shape" : "__boolean", "documentation" : "Indicates if the message should display on the users device.\n\nSilent pushes can be used for Remote Configuration and Phone Home use cases. " @@ -3476,10 +4903,18 @@ "MessageConfiguration" : { "type" : "structure", "members" : { + "ADMMessage" : { + "shape" : "Message", + "documentation" : "The message that the campaign delivers to ADM channels. Overrides the default message." + }, "APNSMessage" : { "shape" : "Message", "documentation" : "The message that the campaign delivers to APNS channels. Overrides the default message." }, + "BaiduMessage" : { + "shape" : "Message", + "documentation" : "The message that the campaign delivers to Baidu channels. Overrides the default message." 
+ }, "DefaultMessage" : { "shape" : "Message", "documentation" : "The default message for all channels." @@ -3510,6 +4945,10 @@ "shape" : "MapOf__string", "documentation" : "A map of custom attributes to attributes to be attached to the message. This payload is added to the push notification's 'data.pinpoint' object or added to the email/sms delivery receipt event attributes." }, + "Endpoints" : { + "shape" : "MapOfEndpointSendConfiguration", + "documentation" : "A map of destination addresses, with the address as the key(Email address, phone number or push token) and the Address Configuration as the value." + }, "MessageConfiguration" : { "shape" : "DirectMessageConfiguration", "documentation" : "Message configuration." @@ -3524,6 +4963,10 @@ "shape" : "__string", "documentation" : "Application id of the message." }, + "EndpointResult" : { + "shape" : "MapOfEndpointMessageResult", + "documentation" : "A map containing a multi part response for each address, with the endpointId as the key and the result as the value." + }, "RequestId" : { "shape" : "__string", "documentation" : "Original request Id for which this message was delivered." @@ -3604,16 +5047,15 @@ "shape" : "__string", "location" : "uri", "locationName" : "application-id", - "documentation": "ApplicationId" + "documentation" : "ApplicationId" }, "WriteEventStream" : { "shape" : "WriteEventStream", - "documentation": "EventStream to write." + "documentation" : "EventStream to write." } }, "required" : [ "ApplicationId", "WriteEventStream" ], - "payload" : "WriteEventStream", - "documentation": "PutEventStream Request" + "payload" : "WriteEventStream" }, "PutEventStreamResponse" : { "type" : "structure", @@ -3623,8 +5065,7 @@ } }, "required" : [ "EventStream" ], - "payload" : "EventStream", - "documentation": "PutEventStream Response" + "payload" : "EventStream" }, "QuietTime" : { "type" : "structure", @@ -3668,6 +5109,10 @@ "SenderId" : { "shape" : "__string", "documentation" : "Sender identifier of your messages." + }, + "ShortCode" : { + "shape" : "__string", + "documentation" : "ShortCode registered with phone provider." } }, "documentation" : "SMS Channel Request" @@ -3677,7 +5122,7 @@ "members" : { "ApplicationId" : { "shape" : "__string", - "documentation" : "Application id" + "documentation" : "The unique ID of the application to which the SMS channel belongs." }, "CreationDate" : { "shape" : "__string", @@ -3687,6 +5132,10 @@ "shape" : "__boolean", "documentation" : "If the channel is enabled for sending messages." }, + "HasCredential" : { + "shape" : "__boolean", + "documentation" : "If the channel is registered with a credential for authentication." + }, "Id" : { "shape" : "__string", "documentation" : "Channel ID. Not used, only for backwards compatibility." @@ -3965,6 +5414,67 @@ "required" : [ "MessageResponse" ], "payload" : "MessageResponse" }, + "SendUsersMessageRequest" : { + "type" : "structure", + "members" : { + "Context" : { + "shape" : "MapOf__string", + "documentation" : "A map of custom attributes to attributes to be attached to the message. This payload is added to the push notification's 'data.pinpoint' object or added to the email/sms delivery receipt event attributes." + }, + "MessageConfiguration" : { + "shape" : "DirectMessageConfiguration", + "documentation" : "Message configuration." + }, + "Users" : { + "shape" : "MapOfEndpointSendConfiguration", + "documentation" : "A map of destination endpoints, with the EndpointId as the key Endpoint Message Configuration as the value." 
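The SendUsersMessageRequest defined here carries one DirectMessageConfiguration plus a Users map, and the response's Result is keyed by user ID and then endpoint ID. A minimal boto3 sketch of the corresponding SendUsersMessages call; the application ID, user ID, and message body are placeholders:

import boto3

pinpoint = boto3.client("pinpoint", region_name="us-east-1")

resp = pinpoint.send_users_messages(
    ApplicationId="11111111222222223333333344444444",  # hypothetical Pinpoint app ID
    SendUsersMessageRequest={
        "MessageConfiguration": {
            "DefaultMessage": {"Body": "Hello from Pinpoint"}  # placeholder text
        },
        # Keys are user IDs; each value is an optional EndpointSendConfiguration override.
        "Users": {"example-user-id": {}},
    },
)

# Walk the per-user, per-endpoint delivery results.
for user_id, endpoints in resp["SendUsersMessageResponse"]["Result"].items():
    for endpoint_id, result in endpoints.items():
        print(user_id, endpoint_id, result["DeliveryStatus"], result["StatusCode"])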
+ } + }, + "documentation" : "Send message request." + }, + "SendUsersMessageResponse" : { + "type" : "structure", + "members" : { + "ApplicationId" : { + "shape" : "__string", + "documentation" : "Application id of the message." + }, + "RequestId" : { + "shape" : "__string", + "documentation" : "Original request Id for which this message was delivered." + }, + "Result" : { + "shape" : "MapOfMapOfEndpointMessageResult", + "documentation" : "A map containing of UserId to Map of EndpointId to Endpoint Message Result." + } + }, + "documentation" : "User send message response." + }, + "SendUsersMessagesRequest" : { + "type" : "structure", + "members" : { + "ApplicationId" : { + "shape" : "__string", + "location" : "uri", + "locationName" : "application-id" + }, + "SendUsersMessageRequest" : { + "shape" : "SendUsersMessageRequest" + } + }, + "required" : [ "ApplicationId", "SendUsersMessageRequest" ], + "payload" : "SendUsersMessageRequest" + }, + "SendUsersMessagesResponse" : { + "type" : "structure", + "members" : { + "SendUsersMessageResponse" : { + "shape" : "SendUsersMessageResponse" + } + }, + "required" : [ "SendUsersMessageResponse" ], + "payload" : "SendUsersMessageResponse" + }, "SetDimension" : { "type" : "structure", "members" : { @@ -4031,6 +5541,31 @@ }, "documentation" : "Treatment resource" }, + "UpdateAdmChannelRequest" : { + "type" : "structure", + "members" : { + "ADMChannelRequest" : { + "shape" : "ADMChannelRequest" + }, + "ApplicationId" : { + "shape" : "__string", + "location" : "uri", + "locationName" : "application-id" + } + }, + "required" : [ "ApplicationId", "ADMChannelRequest" ], + "payload" : "ADMChannelRequest" + }, + "UpdateAdmChannelResponse" : { + "type" : "structure", + "members" : { + "ADMChannelResponse" : { + "shape" : "ADMChannelResponse" + } + }, + "required" : [ "ADMChannelResponse" ], + "payload" : "ADMChannelResponse" + }, "UpdateApnsChannelRequest" : { "type" : "structure", "members" : { @@ -4081,6 +5616,56 @@ "required" : [ "APNSSandboxChannelResponse" ], "payload" : "APNSSandboxChannelResponse" }, + "UpdateApnsVoipChannelRequest" : { + "type" : "structure", + "members" : { + "APNSVoipChannelRequest" : { + "shape" : "APNSVoipChannelRequest" + }, + "ApplicationId" : { + "shape" : "__string", + "location" : "uri", + "locationName" : "application-id" + } + }, + "required" : [ "ApplicationId", "APNSVoipChannelRequest" ], + "payload" : "APNSVoipChannelRequest" + }, + "UpdateApnsVoipChannelResponse" : { + "type" : "structure", + "members" : { + "APNSVoipChannelResponse" : { + "shape" : "APNSVoipChannelResponse" + } + }, + "required" : [ "APNSVoipChannelResponse" ], + "payload" : "APNSVoipChannelResponse" + }, + "UpdateApnsVoipSandboxChannelRequest" : { + "type" : "structure", + "members" : { + "APNSVoipSandboxChannelRequest" : { + "shape" : "APNSVoipSandboxChannelRequest" + }, + "ApplicationId" : { + "shape" : "__string", + "location" : "uri", + "locationName" : "application-id" + } + }, + "required" : [ "ApplicationId", "APNSVoipSandboxChannelRequest" ], + "payload" : "APNSVoipSandboxChannelRequest" + }, + "UpdateApnsVoipSandboxChannelResponse" : { + "type" : "structure", + "members" : { + "APNSVoipSandboxChannelResponse" : { + "shape" : "APNSVoipSandboxChannelResponse" + } + }, + "required" : [ "APNSVoipSandboxChannelResponse" ], + "payload" : "APNSVoipSandboxChannelResponse" + }, "UpdateApplicationSettingsRequest" : { "type" : "structure", "members" : { @@ -4106,6 +5691,31 @@ "required" : [ "ApplicationSettingsResource" ], "payload" : 
"ApplicationSettingsResource" }, + "UpdateBaiduChannelRequest" : { + "type" : "structure", + "members" : { + "ApplicationId" : { + "shape" : "__string", + "location" : "uri", + "locationName" : "application-id" + }, + "BaiduChannelRequest" : { + "shape" : "BaiduChannelRequest" + } + }, + "required" : [ "ApplicationId", "BaiduChannelRequest" ], + "payload" : "BaiduChannelRequest" + }, + "UpdateBaiduChannelResponse" : { + "type" : "structure", + "members" : { + "BaiduChannelResponse" : { + "shape" : "BaiduChannelResponse" + } + }, + "required" : [ "BaiduChannelResponse" ], + "payload" : "BaiduChannelResponse" + }, "UpdateCampaignRequest" : { "type" : "structure", "members" : { @@ -4371,10 +5981,6 @@ "shape" : "__string", "documentation" : "The Amazon Resource Name (ARN) of the Amazon Kinesis stream or Firehose delivery stream to which you want to publish events.\n Firehose ARN: arn:aws:firehose:REGION:ACCOUNT_ID:deliverystream/STREAM_NAME\n Kinesis ARN: arn:aws:kinesis:REGION:ACCOUNT_ID:stream/STREAM_NAME" }, - "ExternalId" : { - "shape" : "__string", - "documentation" : "The external ID assigned the IAM role that authorizes Amazon Pinpoint to publish to the stream." - }, "RoleArn" : { "shape" : "__string", "documentation" : "The IAM role that authorizes Amazon Pinpoint to publish events to the stream in your account." @@ -4438,4 +6044,4 @@ "type" : "timestamp" } } -} \ No newline at end of file +} diff --git a/services/polly/src/main/resources/codegen-resources/service-2.json b/services/polly/src/main/resources/codegen-resources/service-2.json index ae3279a25b53..dd534e34a8bd 100755 --- a/services/polly/src/main/resources/codegen-resources/service-2.json +++ b/services/polly/src/main/resources/codegen-resources/service-2.json @@ -5,6 +5,7 @@ "endpointPrefix":"polly", "protocol":"rest-json", "serviceFullName":"Amazon Polly", + "serviceId":"Polly", "signatureVersion":"v4", "uid":"polly-2016-06-10" }, @@ -250,6 +251,7 @@ "fr-FR", "is-IS", "it-IT", + "ko-KR", "ja-JP", "nb-NO", "nl-NL", @@ -620,6 +622,7 @@ "Justin", "Kendra", "Kimberly", + "Matthew", "Salli", "Conchita", "Enrique", @@ -649,7 +652,10 @@ "Tatyana", "Astrid", "Filiz", - "Vicki" + "Vicki", + "Takumi", + "Seoyeon", + "Aditi" ] }, "VoiceList":{ diff --git a/services/rds/src/main/resources/codegen-resources/service-2.json b/services/rds/src/main/resources/codegen-resources/service-2.json index ccb180e53313..da8055da3140 100644 --- a/services/rds/src/main/resources/codegen-resources/service-2.json +++ b/services/rds/src/main/resources/codegen-resources/service-2.json @@ -6,6 +6,7 @@ "protocol":"query", "serviceAbbreviation":"Amazon RDS", "serviceFullName":"Amazon Relational Database Service", + "serviceId":"RDS", "signatureVersion":"v4", "uid":"rds-2014-10-31", "xmlNamespace":"http://rds.amazonaws.com/doc/2014-10-31/" @@ -90,7 +91,7 @@ {"shape":"AuthorizationAlreadyExistsFault"}, {"shape":"AuthorizationQuotaExceededFault"} ], - "documentation":"

Enables ingress to a DBSecurityGroup using one of two forms of authorization. First, EC2 or VPC security groups can be added to the DBSecurityGroup if the application using the database is running on EC2 or VPC instances. Second, IP ranges are available if the application accessing your database is running on the Internet. Required parameters for this API are one of CIDR range, EC2SecurityGroupId for VPC, or (EC2SecurityGroupOwnerId and either EC2SecurityGroupName or EC2SecurityGroupId for non-VPC).

You cannot authorize ingress from an EC2 security group in one region to an Amazon RDS DB instance in another. You cannot authorize ingress from a VPC security group in one VPC to an Amazon RDS DB instance in another.

For an overview of CIDR ranges, go to the Wikipedia Tutorial.

" + "documentation":"

Enables ingress to a DBSecurityGroup using one of two forms of authorization. First, EC2 or VPC security groups can be added to the DBSecurityGroup if the application using the database is running on EC2 or VPC instances. Second, IP ranges are available if the application accessing your database is running on the Internet. Required parameters for this API are one of CIDR range, EC2SecurityGroupId for VPC, or (EC2SecurityGroupOwnerId and either EC2SecurityGroupName or EC2SecurityGroupId for non-VPC).

You can't authorize ingress from an EC2 security group in one AWS Region to an Amazon RDS DB instance in another. You can't authorize ingress from a VPC security group in one VPC to an Amazon RDS DB instance in another.

For an overview of CIDR ranges, go to the Wikipedia Tutorial.
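For example, the CIDR form of the call looks like the following boto3 sketch (boto3 shares this service model); the security group name and CIDR range are placeholders, and the EC2 security group form would instead pass EC2SecurityGroupId or EC2SecurityGroupName plus EC2SecurityGroupOwnerId:

import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Authorize an IP range to reach instances in this DB security group.
rds.authorize_db_security_group_ingress(
    DBSecurityGroupName="mydbsecuritygroup",  # hypothetical group name
    CIDRIP="203.0.113.0/24",                  # documentation CIDR range
)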

" }, "CopyDBClusterParameterGroup":{ "name":"CopyDBClusterParameterGroup", @@ -129,7 +130,7 @@ {"shape":"SnapshotQuotaExceededFault"}, {"shape":"KMSKeyNotAccessibleFault"} ], - "documentation":"

Copies a snapshot of a DB cluster.

To copy a DB cluster snapshot from a shared manual DB cluster snapshot, SourceDBClusterSnapshotIdentifier must be the Amazon Resource Name (ARN) of the shared DB cluster snapshot.

You can copy an encrypted DB cluster snapshot from another AWS region. In that case, the region where you call the CopyDBClusterSnapshot action is the destination region for the encrypted DB cluster snapshot to be copied to. To copy an encrypted DB cluster snapshot from another region, you must provide the following values:

To cancel the copy operation once it is in progress, delete the target DB cluster snapshot identified by TargetDBClusterSnapshotIdentifier while that DB cluster snapshot is in \"copying\" status.

For more information on copying encrypted DB cluster snapshots from one region to another, see Copying a DB Cluster Snapshot in the Same Account, Either in the Same Region or Across Regions in the Amazon RDS User Guide.

For more information on Amazon Aurora, see Aurora on Amazon RDS in the Amazon RDS User Guide.

" + "documentation":"

Copies a snapshot of a DB cluster.

To copy a DB cluster snapshot from a shared manual DB cluster snapshot, SourceDBClusterSnapshotIdentifier must be the Amazon Resource Name (ARN) of the shared DB cluster snapshot.

You can copy an encrypted DB cluster snapshot from another AWS Region. In that case, the AWS Region where you call the CopyDBClusterSnapshot action is the destination AWS Region for the encrypted DB cluster snapshot to be copied to. To copy an encrypted DB cluster snapshot from another AWS Region, you must provide the following values:

To cancel the copy operation once it is in progress, delete the target DB cluster snapshot identified by TargetDBClusterSnapshotIdentifier while that DB cluster snapshot is in \"copying\" status.

For more information on copying encrypted DB cluster snapshots from one AWS Region to another, see Copying a DB Cluster Snapshot in the Same Account, Either in the Same Region or Across Regions in the Amazon RDS User Guide.

For more information on Amazon Aurora, see Aurora on Amazon RDS in the Amazon RDS User Guide.
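A minimal boto3 sketch of the cross-Region encrypted copy described above, issued from the destination Region; the snapshot identifiers and KMS key are placeholders, and SourceRegion is boto3's convenience parameter for generating the pre-signed URL for the source Region (a PreSignedUrl can be passed explicitly instead):

import boto3

# Run CopyDBClusterSnapshot in the destination Region.
rds = boto3.client("rds", region_name="us-east-1")

rds.copy_db_cluster_snapshot(
    # ARN of the encrypted snapshot in the source Region (hypothetical).
    SourceDBClusterSnapshotIdentifier="arn:aws:rds:us-west-2:123456789012:cluster-snapshot:aurora-nightly",
    TargetDBClusterSnapshotIdentifier="aurora-nightly-copy",
    # KMS key in the destination Region used to re-encrypt the copy (hypothetical).
    KmsKeyId="arn:aws:kms:us-east-1:123456789012:key/1234abcd-12ab-34cd-56ef-1234567890ab",
    SourceRegion="us-west-2",  # boto3 builds the required pre-signed URL from this
)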

" }, "CopyDBParameterGroup":{ "name":"CopyDBParameterGroup", @@ -167,7 +168,7 @@ {"shape":"SnapshotQuotaExceededFault"}, {"shape":"KMSKeyNotAccessibleFault"} ], - "documentation":"

Copies the specified DB snapshot. The source DB snapshot must be in the \"available\" state.

You can copy a snapshot from one AWS region to another. In that case, the region where you call the CopyDBSnapshot action is the destination region for the DB snapshot copy.

You cannot copy an encrypted, shared DB snapshot from one AWS region to another.

For more information about copying snapshots, see Copying a DB Snapshot in the Amazon RDS User Guide.

" + "documentation":"

Copies the specified DB snapshot. The source DB snapshot must be in the \"available\" state.

You can copy a snapshot from one AWS Region to another. In that case, the AWS Region where you call the CopyDBSnapshot action is the destination AWS Region for the DB snapshot copy.

You can't copy an encrypted, shared DB snapshot from one AWS Region to another.

For more information about copying snapshots, see Copying a DB Snapshot in the Amazon RDS User Guide.

" }, "CopyOptionGroup":{ "name":"CopyOptionGroup", @@ -318,7 +319,7 @@ {"shape":"StorageTypeNotSupportedFault"}, {"shape":"KMSKeyNotAccessibleFault"} ], - "documentation":"

Creates a DB instance for a DB instance running MySQL, MariaDB, or PostgreSQL that acts as a Read Replica of a source DB instance.

Amazon Aurora does not support this action. You must call the CreateDBInstance action to create a DB instance for an Aurora DB cluster.

All Read Replica DB instances are created as Single-AZ deployments with backups disabled. All other DB instance attributes (including DB security groups and DB parameter groups) are inherited from the source DB instance, except as specified below.

The source DB instance must have backup retention enabled.

You can create an encrypted Read Replica in a different AWS Region than the source DB instance. In that case, the region where you call the CreateDBInstanceReadReplica action is the destination region of the encrypted Read Replica. The source DB instance must be encrypted.

To create an encrypted Read Replica in another AWS Region, you must provide the following values:

" + "documentation":"

Creates a new DB instance that acts as a Read Replica for an existing source DB instance. You can create a Read Replica for a DB instance running MySQL, MariaDB, or PostgreSQL.

Amazon Aurora does not support this action. You must call the CreateDBInstance action to create a DB instance for an Aurora DB cluster.

All Read Replica DB instances are created as Single-AZ deployments with backups disabled. All other DB instance attributes (including DB security groups and DB parameter groups) are inherited from the source DB instance, except as specified below.

The source DB instance must have backup retention enabled.

For more information, see Working with PostgreSQL, MySQL, and MariaDB Read Replicas.
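A minimal boto3 sketch of creating a Read Replica from an existing MySQL, MariaDB, or PostgreSQL instance; the instance identifiers are placeholders:

import boto3

rds = boto3.client("rds", region_name="us-east-1")

# The source instance must have backups enabled (BackupRetentionPeriod > 0).
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="mydb-replica-1",  # identifier for the new replica
    SourceDBInstanceIdentifier="mydb",      # existing source DB instance
    DBInstanceClass="db.t2.small",          # optional; otherwise inherited from the source
)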

" }, "CreateDBParameterGroup":{ "name":"CreateDBParameterGroup", @@ -392,7 +393,7 @@ {"shape":"DBSubnetGroupDoesNotCoverEnoughAZs"}, {"shape":"InvalidSubnet"} ], - "documentation":"

Creates a new DB subnet group. DB subnet groups must contain at least one subnet in at least two AZs in the region.

" + "documentation":"

Creates a new DB subnet group. DB subnet groups must contain at least one subnet in at least two AZs in the AWS Region.
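A minimal boto3 sketch; the subnet IDs are placeholders and must span at least two Availability Zones, as required above:

import boto3

rds = boto3.client("rds", region_name="us-east-1")

rds.create_db_subnet_group(
    DBSubnetGroupName="my-db-subnet-group",
    DBSubnetGroupDescription="Subnets for the example database",
    SubnetIds=["subnet-0a1b2c3d", "subnet-4e5f6a7b"],  # hypothetical subnets in two AZs
)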

" }, "CreateEventSubscription":{ "name":"CreateEventSubscription", @@ -414,7 +415,7 @@ {"shape":"SubscriptionCategoryNotFoundFault"}, {"shape":"SourceNotFoundFault"} ], - "documentation":"

Creates an RDS event notification subscription. This action requires a topic ARN (Amazon Resource Name) created by either the RDS console, the SNS console, or the SNS API. To obtain an ARN with SNS, you must create a topic in Amazon SNS and subscribe to the topic. The ARN is displayed in the SNS console.

You can specify the type of source (SourceType) you want to be notified of, provide a list of RDS sources (SourceIds) that triggers the events, and provide a list of event categories (EventCategories) for events you want to be notified of. For example, you can specify SourceType = db-instance, SourceIds = mydbinstance1, mydbinstance2 and EventCategories = Availability, Backup.

If you specify both the SourceType and SourceIds, such as SourceType = db-instance and SourceIdentifier = myDBInstance1, you will be notified of all the db-instance events for the specified source. If you specify a SourceType but do not specify a SourceIdentifier, you will receive notice of the events for that source type for all your RDS sources. If you do not specify either the SourceType nor the SourceIdentifier, you will be notified of events generated from all RDS sources belonging to your customer account.

" + "documentation":"

Creates an RDS event notification subscription. This action requires a topic ARN (Amazon Resource Name) created by either the RDS console, the SNS console, or the SNS API. To obtain an ARN with SNS, you must create a topic in Amazon SNS and subscribe to the topic. The ARN is displayed in the SNS console.

You can specify the type of source (SourceType) you want to be notified of, provide a list of RDS sources (SourceIds) that trigger the events, and provide a list of event categories (EventCategories) for events you want to be notified of. For example, you can specify SourceType = db-instance, SourceIds = mydbinstance1, mydbinstance2 and EventCategories = Availability, Backup.

If you specify both the SourceType and SourceIds, such as SourceType = db-instance and SourceIdentifier = myDBInstance1, you are notified of all the db-instance events for the specified source. If you specify a SourceType but do not specify a SourceIdentifier, you receive notice of the events for that source type for all your RDS sources. If you do not specify either the SourceType or the SourceIdentifier, you are notified of events generated from all RDS sources belonging to your customer account.
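A minimal boto3 sketch matching the SourceType = db-instance example above; the subscription name, SNS topic ARN, and instance identifiers are placeholders:

import boto3

rds = boto3.client("rds", region_name="us-east-1")

rds.create_event_subscription(
    SubscriptionName="my-instance-events",
    SnsTopicArn="arn:aws:sns:us-east-1:123456789012:rds-events",  # topic created in SNS
    SourceType="db-instance",
    SourceIds=["mydbinstance1", "mydbinstance2"],
    EventCategories=["availability", "backup"],  # names as reported by DescribeEventCategories
    Enabled=True,
)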

" }, "CreateOptionGroup":{ "name":"CreateOptionGroup", @@ -451,7 +452,7 @@ {"shape":"SnapshotQuotaExceededFault"}, {"shape":"InvalidDBClusterSnapshotStateFault"} ], - "documentation":"

The DeleteDBCluster action deletes a previously provisioned DB cluster. When you delete a DB cluster, all automated backups for that DB cluster are deleted and cannot be recovered. Manual DB cluster snapshots of the specified DB cluster are not deleted.

For more information on Amazon Aurora, see Aurora on Amazon RDS in the Amazon RDS User Guide.

" + "documentation":"

The DeleteDBCluster action deletes a previously provisioned DB cluster. When you delete a DB cluster, all automated backups for that DB cluster are deleted and can't be recovered. Manual DB cluster snapshots of the specified DB cluster are not deleted.

For more information on Amazon Aurora, see Aurora on Amazon RDS in the Amazon RDS User Guide.

" }, "DeleteDBClusterParameterGroup":{ "name":"DeleteDBClusterParameterGroup", @@ -464,7 +465,7 @@ {"shape":"InvalidDBParameterGroupStateFault"}, {"shape":"DBParameterGroupNotFoundFault"} ], - "documentation":"

Deletes a specified DB cluster parameter group. The DB cluster parameter group to be deleted cannot be associated with any DB clusters.

For more information on Amazon Aurora, see Aurora on Amazon RDS in the Amazon RDS User Guide.

" + "documentation":"

Deletes a specified DB cluster parameter group. The DB cluster parameter group to be deleted can't be associated with any DB clusters.

For more information on Amazon Aurora, see Aurora on Amazon RDS in the Amazon RDS User Guide.

" }, "DeleteDBClusterSnapshot":{ "name":"DeleteDBClusterSnapshot", @@ -501,7 +502,7 @@ {"shape":"SnapshotQuotaExceededFault"}, {"shape":"InvalidDBClusterStateFault"} ], - "documentation":"

The DeleteDBInstance action deletes a previously provisioned DB instance. When you delete a DB instance, all automated backups for that instance are deleted and cannot be recovered. Manual DB snapshots of the DB instance to be deleted by DeleteDBInstance are not deleted.

If you request a final DB snapshot the status of the Amazon RDS DB instance is deleting until the DB snapshot is created. The API action DescribeDBInstance is used to monitor the status of this operation. The action cannot be canceled or reverted once submitted.

Note that when a DB instance is in a failure state and has a status of failed, incompatible-restore, or incompatible-network, you can only delete it when the SkipFinalSnapshot parameter is set to true.

If the specified DB instance is part of an Amazon Aurora DB cluster, you cannot delete the DB instance if the following are true:

To delete a DB instance in this case, first call the PromoteReadReplicaDBCluster API action to promote the DB cluster so it's no longer a Read Replica. After the promotion completes, then call the DeleteDBInstance API action to delete the final instance in the DB cluster.

" + "documentation":"

The DeleteDBInstance action deletes a previously provisioned DB instance. When you delete a DB instance, all automated backups for that instance are deleted and can't be recovered. Manual DB snapshots of the DB instance to be deleted by DeleteDBInstance are not deleted.

If you request a final DB snapshot, the status of the Amazon RDS DB instance is deleting until the DB snapshot is created. The DescribeDBInstances API action is used to monitor the status of this operation. The action can't be canceled or reverted once submitted.

Note that when a DB instance is in a failure state and has a status of failed, incompatible-restore, or incompatible-network, you can only delete it when the SkipFinalSnapshot parameter is set to true.

If the specified DB instance is part of an Amazon Aurora DB cluster, you can't delete the DB instance if both of the following conditions are true:

To delete a DB instance in this case, first call the PromoteReadReplicaDBCluster API action to promote the DB cluster so it's no longer a Read Replica. After the promotion completes, call the DeleteDBInstance API action to delete the final instance in the DB cluster.
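
A minimal boto3 sketch of the final-snapshot behavior described above, with hypothetical identifiers: either request a final snapshot or explicitly skip it.

import boto3

rds = boto3.client("rds")

# Delete and keep a final snapshot (the instance status is "deleting"
# until the snapshot completes).
rds.delete_db_instance(
    DBInstanceIdentifier="mydbinstance",
    FinalDBSnapshotIdentifier="mydbinstance-final-snapshot",
    SkipFinalSnapshot=False,
)

# For an instance in a failed or incompatible state, skipping the final
# snapshot is required:
# rds.delete_db_instance(DBInstanceIdentifier="mydbinstance", SkipFinalSnapshot=True)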

" }, "DeleteDBParameterGroup":{ "name":"DeleteDBParameterGroup", @@ -514,7 +515,7 @@ {"shape":"InvalidDBParameterGroupStateFault"}, {"shape":"DBParameterGroupNotFoundFault"} ], - "documentation":"

Deletes a specified DBParameterGroup. The DBParameterGroup to be deleted cannot be associated with any DB instances.

" + "documentation":"

Deletes a specified DBParameterGroup. The DBParameterGroup to be deleted can't be associated with any DB instances.

" }, "DeleteDBSecurityGroup":{ "name":"DeleteDBSecurityGroup", @@ -1009,7 +1010,24 @@ "shape":"SourceRegionMessage", "resultWrapper":"DescribeSourceRegionsResult" }, - "documentation":"

Returns a list of the source AWS regions where the current AWS region can create a Read Replica or copy a DB snapshot from. This API action supports pagination.

" + "documentation":"

Returns a list of the source AWS Regions where the current AWS Region can create a Read Replica or copy a DB snapshot from. This API action supports pagination.

" + }, + "DescribeValidDBInstanceModifications":{ + "name":"DescribeValidDBInstanceModifications", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"DescribeValidDBInstanceModificationsMessage"}, + "output":{ + "shape":"DescribeValidDBInstanceModificationsResult", + "resultWrapper":"DescribeValidDBInstanceModificationsResult" + }, + "errors":[ + {"shape":"DBInstanceNotFoundFault"}, + {"shape":"InvalidDBInstanceStateFault"} + ], + "documentation":"

You can call DescribeValidDBInstanceModifications to learn what modifications you can make to your DB instance. You can use this information when you call ModifyDBInstance.

" }, "DownloadDBLogFilePortion":{ "name":"DownloadDBLogFilePortion", @@ -1154,7 +1172,7 @@ {"shape":"CertificateNotFoundFault"}, {"shape":"DomainNotFoundFault"} ], - "documentation":"

Modifies settings for a DB instance. You can change one or more database configuration parameters by specifying these parameters and the new values in the request.

" + "documentation":"

Modifies settings for a DB instance. You can change one or more database configuration parameters by specifying these parameters and the new values in the request. To learn what modifications you can make to your DB instance, call DescribeValidDBInstanceModifications before you call ModifyDBInstance.
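
For example (a non-authoritative boto3 sketch with placeholder values), you might check the valid modifications first and then apply one:

import boto3

rds = boto3.client("rds")

# Ask RDS what this instance can be changed to before modifying it.
valid = rds.describe_valid_db_instance_modifications(
    DBInstanceIdentifier="mydbinstance"
)
print(valid["ValidDBInstanceModificationsMessage"]["Storage"])

# Apply a supported change; ApplyImmediately=False defers it to the
# next maintenance window.
rds.modify_db_instance(
    DBInstanceIdentifier="mydbinstance",
    AllocatedStorage=200,
    ApplyImmediately=False,
)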

" }, "ModifyDBParameterGroup":{ "name":"ModifyDBParameterGroup", @@ -1187,7 +1205,7 @@ "errors":[ {"shape":"DBSnapshotNotFoundFault"} ], - "documentation":"

Updates a manual DB snapshot, which can be encrypted or not encrypted, with a new engine version. You can update the engine version to either a new major or minor engine version.

Amazon RDS supports upgrading a MySQL DB snapshot from MySQL 5.1 to MySQL 5.5.

" + "documentation":"

Updates a manual DB snapshot, which can be encrypted or not encrypted, with a new engine version.

Amazon RDS supports upgrading DB snapshots for MySQL and Oracle.
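
A hedged boto3 sketch of upgrading a manual snapshot's engine version; the identifier and target version are placeholders.

import boto3

rds = boto3.client("rds")

# Upgrade a manual DB snapshot to a newer engine version before restoring from it.
rds.modify_db_snapshot(
    DBSnapshotIdentifier="mydb-snapshot",
    EngineVersion="5.6.40",  # placeholder target version
)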

" }, "ModifyDBSnapshotAttribute":{ "name":"ModifyDBSnapshotAttribute", @@ -1225,7 +1243,7 @@ {"shape":"DBSubnetGroupDoesNotCoverEnoughAZs"}, {"shape":"InvalidSubnet"} ], - "documentation":"

Modifies an existing DB subnet group. DB subnet groups must contain at least one subnet in at least two AZs in the region.

" + "documentation":"

Modifies an existing DB subnet group. DB subnet groups must contain at least one subnet in at least two AZs in the AWS Region.

" }, "ModifyEventSubscription":{ "name":"ModifyEventSubscription", @@ -1246,7 +1264,7 @@ {"shape":"SNSTopicArnNotFoundFault"}, {"shape":"SubscriptionCategoryNotFoundFault"} ], - "documentation":"

Modifies an existing RDS event notification subscription. Note that you cannot modify the source identifiers using this call; to change source identifiers for a subscription, use the AddSourceIdentifierToSubscription and RemoveSourceIdentifierFromSubscription calls.

You can see a list of the event categories for a given SourceType in the Events topic in the Amazon RDS User Guide or by using the DescribeEventCategories action.

" + "documentation":"

Modifies an existing RDS event notification subscription. Note that you can't modify the source identifiers using this call; to change source identifiers for a subscription, use the AddSourceIdentifierToSubscription and RemoveSourceIdentifierFromSubscription calls.

You can see a list of the event categories for a given SourceType in the Events topic in the Amazon RDS User Guide or by using the DescribeEventCategories action.

" }, "ModifyOptionGroup":{ "name":"ModifyOptionGroup", @@ -1332,7 +1350,7 @@ {"shape":"InvalidDBInstanceStateFault"}, {"shape":"DBInstanceNotFoundFault"} ], - "documentation":"

Rebooting a DB instance restarts the database engine service. A reboot also applies to the DB instance any modifications to the associated DB parameter group that were pending. Rebooting a DB instance results in a momentary outage of the instance, during which the DB instance status is set to rebooting. If the RDS instance is configured for MultiAZ, it is possible that the reboot will be conducted through a failover. An Amazon RDS event is created when the reboot is completed.

If your DB instance is deployed in multiple Availability Zones, you can force a failover from one AZ to the other during the reboot. You might force a failover to test the availability of your DB instance deployment or to restore operations to the original AZ after a failover occurs.

The time required to reboot is a function of the specific database engine's crash recovery process. To improve the reboot time, we recommend that you reduce database activities as much as possible during the reboot process to reduce rollback activity for in-transit transactions.

" + "documentation":"

Rebooting a DB instance restarts the database engine service. A reboot also applies to the DB instance any pending modifications to the associated DB parameter group. Rebooting a DB instance results in a momentary outage of the instance, during which the DB instance status is set to rebooting. If the RDS instance is configured for MultiAZ, it is possible that the reboot is conducted through a failover. An Amazon RDS event is created when the reboot is completed.

If your DB instance is deployed in multiple Availability Zones, you can force a failover from one AZ to the other during the reboot. You might force a failover to test the availability of your DB instance deployment or to restore operations to the original AZ after a failover occurs.

The time required to reboot is a function of the specific database engine's crash recovery process. To improve the reboot time, we recommend that you reduce database activities as much as possible during the reboot process to reduce rollback activity for in-transit transactions.
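
As an illustration (boto3, hypothetical identifier), a reboot with ForceFailover=True exercises the Multi-AZ failover path described above:

import boto3

rds = boto3.client("rds")

# ForceFailover=True is only valid for Multi-AZ deployments and brings the
# instance up in the other Availability Zone.
rds.reboot_db_instance(
    DBInstanceIdentifier="mydbinstance",
    ForceFailover=True,
)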

" }, "RemoveRoleFromDBCluster":{ "name":"RemoveRoleFromDBCluster", @@ -1411,7 +1429,7 @@ {"shape":"InvalidDBParameterGroupStateFault"}, {"shape":"DBParameterGroupNotFoundFault"} ], - "documentation":"

Modifies the parameters of a DB parameter group to the engine/system default value. To reset specific parameters, provide a list of the following: ParameterName and ApplyMethod. To reset the entire DB parameter group, specify the DBParameterGroup name and ResetAllParameters parameters. When resetting the entire group, dynamic parameters are updated immediately and static parameters are set to pending-reboot to take effect on the next DB instance restart or RebootDBInstance request.

" + "documentation":"

Modifies the parameters of a DB parameter group to the engine/system default value. To reset specific parameters, provide a list of the following: ParameterName and ApplyMethod. To reset the entire DB parameter group, specify the DBParameterGroup name and ResetAllParameters parameters. When resetting the entire group, dynamic parameters are updated immediately and static parameters are set to pending-reboot to take effect on the next DB instance restart or RebootDBInstance request.

" }, "RestoreDBClusterFromS3":{ "name":"RestoreDBClusterFromS3", @@ -1471,7 +1489,7 @@ {"shape":"OptionGroupNotFoundFault"}, {"shape":"KMSKeyNotAccessibleFault"} ], - "documentation":"

Creates a new DB cluster from a DB cluster snapshot. The target DB cluster is created from the source DB cluster restore point with the same configuration as the original source DB cluster, except that the new DB cluster is created with the default security group.

For more information on Amazon Aurora, see Aurora on Amazon RDS in the Amazon RDS User Guide.

" + "documentation":"

Creates a new DB cluster from a DB snapshot or DB cluster snapshot.

If a DB snapshot is specified, the target DB cluster is created from the source DB snapshot with a default configuration and default security group.

If a DB cluster snapshot is specified, the target DB cluster is created from the source DB cluster restore point with the same configuration as the original source DB cluster, except that the new DB cluster is created with the default security group.

For more information on Amazon Aurora, see Aurora on Amazon RDS in the Amazon RDS User Guide.

" }, "RestoreDBClusterToPointInTime":{ "name":"RestoreDBClusterToPointInTime", @@ -1502,7 +1520,7 @@ {"shape":"OptionGroupNotFoundFault"}, {"shape":"StorageQuotaExceededFault"} ], - "documentation":"

Restores a DB cluster to an arbitrary point in time. Users can restore to any point in time before LatestRestorableTime for up to BackupRetentionPeriod days. The target DB cluster is created from the source DB cluster with the same configuration as the original DB cluster, except that the new DB cluster is created with the default DB security group.

For more information on Amazon Aurora, see Aurora on Amazon RDS in the Amazon RDS User Guide.

" + "documentation":"

Restores a DB cluster to an arbitrary point in time. Users can restore to any point in time before LatestRestorableTime for up to BackupRetentionPeriod days. The target DB cluster is created from the source DB cluster with the same configuration as the original DB cluster, except that the new DB cluster is created with the default DB security group.

This action only restores the DB cluster, not the DB instances for that DB cluster. You must invoke the CreateDBInstance action to create DB instances for the restored DB cluster, specifying the identifier of the restored DB cluster in DBClusterIdentifier. You can create DB instances only after the RestoreDBClusterToPointInTime action has completed and the DB cluster is available.

For more information on Amazon Aurora, see Aurora on Amazon RDS in the Amazon RDS User Guide.
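
The two-step flow (restore the cluster, then create its instances) might look like the following boto3 sketch; the cluster and instance identifiers are hypothetical, and in practice you would wait for the cluster to become available between the two calls.

import boto3

rds = boto3.client("rds")

# Step 1: restore only the DB cluster.
rds.restore_db_cluster_to_point_in_time(
    DBClusterIdentifier="restored-cluster",
    SourceDBClusterIdentifier="source-cluster",
    UseLatestRestorableTime=True,
)

# Step 2: once the cluster is available, add a DB instance to it.
rds.create_db_instance(
    DBInstanceIdentifier="restored-cluster-instance-1",
    DBInstanceClass="db.r3.large",
    Engine="aurora",
    DBClusterIdentifier="restored-cluster",
)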

" }, "RestoreDBInstanceFromDBSnapshot":{ "name":"RestoreDBInstanceFromDBSnapshot", @@ -1537,6 +1555,37 @@ ], "documentation":"

Creates a new DB instance from a DB snapshot. The target database is created from the source database restore point with most of the original configuration, but with the default security group and the default DB parameter group. By default, the new DB instance is created as a single-AZ deployment except when the instance is a SQL Server instance that has an option group that is associated with mirroring; in this case, the instance becomes a mirrored AZ deployment and not a single-AZ deployment.

If your intent is to replace your original DB instance with the new, restored DB instance, then rename your original DB instance before you call the RestoreDBInstanceFromDBSnapshot action. RDS does not allow two DB instances with the same name. Once you have renamed your original DB instance with a different identifier, then you can pass the original name of the DB instance as the DBInstanceIdentifier in the call to the RestoreDBInstanceFromDBSnapshot action. The result is that you will replace the original DB instance with the DB instance created from the snapshot.

If you are restoring from a shared manual DB snapshot, the DBSnapshotIdentifier must be the ARN of the shared DB snapshot.

" }, + "RestoreDBInstanceFromS3":{ + "name":"RestoreDBInstanceFromS3", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"RestoreDBInstanceFromS3Message"}, + "output":{ + "shape":"RestoreDBInstanceFromS3Result", + "resultWrapper":"RestoreDBInstanceFromS3Result" + }, + "errors":[ + {"shape":"DBInstanceAlreadyExistsFault"}, + {"shape":"InsufficientDBInstanceCapacityFault"}, + {"shape":"DBParameterGroupNotFoundFault"}, + {"shape":"DBSecurityGroupNotFoundFault"}, + {"shape":"InstanceQuotaExceededFault"}, + {"shape":"StorageQuotaExceededFault"}, + {"shape":"DBSubnetGroupNotFoundFault"}, + {"shape":"DBSubnetGroupDoesNotCoverEnoughAZs"}, + {"shape":"InvalidSubnet"}, + {"shape":"InvalidVPCNetworkStateFault"}, + {"shape":"InvalidS3BucketFault"}, + {"shape":"ProvisionedIopsNotAvailableInAZFault"}, + {"shape":"OptionGroupNotFoundFault"}, + {"shape":"StorageTypeNotSupportedFault"}, + {"shape":"AuthorizationNotFoundFault"}, + {"shape":"KMSKeyNotAccessibleFault"} + ], + "documentation":"

Amazon Relational Database Service (Amazon RDS) supports importing MySQL databases by using backup files. You can create a backup of your on-premises database, store it on Amazon Simple Storage Service (Amazon S3), and then restore the backup file onto a new Amazon RDS DB instance running MySQL. For more information, see Importing Data into an Amazon RDS MySQL DB Instance.
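
A rough boto3 sketch of the S3 import flow; the bucket, role, versions, and credentials are placeholders, and the required parameters may vary by engine.

import boto3

rds = boto3.client("rds")

# Create a new MySQL DB instance from backup files staged in S3.
rds.restore_db_instance_from_s3(
    DBInstanceIdentifier="imported-mysql",
    DBInstanceClass="db.m4.large",
    Engine="mysql",
    AllocatedStorage=100,
    MasterUsername="admin",
    MasterUserPassword="example-password",   # placeholder
    SourceEngine="mysql",
    SourceEngineVersion="5.6.27",            # version of the backed-up database
    S3BucketName="my-backup-bucket",
    S3IngestionRoleArn="arn:aws:iam::123456789012:role/rds-s3-import",
)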

" + }, "RestoreDBInstanceToPointInTime":{ "name":"RestoreDBInstanceToPointInTime", "http":{ @@ -1703,7 +1752,7 @@ }, "SourceIdentifier":{ "shape":"String", - "documentation":"

The identifier of the event source to be added. An identifier must begin with a letter and must contain only ASCII letters, digits, and hyphens; it cannot end with a hyphen or contain two consecutive hyphens.

Constraints:

" + "documentation":"

The identifier of the event source to be added.

Constraints:

" } }, "documentation":"

" @@ -1723,7 +1772,7 @@ "members":{ "ResourceName":{ "shape":"String", - "documentation":"

The Amazon RDS resource the tags will be added to. This value is an Amazon Resource Name (ARN). For information about creating an ARN, see Constructing an RDS Amazon Resource Name (ARN).

" + "documentation":"

The Amazon RDS resource that the tags are added to. This value is an Amazon Resource Name (ARN). For information about creating an ARN, see Constructing an RDS Amazon Resource Name (ARN).

" }, "Tags":{ "shape":"TagList", @@ -1757,7 +1806,7 @@ }, "OptInType":{ "shape":"String", - "documentation":"

A value that specifies the type of opt-in request, or undoes an opt-in request. An opt-in request of type immediate cannot be undone.

Valid values:

" + "documentation":"

A value that specifies the type of opt-in request, or undoes an opt-in request. An opt-in request of type immediate can't be undone.

Valid values:

" } }, "documentation":"

" @@ -1959,11 +2008,11 @@ "members":{ "SourceDBClusterParameterGroupIdentifier":{ "shape":"String", - "documentation":"

The identifier or Amazon Resource Name (ARN) for the source DB cluster parameter group. For information about creating an ARN, see Constructing an RDS Amazon Resource Name (ARN).

Constraints:

" + "documentation":"

The identifier or Amazon Resource Name (ARN) for the source DB cluster parameter group. For information about creating an ARN, see Constructing an RDS Amazon Resource Name (ARN).

Constraints:

" }, "TargetDBClusterParameterGroupIdentifier":{ "shape":"String", - "documentation":"

The identifier for the copied DB cluster parameter group.

Constraints:

Example: my-cluster-param-group1

" + "documentation":"

The identifier for the copied DB cluster parameter group.

Constraints:

Example: my-cluster-param-group1

" }, "TargetDBClusterParameterGroupDescription":{ "shape":"String", @@ -1987,23 +2036,23 @@ "members":{ "SourceDBClusterSnapshotIdentifier":{ "shape":"String", - "documentation":"

The identifier of the DB cluster snapshot to copy. This parameter is not case-sensitive.

You cannot copy an encrypted, shared DB cluster snapshot from one AWS region to another.

Constraints:

Example: my-cluster-snapshot1

" + "documentation":"

The identifier of the DB cluster snapshot to copy. This parameter is not case-sensitive.

You can't copy an encrypted, shared DB cluster snapshot from one AWS Region to another.

Constraints:

Example: my-cluster-snapshot1

" }, "TargetDBClusterSnapshotIdentifier":{ "shape":"String", - "documentation":"

The identifier of the new DB cluster snapshot to create from the source DB cluster snapshot. This parameter is not case-sensitive.

Constraints:

Example: my-cluster-snapshot2

" + "documentation":"

The identifier of the new DB cluster snapshot to create from the source DB cluster snapshot. This parameter is not case-sensitive.

Constraints:

Example: my-cluster-snapshot2

" }, "KmsKeyId":{ "shape":"String", - "documentation":"

The AWS KMS key ID for an encrypted DB cluster snapshot. The KMS key ID is the Amazon Resource Name (ARN), KMS key identifier, or the KMS key alias for the KMS encryption key.

If you copy an unencrypted DB cluster snapshot and specify a value for the KmsKeyId parameter, Amazon RDS encrypts the target DB cluster snapshot using the specified KMS encryption key.

If you copy an encrypted DB cluster snapshot from your AWS account, you can specify a value for KmsKeyId to encrypt the copy with a new KMS encryption key. If you don't specify a value for KmsKeyId, then the copy of the DB cluster snapshot is encrypted with the same KMS key as the source DB cluster snapshot.

If you copy an encrypted DB cluster snapshot that is shared from another AWS account, then you must specify a value for KmsKeyId.

To copy an encrypted DB cluster snapshot to another region, you must set KmsKeyId to the KMS key ID you want to use to encrypt the copy of the DB cluster snapshot in the destination region. KMS encryption keys are specific to the region that they are created in, and you cannot use encryption keys from one region in another region.

" + "documentation":"

The AWS KMS key ID for an encrypted DB cluster snapshot. The KMS key ID is the Amazon Resource Name (ARN), KMS key identifier, or the KMS key alias for the KMS encryption key.

If you copy an unencrypted DB cluster snapshot and specify a value for the KmsKeyId parameter, Amazon RDS encrypts the target DB cluster snapshot using the specified KMS encryption key.

If you copy an encrypted DB cluster snapshot from your AWS account, you can specify a value for KmsKeyId to encrypt the copy with a new KMS encryption key. If you don't specify a value for KmsKeyId, then the copy of the DB cluster snapshot is encrypted with the same KMS key as the source DB cluster snapshot.

If you copy an encrypted DB cluster snapshot that is shared from another AWS account, then you must specify a value for KmsKeyId.

To copy an encrypted DB cluster snapshot to another AWS Region, you must set KmsKeyId to the KMS key ID you want to use to encrypt the copy of the DB cluster snapshot in the destination AWS Region. KMS encryption keys are specific to the AWS Region that they are created in, and you can't use encryption keys from one AWS Region in another AWS Region.

" }, "PreSignedUrl":{ "shape":"String", - "documentation":"

The URL that contains a Signature Version 4 signed request for the CopyDBClusterSnapshot API action in the AWS region that contains the source DB cluster snapshot to copy. The PreSignedUrl parameter must be used when copying an encrypted DB cluster snapshot from another AWS region.

The pre-signed URL must be a valid request for the CopyDBSClusterSnapshot API action that can be executed in the source region that contains the encrypted DB cluster snapshot to be copied. The pre-signed URL request must contain the following parameter values:

To learn how to generate a Signature Version 4 signed request, see Authenticating Requests: Using Query Parameters (AWS Signature Version 4) and Signature Version 4 Signing Process.

" + "documentation":"

The URL that contains a Signature Version 4 signed request for the CopyDBClusterSnapshot API action in the AWS Region that contains the source DB cluster snapshot to copy. The PreSignedUrl parameter must be used when copying an encrypted DB cluster snapshot from another AWS Region.

The pre-signed URL must be a valid request for the CopyDBClusterSnapshot API action that can be executed in the source AWS Region that contains the encrypted DB cluster snapshot to be copied. The pre-signed URL request must contain the following parameter values:

To learn how to generate a Signature Version 4 signed request, see Authenticating Requests: Using Query Parameters (AWS Signature Version 4) and Signature Version 4 Signing Process.
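
When you use an SDK such as boto3, the pre-signed URL can typically be generated for you by passing SourceRegion; a hedged sketch with placeholder identifiers:

import boto3

# Call the API in the destination Region; boto3 builds the PreSignedUrl for the
# source Region when SourceRegion is supplied.
rds = boto3.client("rds", region_name="us-east-1")

rds.copy_db_cluster_snapshot(
    SourceDBClusterSnapshotIdentifier=(
        "arn:aws:rds:us-west-2:123456789012:cluster-snapshot:my-cluster-snapshot1"
    ),
    TargetDBClusterSnapshotIdentifier="my-cluster-snapshot1-copy",
    KmsKeyId="arn:aws:kms:us-east-1:123456789012:key/EXAMPLE-KEY-ID",
    SourceRegion="us-west-2",
)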

" }, "CopyTags":{ "shape":"BooleanOptional", - "documentation":"

True to copy all tags from the source DB cluster snapshot to the target DB cluster snapshot; otherwise false. The default is false.

" + "documentation":"

True to copy all tags from the source DB cluster snapshot to the target DB cluster snapshot, and otherwise false. The default is false.

" }, "Tags":{"shape":"TagList"} }, @@ -2025,11 +2074,11 @@ "members":{ "SourceDBParameterGroupIdentifier":{ "shape":"String", - "documentation":"

The identifier or ARN for the source DB parameter group. For information about creating an ARN, see Constructing an RDS Amazon Resource Name (ARN).

Constraints:

" + "documentation":"

The identifier or ARN for the source DB parameter group. For information about creating an ARN, see Constructing an RDS Amazon Resource Name (ARN).

Constraints:

" }, "TargetDBParameterGroupIdentifier":{ "shape":"String", - "documentation":"

The identifier for the copied DB parameter group.

Constraints:

Example: my-db-parameter-group

" + "documentation":"

The identifier for the copied DB parameter group.

Constraints:

Example: my-db-parameter-group

" }, "TargetDBParameterGroupDescription":{ "shape":"String", @@ -2054,28 +2103,28 @@ "members":{ "SourceDBSnapshotIdentifier":{ "shape":"String", - "documentation":"

The identifier for the source DB snapshot.

If the source snapshot is in the same region as the copy, specify a valid DB snapshot identifier. For example, rds:mysql-instance1-snapshot-20130805.

If the source snapshot is in a different region than the copy, specify a valid DB snapshot ARN. For example, arn:aws:rds:us-west-2:123456789012:snapshot:mysql-instance1-snapshot-20130805.

If you are copying from a shared manual DB snapshot, this parameter must be the Amazon Resource Name (ARN) of the shared DB snapshot.

If you are copying an encrypted snapshot this parameter must be in the ARN format for the source region, and must match the SourceDBSnapshotIdentifier in the PreSignedUrl parameter.

Constraints:

Example: rds:mydb-2012-04-02-00-01

Example: arn:aws:rds:us-west-2:123456789012:snapshot:mysql-instance1-snapshot-20130805

" + "documentation":"

The identifier for the source DB snapshot.

If the source snapshot is in the same AWS Region as the copy, specify a valid DB snapshot identifier. For example, you might specify rds:mysql-instance1-snapshot-20130805.

If the source snapshot is in a different AWS Region than the copy, specify a valid DB snapshot ARN. For example, you might specify arn:aws:rds:us-west-2:123456789012:snapshot:mysql-instance1-snapshot-20130805.

If you are copying from a shared manual DB snapshot, this parameter must be the Amazon Resource Name (ARN) of the shared DB snapshot.

If you are copying an encrypted snapshot, this parameter must be in the ARN format for the source AWS Region, and must match the SourceDBSnapshotIdentifier in the PreSignedUrl parameter.

Constraints:

Example: rds:mydb-2012-04-02-00-01

Example: arn:aws:rds:us-west-2:123456789012:snapshot:mysql-instance1-snapshot-20130805

" }, "TargetDBSnapshotIdentifier":{ "shape":"String", - "documentation":"

The identifier for the copy of the snapshot.

Constraints:

Example: my-db-snapshot

" + "documentation":"

The identifier for the copy of the snapshot.

Constraints:

Example: my-db-snapshot

" }, "KmsKeyId":{ "shape":"String", - "documentation":"

The AWS KMS key ID for an encrypted DB snapshot. The KMS key ID is the Amazon Resource Name (ARN), KMS key identifier, or the KMS key alias for the KMS encryption key.

If you copy an encrypted DB snapshot from your AWS account, you can specify a value for this parameter to encrypt the copy with a new KMS encryption key. If you don't specify a value for this parameter, then the copy of the DB snapshot is encrypted with the same KMS key as the source DB snapshot.

If you copy an encrypted DB snapshot that is shared from another AWS account, then you must specify a value for this parameter.

If you specify this parameter when you copy an unencrypted snapshot, the copy is encrypted.

If you copy an encrypted snapshot to a different AWS region, then you must specify a KMS key for the destination AWS region. KMS encryption keys are specific to the region that they are created in, and you cannot use encryption keys from one region in another region.

" + "documentation":"

The AWS KMS key ID for an encrypted DB snapshot. The KMS key ID is the Amazon Resource Name (ARN), KMS key identifier, or the KMS key alias for the KMS encryption key.

If you copy an encrypted DB snapshot from your AWS account, you can specify a value for this parameter to encrypt the copy with a new KMS encryption key. If you don't specify a value for this parameter, then the copy of the DB snapshot is encrypted with the same KMS key as the source DB snapshot.

If you copy an encrypted DB snapshot that is shared from another AWS account, then you must specify a value for this parameter.

If you specify this parameter when you copy an unencrypted snapshot, the copy is encrypted.

If you copy an encrypted snapshot to a different AWS Region, then you must specify a KMS key for the destination AWS Region. KMS encryption keys are specific to the AWS Region that they are created in, and you can't use encryption keys from one AWS Region in another AWS Region.

" }, "Tags":{"shape":"TagList"}, "CopyTags":{ "shape":"BooleanOptional", - "documentation":"

True to copy all tags from the source DB snapshot to the target DB snapshot; otherwise false. The default is false.

" + "documentation":"

True to copy all tags from the source DB snapshot to the target DB snapshot, and otherwise false. The default is false.

" }, "PreSignedUrl":{ "shape":"String", - "documentation":"

The URL that contains a Signature Version 4 signed request for the CopyDBSnapshot API action in the source AWS region that contains the source DB snapshot to copy.

You must specify this parameter when you copy an encrypted DB snapshot from another AWS region by using the Amazon RDS API. You can specify the source region option instead of this parameter when you copy an encrypted DB snapshot from another AWS region by using the AWS CLI.

The presigned URL must be a valid request for the CopyDBSnapshot API action that can be executed in the source region that contains the encrypted DB snapshot to be copied. The presigned URL request must contain the following parameter values:

To learn how to generate a Signature Version 4 signed request, see Authenticating Requests: Using Query Parameters (AWS Signature Version 4) and Signature Version 4 Signing Process.

" + "documentation":"

The URL that contains a Signature Version 4 signed request for the CopyDBSnapshot API action in the source AWS Region that contains the source DB snapshot to copy.

You must specify this parameter when you copy an encrypted DB snapshot from another AWS Region by using the Amazon RDS API. You can specify the --source-region option instead of this parameter when you copy an encrypted DB snapshot from another AWS Region by using the AWS CLI.

The presigned URL must be a valid request for the CopyDBSnapshot API action that can be executed in the source AWS Region that contains the encrypted DB snapshot to be copied. The presigned URL request must contain the following parameter values:

To learn how to generate a Signature Version 4 signed request, see Authenticating Requests: Using Query Parameters (AWS Signature Version 4) and Signature Version 4 Signing Process.
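
The same cross-Region pattern applies to DB snapshots; in this hedged boto3 sketch the SDK supplies the presigned URL when SourceRegion is set, and all identifiers are placeholders.

import boto3

rds = boto3.client("rds", region_name="us-east-1")  # destination Region

rds.copy_db_snapshot(
    SourceDBSnapshotIdentifier=(
        "arn:aws:rds:us-west-2:123456789012:snapshot:mysql-instance1-snapshot-20130805"
    ),
    TargetDBSnapshotIdentifier="mysql-instance1-snapshot-copy",
    KmsKeyId="arn:aws:kms:us-east-1:123456789012:key/EXAMPLE-KEY-ID",
    OptionGroupName="my-tde-option-group",  # needed when the source uses TDE
    SourceRegion="us-west-2",
)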

" }, "OptionGroupName":{ "shape":"String", - "documentation":"

The name of an option group to associate with the copy.

Specify this option if you are copying a snapshot from one AWS region to another, and your DB instance uses a non-default option group. If your source DB instance uses Transparent Data Encryption for Oracle or Microsoft SQL Server, you must specify this option when copying across regions. For more information, see Option Group Considerations.

" + "documentation":"

The name of an option group to associate with the copy of the snapshot.

Specify this option if you are copying a snapshot from one AWS Region to another, and your DB instance uses a nondefault option group. If your source DB instance uses Transparent Data Encryption for Oracle or Microsoft SQL Server, you must specify this option when copying across AWS Regions. For more information, see Option Group Considerations.

" } }, "documentation":"

" @@ -2096,11 +2145,11 @@ "members":{ "SourceOptionGroupIdentifier":{ "shape":"String", - "documentation":"

The identifier or ARN for the source option group. For information about creating an ARN, see Constructing an RDS Amazon Resource Name (ARN).

Constraints:

" + "documentation":"

The identifier or ARN for the source option group. For information about creating an ARN, see Constructing an RDS Amazon Resource Name (ARN).

Constraints:

" }, "TargetOptionGroupIdentifier":{ "shape":"String", - "documentation":"

The identifier for the copied option group.

Constraints:

Example: my-option-group

" + "documentation":"

The identifier for the copied option group.

Constraints:

Example: my-option-group

" }, "TargetOptionGroupDescription":{ "shape":"String", @@ -2125,7 +2174,7 @@ "members":{ "AvailabilityZones":{ "shape":"AvailabilityZones", - "documentation":"

A list of EC2 Availability Zones that instances in the DB cluster can be created in. For information on regions and Availability Zones, see Regions and Availability Zones.

" + "documentation":"

A list of EC2 Availability Zones that instances in the DB cluster can be created in. For information on AWS Regions and Availability Zones, see Regions and Availability Zones.

" }, "BackupRetentionPeriod":{ "shape":"IntegerOptional", @@ -2141,11 +2190,11 @@ }, "DBClusterIdentifier":{ "shape":"String", - "documentation":"

The DB cluster identifier. This parameter is stored as a lowercase string.

Constraints:

Example: my-cluster1

" + "documentation":"

The DB cluster identifier. This parameter is stored as a lowercase string.

Constraints:

Example: my-cluster1

" }, "DBClusterParameterGroupName":{ "shape":"String", - "documentation":"

The name of the DB cluster parameter group to associate with this DB cluster. If this argument is omitted, default.aurora5.6 will be used.

Constraints:

" + "documentation":"

The name of the DB cluster parameter group to associate with this DB cluster. If this argument is omitted, default.aurora5.6 is used.

Constraints:

" }, "VpcSecurityGroupIds":{ "shape":"VpcSecurityGroupIdList", @@ -2153,11 +2202,11 @@ }, "DBSubnetGroupName":{ "shape":"String", - "documentation":"

A DB subnet group to associate with this DB cluster.

Constraints: Must contain no more than 255 alphanumeric characters, periods, underscores, spaces, or hyphens. Must not be default.

Example: mySubnetgroup

" + "documentation":"

A DB subnet group to associate with this DB cluster.

Constraints: Must match the name of an existing DBSubnetGroup. Must not be default.

Example: mySubnetgroup

" }, "Engine":{ "shape":"String", - "documentation":"

The name of the database engine to be used for this DB cluster.

Valid Values: aurora

" + "documentation":"

The name of the database engine to be used for this DB cluster.

Valid Values: aurora, aurora-postgresql

" }, "EngineVersion":{ "shape":"String", @@ -2169,7 +2218,7 @@ }, "MasterUsername":{ "shape":"String", - "documentation":"

The name of the master user for the DB cluster.

Constraints:

" + "documentation":"

The name of the master user for the DB cluster.

Constraints:

" }, "MasterUserPassword":{ "shape":"String", @@ -2177,15 +2226,15 @@ }, "OptionGroupName":{ "shape":"String", - "documentation":"

A value that indicates that the DB cluster should be associated with the specified option group.

Permanent options cannot be removed from an option group. The option group cannot be removed from a DB cluster once it is associated with a DB cluster.

" + "documentation":"

A value that indicates that the DB cluster should be associated with the specified option group.

Permanent options can't be removed from an option group. The option group can't be removed from a DB cluster once it is associated with a DB cluster.

" }, "PreferredBackupWindow":{ "shape":"String", - "documentation":"

The daily time range during which automated backups are created if automated backups are enabled using the BackupRetentionPeriod parameter.

Default: A 30-minute window selected at random from an 8-hour block of time per region. To see the time blocks available, see Adjusting the Preferred Maintenance Window in the Amazon RDS User Guide.

Constraints:

" + "documentation":"

The daily time range during which automated backups are created if automated backups are enabled using the BackupRetentionPeriod parameter.

The default is a 30-minute window selected at random from an 8-hour block of time for each AWS Region. To see the time blocks available, see Adjusting the Preferred Maintenance Window in the Amazon RDS User Guide.

Constraints:

" }, "PreferredMaintenanceWindow":{ "shape":"String", - "documentation":"

The weekly time range during which system maintenance can occur, in Universal Coordinated Time (UTC).

Format: ddd:hh24:mi-ddd:hh24:mi

Default: A 30-minute window selected at random from an 8-hour block of time per region, occurring on a random day of the week. To see the time blocks available, see Adjusting the Preferred Maintenance Window in the Amazon RDS User Guide.

Valid Days: Mon, Tue, Wed, Thu, Fri, Sat, Sun

Constraints: Minimum 30-minute window.

" + "documentation":"

The weekly time range during which system maintenance can occur, in Universal Coordinated Time (UTC).

Format: ddd:hh24:mi-ddd:hh24:mi

The default is a 30-minute window selected at random from an 8-hour block of time for each AWS Region, occurring on a random day of the week. To see the time blocks available, see Adjusting the Preferred Maintenance Window in the Amazon RDS User Guide.

Valid Days: Mon, Tue, Wed, Thu, Fri, Sat, Sun.

Constraints: Minimum 30-minute window.

" }, "ReplicationSourceIdentifier":{ "shape":"String", @@ -2198,15 +2247,15 @@ }, "KmsKeyId":{ "shape":"String", - "documentation":"

The KMS key identifier for an encrypted DB cluster.

The KMS key identifier is the Amazon Resource Name (ARN) for the KMS encryption key. If you are creating a DB cluster with the same AWS account that owns the KMS encryption key used to encrypt the new DB cluster, then you can use the KMS key alias instead of the ARN for the KMS encryption key.

If the StorageEncrypted parameter is true, and you do not specify a value for the KmsKeyId parameter, then Amazon RDS will use your default encryption key. AWS KMS creates the default encryption key for your AWS account. Your AWS account has a different default encryption key for each AWS region.

If you create a Read Replica of an encrypted DB cluster in another region, you must set KmsKeyId to a KMS key ID that is valid in the destination region. This key is used to encrypt the Read Replica in that region.

" + "documentation":"

The AWS KMS key identifier for an encrypted DB cluster.

The KMS key identifier is the Amazon Resource Name (ARN) for the KMS encryption key. If you are creating a DB cluster with the same AWS account that owns the KMS encryption key used to encrypt the new DB cluster, then you can use the KMS key alias instead of the ARN for the KMS encryption key.

If an encryption key is not specified in KmsKeyId:

AWS KMS creates the default encryption key for your AWS account. Your AWS account has a different default encryption key for each AWS Region.

If you create a Read Replica of an encrypted DB cluster in another AWS Region, you must set KmsKeyId to a KMS key ID that is valid in the destination AWS Region. This key is used to encrypt the Read Replica in that AWS Region.

" }, "PreSignedUrl":{ "shape":"String", - "documentation":"

A URL that contains a Signature Version 4 signed request for the CreateDBCluster action to be called in the source region where the DB cluster will be replicated from. You only need to specify PreSignedUrl when you are performing cross-region replication from an encrypted DB cluster.

The pre-signed URL must be a valid request for the CreateDBCluster API action that can be executed in the source region that contains the encrypted DB cluster to be copied.

The pre-signed URL request must contain the following parameter values:

To learn how to generate a Signature Version 4 signed request, see Authenticating Requests: Using Query Parameters (AWS Signature Version 4) and Signature Version 4 Signing Process.

" + "documentation":"

A URL that contains a Signature Version 4 signed request for the CreateDBCluster action to be called in the source AWS Region where the DB cluster is replicated from. You only need to specify PreSignedUrl when you are performing cross-region replication from an encrypted DB cluster.

The pre-signed URL must be a valid request for the CreateDBCluster API action that can be executed in the source AWS Region that contains the encrypted DB cluster to be copied.

The pre-signed URL request must contain the following parameter values:

To learn how to generate a Signature Version 4 signed request, see Authenticating Requests: Using Query Parameters (AWS Signature Version 4) and Signature Version 4 Signing Process.

" }, "EnableIAMDatabaseAuthentication":{ "shape":"BooleanOptional", - "documentation":"

A Boolean value that is true to enable mapping of AWS Identity and Access Management (IAM) accounts to database accounts, and otherwise false.

Default: false

" + "documentation":"

True to enable mapping of AWS Identity and Access Management (IAM) accounts to database accounts, and otherwise false.

Default: false
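
Pulling several of the preceding parameters together, a hypothetical boto3 call to create an Aurora cluster could look like the following; all names, passwords, and windows are placeholders.

import boto3

rds = boto3.client("rds")

rds.create_db_cluster(
    DBClusterIdentifier="my-cluster1",
    Engine="aurora",
    MasterUsername="admin",
    MasterUserPassword="example-password",             # placeholder
    DBSubnetGroupName="mySubnetgroup",
    VpcSecurityGroupIds=["sg-0123456789abcdef0"],
    BackupRetentionPeriod=7,
    PreferredBackupWindow="07:00-07:30",               # hh24:mi-hh24:mi, UTC
    PreferredMaintenanceWindow="sun:05:00-sun:05:30",  # ddd:hh24:mi-ddd:hh24:mi
    StorageEncrypted=True,                             # default KMS key unless KmsKeyId is set
    EnableIAMDatabaseAuthentication=True,
)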

" } }, "documentation":"

" @@ -2221,7 +2270,7 @@ "members":{ "DBClusterParameterGroupName":{ "shape":"String", - "documentation":"

The name of the DB cluster parameter group.

Constraints:

This value is stored as a lowercase string.

" + "documentation":"

The name of the DB cluster parameter group.

Constraints:

This value is stored as a lowercase string.

" }, "DBParameterGroupFamily":{ "shape":"String", @@ -2256,11 +2305,11 @@ "members":{ "DBClusterSnapshotIdentifier":{ "shape":"String", - "documentation":"

The identifier of the DB cluster snapshot. This parameter is stored as a lowercase string.

Constraints:

Example: my-cluster1-snapshot1

" + "documentation":"

The identifier of the DB cluster snapshot. This parameter is stored as a lowercase string.

Constraints:

Example: my-cluster1-snapshot1

" }, "DBClusterIdentifier":{ "shape":"String", - "documentation":"

The identifier of the DB cluster to create a snapshot for. This parameter is not case-sensitive.

Constraints:

Example: my-cluster1

" + "documentation":"

The identifier of the DB cluster to create a snapshot for. This parameter is not case-sensitive.

Constraints:

Example: my-cluster1

" }, "Tags":{ "shape":"TagList", @@ -2285,31 +2334,31 @@ "members":{ "DBName":{ "shape":"String", - "documentation":"

The meaning of this parameter differs according to the database engine you use.

Type: String

MySQL

The name of the database to create when the DB instance is created. If this parameter is not specified, no database is created in the DB instance.

Constraints:

MariaDB

The name of the database to create when the DB instance is created. If this parameter is not specified, no database is created in the DB instance.

Constraints:

PostgreSQL

The name of the database to create when the DB instance is created. If this parameter is not specified, the default \"postgres\" database is created in the DB instance.

Constraints:

Oracle

The Oracle System ID (SID) of the created DB instance. If you specify null, the default value ORCL is used. You can't specify the string NULL, or any other reserved word, for DBName.

Default: ORCL

Constraints:

SQL Server

Not applicable. Must be null.

Amazon Aurora

The name of the database to create when the primary instance of the DB cluster is created. If this parameter is not specified, no database is created in the DB instance.

Constraints:

" + "documentation":"

The meaning of this parameter differs according to the database engine you use.

Type: String

MySQL

The name of the database to create when the DB instance is created. If this parameter is not specified, no database is created in the DB instance.

Constraints:

MariaDB

The name of the database to create when the DB instance is created. If this parameter is not specified, no database is created in the DB instance.

Constraints:

PostgreSQL

The name of the database to create when the DB instance is created. If this parameter is not specified, the default \"postgres\" database is created in the DB instance.

Constraints:

Oracle

The Oracle System ID (SID) of the created DB instance. If you specify null, the default value ORCL is used. You can't specify the string NULL, or any other reserved word, for DBName.

Default: ORCL

Constraints:

SQL Server

Not applicable. Must be null.

Amazon Aurora

The name of the database to create when the primary instance of the DB cluster is created. If this parameter is not specified, no database is created in the DB instance.

Constraints:

" }, "DBInstanceIdentifier":{ "shape":"String", - "documentation":"

The DB instance identifier. This parameter is stored as a lowercase string.

Constraints:

Example: mydbinstance

" + "documentation":"

The DB instance identifier. This parameter is stored as a lowercase string.

Constraints:

Example: mydbinstance

" }, "AllocatedStorage":{ "shape":"IntegerOptional", - "documentation":"

The amount of storage (in gigabytes) to be initially allocated for the database instance.

Type: Integer

Amazon Aurora

Not applicable. Aurora cluster volumes automatically grow as the amount of data in your database increases, though you are only charged for the space that you use in an Aurora cluster volume.

MySQL

Constraints: Must be an integer from 5 to 6144.

MariaDB

Constraints: Must be an integer from 5 to 6144.

PostgreSQL

Constraints: Must be an integer from 5 to 6144.

Oracle

Constraints: Must be an integer from 10 to 6144.

SQL Server

Constraints: Must be an integer from 200 to 4096 (Standard Edition and Enterprise Edition) or from 20 to 4096 (Express Edition and Web Edition)

" + "documentation":"

The amount of storage (in gigabytes) to be initially allocated for the DB instance.

Type: Integer

Amazon Aurora

Not applicable. Aurora cluster volumes automatically grow as the amount of data in your database increases, though you are only charged for the space that you use in an Aurora cluster volume.

MySQL

Constraints to the amount of storage for each storage type are the following:

MariaDB

Constraints to the amount of storage for each storage type are the following:

PostgreSQL

Constraints to the amount of storage for each storage type are the following:

Oracle

Constraints to the amount of storage for each storage type are the following:

SQL Server

Constraints to the amount of storage for each storage type are the following:

" }, "DBInstanceClass":{ "shape":"String", - "documentation":"

The compute and memory capacity of the DB instance. Note that not all instance classes are available in all regions for all DB engines.

Valid Values: db.t1.micro | db.m1.small | db.m1.medium | db.m1.large | db.m1.xlarge | db.m2.xlarge |db.m2.2xlarge | db.m2.4xlarge | db.m3.medium | db.m3.large | db.m3.xlarge | db.m3.2xlarge | db.m4.large | db.m4.xlarge | db.m4.2xlarge | db.m4.4xlarge | db.m4.10xlarge | db.r3.large | db.r3.xlarge | db.r3.2xlarge | db.r3.4xlarge | db.r3.8xlarge | db.t2.micro | db.t2.small | db.t2.medium | db.t2.large

" + "documentation":"

The compute and memory capacity of the DB instance, for example, db.m4.large. Not all DB instance classes are available in all AWS Regions, or for all database engines. For the full list of DB instance classes, and availability for your engine, see DB Instance Class in the Amazon RDS User Guide.

" }, "Engine":{ "shape":"String", - "documentation":"

The name of the database engine to be used for this instance.

Not every database engine is available for every AWS region.

Valid Values:

" + "documentation":"

The name of the database engine to be used for this instance.

Not every database engine is available for every AWS Region.

Valid Values:

" }, "MasterUsername":{ "shape":"String", - "documentation":"

The name for the master database user.

Amazon Aurora

Not applicable. You specify the name for the master database user when you create your DB cluster.

MariaDB

Constraints:

Microsoft SQL Server

Constraints:

MySQL

Constraints:

Oracle

Constraints:

PostgreSQL

Constraints:

" + "documentation":"

The name for the master user.

Amazon Aurora

Not applicable. The name for the master user is managed by the DB cluster. For more information, see CreateDBCluster.

MariaDB

Constraints:

Microsoft SQL Server

Constraints:

MySQL

Constraints:

Oracle

Constraints:

PostgreSQL

Constraints:

" }, "MasterUserPassword":{ "shape":"String", - "documentation":"

The password for the master database user. Can be any printable ASCII character except \"/\", \"\"\", or \"@\".

Amazon Aurora

Not applicable. You specify the password for the master database user when you create your DB cluster.

MariaDB

Constraints: Must contain from 8 to 41 characters.

Microsoft SQL Server

Constraints: Must contain from 8 to 128 characters.

MySQL

Constraints: Must contain from 8 to 41 characters.

Oracle

Constraints: Must contain from 8 to 30 characters.

PostgreSQL

Constraints: Must contain from 8 to 128 characters.

" + "documentation":"

The password for the master user. The password can include any printable ASCII character except \"/\", \"\"\", or \"@\".

Amazon Aurora

Not applicable. The password for the master user is managed by the DB cluster. For more information, see CreateDBCluster.

MariaDB

Constraints: Must contain from 8 to 41 characters.

Microsoft SQL Server

Constraints: Must contain from 8 to 128 characters.

MySQL

Constraints: Must contain from 8 to 41 characters.

Oracle

Constraints: Must contain from 8 to 30 characters.

PostgreSQL

Constraints: Must contain from 8 to 128 characters.

" }, "DBSecurityGroups":{ "shape":"DBSecurityGroupNameList", @@ -2317,11 +2366,11 @@ }, "VpcSecurityGroupIds":{ "shape":"VpcSecurityGroupIdList", - "documentation":"

A list of EC2 VPC security groups to associate with this DB instance.

Default: The default EC2 VPC security group for the DB subnet group's VPC.

" + "documentation":"

A list of EC2 VPC security groups to associate with this DB instance.

Amazon Aurora

Not applicable. The associated list of EC2 VPC security groups is managed by the DB cluster. For more information, see CreateDBCluster.

Default: The default EC2 VPC security group for the DB subnet group's VPC.

" }, "AvailabilityZone":{ "shape":"String", - "documentation":"

The EC2 Availability Zone that the database instance will be created in. For information on regions and Availability Zones, see Regions and Availability Zones.

Default: A random, system-chosen Availability Zone in the endpoint's region.

Example: us-east-1d

Constraint: The AvailabilityZone parameter cannot be specified if the MultiAZ parameter is set to true. The specified Availability Zone must be in the same region as the current endpoint.

" + "documentation":"

The EC2 Availability Zone that the DB instance is created in. For information on AWS Regions and Availability Zones, see Regions and Availability Zones.

Default: A random, system-chosen Availability Zone in the endpoint's AWS Region.

Example: us-east-1d

Constraint: The AvailabilityZone parameter can't be specified if the MultiAZ parameter is set to true. The specified Availability Zone must be in the same AWS Region as the current endpoint.

" }, "DBSubnetGroupName":{ "shape":"String", @@ -2329,19 +2378,19 @@ }, "PreferredMaintenanceWindow":{ "shape":"String", - "documentation":"

The weekly time range during which system maintenance can occur, in Universal Coordinated Time (UTC). For more information, see DB Instance Maintenance.

Format: ddd:hh24:mi-ddd:hh24:mi

Default: A 30-minute window selected at random from an 8-hour block of time per region, occurring on a random day of the week. To see the time blocks available, see Adjusting the Preferred Maintenance Window in the Amazon RDS User Guide.

Valid Days: Mon, Tue, Wed, Thu, Fri, Sat, Sun

Constraints: Minimum 30-minute window.

" + "documentation":"

The time range each week during which system maintenance can occur, in Universal Coordinated Time (UTC). For more information, see Amazon RDS Maintenance Window.

Format: ddd:hh24:mi-ddd:hh24:mi

The default is a 30-minute window selected at random from an 8-hour block of time for each AWS Region, occurring on a random day of the week.

Valid Days: Mon, Tue, Wed, Thu, Fri, Sat, Sun.

Constraints: Minimum 30-minute window.

" }, "DBParameterGroupName":{ "shape":"String", - "documentation":"

The name of the DB parameter group to associate with this DB instance. If this argument is omitted, the default DBParameterGroup for the specified engine will be used.

Constraints:

" + "documentation":"

The name of the DB parameter group to associate with this DB instance. If this argument is omitted, the default DBParameterGroup for the specified engine is used.

Constraints:

" }, "BackupRetentionPeriod":{ "shape":"IntegerOptional", - "documentation":"

The number of days for which automated backups are retained. Setting this parameter to a positive number enables backups. Setting this parameter to 0 disables automated backups.

Default: 1

Constraints:

" + "documentation":"

The number of days for which automated backups are retained. Setting this parameter to a positive number enables backups. Setting this parameter to 0 disables automated backups.

Amazon Aurora

Not applicable. The retention period for automated backups is managed by the DB cluster. For more information, see CreateDBCluster.

Default: 1

Constraints:

" }, "PreferredBackupWindow":{ "shape":"String", - "documentation":"

The daily time range during which automated backups are created if automated backups are enabled, using the BackupRetentionPeriod parameter. For more information, see DB Instance Backups.

Default: A 30-minute window selected at random from an 8-hour block of time per region. To see the time blocks available, see Adjusting the Preferred DB Instance Maintenance Window.

Constraints:

" + "documentation":"

The daily time range during which automated backups are created if automated backups are enabled, using the BackupRetentionPeriod parameter. For more information, see The Backup Window.

Amazon Aurora

Not applicable. The daily time range for creating automated backups is managed by the DB cluster. For more information, see CreateDBCluster.

The default is a 30-minute window selected at random from an 8-hour block of time for each AWS Region. To see the time blocks available, see Adjusting the Preferred DB Instance Maintenance Window.

Constraints:

" }, "Port":{ "shape":"IntegerOptional", @@ -2349,15 +2398,15 @@ }, "MultiAZ":{ "shape":"BooleanOptional", - "documentation":"

Specifies if the DB instance is a Multi-AZ deployment. You cannot set the AvailabilityZone parameter if the MultiAZ parameter is set to true.

" + "documentation":"

Specifies if the DB instance is a Multi-AZ deployment. You can't set the AvailabilityZone parameter if the MultiAZ parameter is set to true.

" }, "EngineVersion":{ "shape":"String", - "documentation":"

The version number of the database engine to use.

The following are the database engines and major and minor versions that are available with Amazon RDS. Not every database engine is available for every AWS region.

Amazon Aurora

MariaDB

Microsoft SQL Server 2016

Microsoft SQL Server 2014

Microsoft SQL Server 2012

Microsoft SQL Server 2008 R2

MySQL

Oracle 12c

Oracle 11g

PostgreSQL

" + "documentation":"

The version number of the database engine to use.

The following are the database engines and major and minor versions that are available with Amazon RDS. Not every database engine is available for every AWS Region.

Amazon Aurora

Not applicable. The version number of the database engine to be used by the DB instance is managed by the DB cluster. For more information, see CreateDBCluster.

MariaDB

Microsoft SQL Server 2016

Microsoft SQL Server 2014

Microsoft SQL Server 2012

Microsoft SQL Server 2008 R2

MySQL

Oracle 12c

Oracle 11g

PostgreSQL

" }, "AutoMinorVersionUpgrade":{ "shape":"BooleanOptional", - "documentation":"

Indicates that minor engine upgrades will be applied automatically to the DB instance during the maintenance window.

Default: true

" + "documentation":"

Indicates that minor engine upgrades are applied automatically to the DB instance during the maintenance window.

Default: true

" }, "LicenseModel":{ "shape":"String", @@ -2365,19 +2414,19 @@ }, "Iops":{ "shape":"IntegerOptional", - "documentation":"

The amount of Provisioned IOPS (input/output operations per second) to be initially allocated for the DB instance.

Constraints: Must be a multiple between 3 and 10 of the storage amount for the DB instance. Must also be an integer multiple of 1000. For example, if the size of your DB instance is 500 GB, then your Iops value can be 2000, 3000, 4000, or 5000.

" + "documentation":"

The amount of Provisioned IOPS (input/output operations per second) to be initially allocated for the DB instance. For information about valid Iops values, see Amazon RDS Provisioned IOPS Storage to Improve Performance.

Constraints: Must be a multiple between 3 and 10 of the storage amount for the DB instance. Must also be an integer multiple of 1000. For example, if the size of your DB instance is 500 GB, then your Iops value can be 2000, 3000, 4000, or 5000.
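
For instance, with 500 GB of allocated storage the 3-10x rule allows an Iops value of 2000, 3000, 4000, or 5000; a hedged boto3 sketch using io1 storage and placeholder names:

import boto3

rds = boto3.client("rds")

# 500 GB * 10 = 5000 IOPS is the upper bound allowed by the 3-10x rule.
rds.create_db_instance(
    DBInstanceIdentifier="mydbinstance",
    DBInstanceClass="db.m4.large",
    Engine="mysql",
    AllocatedStorage=500,
    StorageType="io1",     # io1 requires a value for Iops
    Iops=5000,
    MasterUsername="admin",
    MasterUserPassword="example-password",  # placeholder
)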

" }, "OptionGroupName":{ "shape":"String", - "documentation":"

Indicates that the DB instance should be associated with the specified option group.

Permanent options, such as the TDE option for Oracle Advanced Security TDE, cannot be removed from an option group, and that option group cannot be removed from a DB instance once it is associated with a DB instance

" + "documentation":"

Indicates that the DB instance should be associated with the specified option group.

Permanent options, such as the TDE option for Oracle Advanced Security TDE, can't be removed from an option group, and that option group can't be removed from a DB instance once it is associated with a DB instance.

" }, "CharacterSetName":{ "shape":"String", - "documentation":"

For supported engines, indicates that the DB instance should be associated with the specified CharacterSet.

" + "documentation":"

For supported engines, indicates that the DB instance should be associated with the specified CharacterSet.

Amazon Aurora

Not applicable. The character set is managed by the DB cluster. For more information, see CreateDBCluster.

" }, "PubliclyAccessible":{ "shape":"BooleanOptional", - "documentation":"

Specifies the accessibility options for the DB instance. A value of true specifies an Internet-facing instance with a publicly resolvable DNS name, which resolves to a public IP address. A value of false specifies an internal instance with a DNS name that resolves to a private IP address.

Default: The default behavior varies depending on whether a VPC has been requested or not. The following list shows the default behavior in each case.

If no DB subnet group has been specified as part of the request and the PubliclyAccessible value has not been set, the DB instance will be publicly accessible. If a specific DB subnet group has been specified as part of the request and the PubliclyAccessible value has not been set, the DB instance will be private.

" + "documentation":"

Specifies the accessibility options for the DB instance. A value of true specifies an Internet-facing instance with a publicly resolvable DNS name, which resolves to a public IP address. A value of false specifies an internal instance with a DNS name that resolves to a private IP address.

Default: The default behavior varies depending on whether a VPC has been requested or not. The following list shows the default behavior in each case.

If no DB subnet group has been specified as part of the request and the PubliclyAccessible value has not been set, the DB instance is publicly accessible. If a specific DB subnet group has been specified as part of the request and the PubliclyAccessible value has not been set, the DB instance is private.

" }, "Tags":{"shape":"TagList"}, "DBClusterIdentifier":{ @@ -2386,23 +2435,23 @@ }, "StorageType":{ "shape":"String", - "documentation":"

Specifies the storage type to be associated with the DB instance.

Valid values: standard | gp2 | io1

If you specify io1, you must also include a value for the Iops parameter.

Default: io1 if the Iops parameter is specified; otherwise standard

" + "documentation":"

Specifies the storage type to be associated with the DB instance.

Valid values: standard | gp2 | io1

If you specify io1, you must also include a value for the Iops parameter.

Default: io1 if the Iops parameter is specified, otherwise standard

" }, "TdeCredentialArn":{ "shape":"String", - "documentation":"

The ARN from the Key Store with which to associate the instance for TDE encryption.

" + "documentation":"

The ARN from the key store with which to associate the instance for TDE encryption.

" }, "TdeCredentialPassword":{ "shape":"String", - "documentation":"

The password for the given ARN from the Key Store in order to access the device.

" + "documentation":"

The password for the given ARN from the key store in order to access the device.

" }, "StorageEncrypted":{ "shape":"BooleanOptional", - "documentation":"

Specifies whether the DB instance is encrypted.

Default: false

" + "documentation":"

Specifies whether the DB instance is encrypted.

Amazon Aurora

Not applicable. The encryption for DB instances is managed by the DB cluster. For more information, see CreateDBCluster.

Default: false

" }, "KmsKeyId":{ "shape":"String", - "documentation":"

The KMS key identifier for an encrypted DB instance.

The KMS key identifier is the Amazon Resource Name (ARN) for the KMS encryption key. If you are creating a DB instance with the same AWS account that owns the KMS encryption key used to encrypt the new DB instance, then you can use the KMS key alias instead of the ARN for the KM encryption key.

If the StorageEncrypted parameter is true, and you do not specify a value for the KmsKeyId parameter, then Amazon RDS will use your default encryption key. AWS KMS creates the default encryption key for your AWS account. Your AWS account has a different default encryption key for each AWS region.

" + "documentation":"

The AWS KMS key identifier for an encrypted DB instance.

The KMS key identifier is the Amazon Resource Name (ARN) for the KMS encryption key. If you are creating a DB instance with the same AWS account that owns the KMS encryption key used to encrypt the new DB instance, then you can use the KMS key alias instead of the ARN for the KMS encryption key.

Amazon Aurora

Not applicable. The KMS key identifier is managed by the DB cluster. For more information, see CreateDBCluster.

If the StorageEncrypted parameter is true, and you do not specify a value for the KmsKeyId parameter, then Amazon RDS will use your default encryption key. AWS KMS creates the default encryption key for your AWS account. Your AWS account has a different default encryption key for each AWS Region.

" }, "Domain":{ "shape":"String", @@ -2410,7 +2459,7 @@ }, "CopyTagsToSnapshot":{ "shape":"BooleanOptional", - "documentation":"

True to copy all tags from the DB instance to snapshots of the DB instance; otherwise false. The default is false.

" + "documentation":"

True to copy all tags from the DB instance to snapshots of the DB instance, and otherwise false. The default is false.

" }, "MonitoringInterval":{ "shape":"IntegerOptional", @@ -2418,7 +2467,7 @@ }, "MonitoringRoleArn":{ "shape":"String", - "documentation":"

The ARN for the IAM role that permits RDS to send enhanced monitoring metrics to CloudWatch Logs. For example, arn:aws:iam:123456789012:role/emaccess. For information on creating a monitoring role, go to Setting Up and Enabling Enhanced Monitoring.

If MonitoringInterval is set to a value other than 0, then you must supply a MonitoringRoleArn value.

" + "documentation":"

The ARN for the IAM role that permits RDS to send enhanced monitoring metrics to Amazon CloudWatch Logs. For example, arn:aws:iam:123456789012:role/emaccess. For information on creating a monitoring role, go to Setting Up and Enabling Enhanced Monitoring.

If MonitoringInterval is set to a value other than 0, then you must supply a MonitoringRoleArn value.

" }, "DomainIAMRoleName":{ "shape":"String", @@ -2434,7 +2483,15 @@ }, "EnableIAMDatabaseAuthentication":{ "shape":"BooleanOptional", - "documentation":"

True to enable mapping of AWS Identity and Access Management (IAM) accounts to database accounts; otherwise false.

You can enable IAM database authentication for the following database engines:

Default: false

" + "documentation":"

True to enable mapping of AWS Identity and Access Management (IAM) accounts to database accounts, and otherwise false.

You can enable IAM database authentication for the following database engines:

Amazon Aurora

Not applicable. Mapping AWS IAM accounts to database accounts is managed by the DB cluster. For more information, see CreateDBCluster.

MySQL

Default: false

" + }, + "EnablePerformanceInsights":{ + "shape":"BooleanOptional", + "documentation":"

True to enable Performance Insights for the DB instance, and otherwise false.

" + }, + "PerformanceInsightsKMSKeyId":{ + "shape":"String", + "documentation":"

The AWS KMS key identifier for encryption of Performance Insights data. The KMS key ID is the Amazon Resource Name (ARN), KMS key identifier, or the KMS key alias for the KMS encryption key.
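
As an illustration only (not part of this service model), the CreateDBInstance settings described above translate directly into a single request. A minimal boto3 sketch, assuming placeholder identifiers, a placeholder password, and the account-default KMS alias; whether Performance Insights is available depends on the engine:

import boto3

rds = boto3.client("rds", region_name="us-east-1")

rds.create_db_instance(
    DBInstanceIdentifier="mydbinstance",        # placeholder
    DBInstanceClass="db.m4.large",
    Engine="mysql",
    EngineVersion="5.7.17",                     # not applicable for Aurora; the DB cluster manages it
    AllocatedStorage=100,
    MasterUsername="admin",
    MasterUserPassword="example-password",      # placeholder
    MultiAZ=True,                               # AvailabilityZone must be omitted when MultiAZ is true
    StorageType="io1",
    Iops=1000,                                  # io1 requires Iops; 1000 is 10x the 100 GB allocation and a multiple of 1000
    StorageEncrypted=True,
    KmsKeyId="alias/aws/rds",                   # alias form is usable only within the key-owning account
    EnablePerformanceInsights=True,
    PerformanceInsightsKMSKeyId="alias/aws/rds",
)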

" } }, "documentation":"

" @@ -2452,15 +2509,15 @@ }, "SourceDBInstanceIdentifier":{ "shape":"String", - "documentation":"

The identifier of the DB instance that will act as the source for the Read Replica. Each DB instance can have up to five Read Replicas.

Constraints:

" + "documentation":"

The identifier of the DB instance that will act as the source for the Read Replica. Each DB instance can have up to five Read Replicas.

Constraints:

" }, "DBInstanceClass":{ "shape":"String", - "documentation":"

The compute and memory capacity of the Read Replica. Note that not all instance classes are available in all regions for all DB engines.

Valid Values: db.m1.small | db.m1.medium | db.m1.large | db.m1.xlarge | db.m2.xlarge |db.m2.2xlarge | db.m2.4xlarge | db.m3.medium | db.m3.large | db.m3.xlarge | db.m3.2xlarge | db.m4.large | db.m4.xlarge | db.m4.2xlarge | db.m4.4xlarge | db.m4.10xlarge | db.r3.large | db.r3.xlarge | db.r3.2xlarge | db.r3.4xlarge | db.r3.8xlarge | db.t2.micro | db.t2.small | db.t2.medium | db.t2.large

Default: Inherits from the source DB instance.

" + "documentation":"

The compute and memory capacity of the Read Replica, for example, db.m4.large. Not all DB instance classes are available in all AWS Regions, or for all database engines. For the full list of DB instance classes, and availability for your engine, see DB Instance Class in the Amazon RDS User Guide.

Default: Inherits from the source DB instance.

" }, "AvailabilityZone":{ "shape":"String", - "documentation":"

The Amazon EC2 Availability Zone that the Read Replica will be created in.

Default: A random, system-chosen Availability Zone in the endpoint's region.

Example: us-east-1d

" + "documentation":"

The Amazon EC2 Availability Zone that the Read Replica is created in.

Default: A random, system-chosen Availability Zone in the endpoint's AWS Region.

Example: us-east-1d

" }, "Port":{ "shape":"IntegerOptional", @@ -2468,7 +2525,7 @@ }, "AutoMinorVersionUpgrade":{ "shape":"BooleanOptional", - "documentation":"

Indicates that minor engine upgrades will be applied automatically to the Read Replica during the maintenance window.

Default: Inherits from the source DB instance

" + "documentation":"

Indicates that minor engine upgrades are applied automatically to the Read Replica during the maintenance window.

Default: Inherits from the source DB instance

" }, "Iops":{ "shape":"IntegerOptional", @@ -2476,24 +2533,24 @@ }, "OptionGroupName":{ "shape":"String", - "documentation":"

The option group the DB instance will be associated with. If omitted, the default option group for the engine specified will be used.

" + "documentation":"

The option group the DB instance is associated with. If omitted, the default option group for the engine specified is used.

" }, "PubliclyAccessible":{ "shape":"BooleanOptional", - "documentation":"

Specifies the accessibility options for the DB instance. A value of true specifies an Internet-facing instance with a publicly resolvable DNS name, which resolves to a public IP address. A value of false specifies an internal instance with a DNS name that resolves to a private IP address.

Default: The default behavior varies depending on whether a VPC has been requested or not. The following list shows the default behavior in each case.

If no DB subnet group has been specified as part of the request and the PubliclyAccessible value has not been set, the DB instance will be publicly accessible. If a specific DB subnet group has been specified as part of the request and the PubliclyAccessible value has not been set, the DB instance will be private.

" + "documentation":"

Specifies the accessibility options for the DB instance. A value of true specifies an Internet-facing instance with a publicly resolvable DNS name, which resolves to a public IP address. A value of false specifies an internal instance with a DNS name that resolves to a private IP address.

Default: The default behavior varies depending on whether a VPC has been requested or not. The following list shows the default behavior in each case.

If no DB subnet group has been specified as part of the request and the PubliclyAccessible value has not been set, the DB instance is publicly accessible. If a specific DB subnet group has been specified as part of the request and the PubliclyAccessible value has not been set, the DB instance is private.

" }, "Tags":{"shape":"TagList"}, "DBSubnetGroupName":{ "shape":"String", - "documentation":"

Specifies a DB subnet group for the DB instance. The new DB instance will be created in the VPC associated with the DB subnet group. If no DB subnet group is specified, then the new DB instance is not created in a VPC.

Constraints:

Constraints: Must contain no more than 255 alphanumeric characters, periods, underscores, spaces, or hyphens. Must not be default.

Example: mySubnetgroup

" + "documentation":"

Specifies a DB subnet group for the DB instance. The new DB instance is created in the VPC associated with the DB subnet group. If no DB subnet group is specified, then the new DB instance is not created in a VPC.

Constraints:

Example: mySubnetgroup

" }, "StorageType":{ "shape":"String", - "documentation":"

Specifies the storage type to be associated with the Read Replica.

Valid values: standard | gp2 | io1

If you specify io1, you must also include a value for the Iops parameter.

Default: io1 if the Iops parameter is specified; otherwise standard

" + "documentation":"

Specifies the storage type to be associated with the Read Replica.

Valid values: standard | gp2 | io1

If you specify io1, you must also include a value for the Iops parameter.

Default: io1 if the Iops parameter is specified, otherwise standard

" }, "CopyTagsToSnapshot":{ "shape":"BooleanOptional", - "documentation":"

True to copy all tags from the Read Replica to snapshots of the Read Replica; otherwise false. The default is false.

" + "documentation":"

True to copy all tags from the Read Replica to snapshots of the Read Replica, and otherwise false. The default is false.

" }, "MonitoringInterval":{ "shape":"IntegerOptional", @@ -2501,19 +2558,27 @@ }, "MonitoringRoleArn":{ "shape":"String", - "documentation":"

The ARN for the IAM role that permits RDS to send enhanced monitoring metrics to CloudWatch Logs. For example, arn:aws:iam:123456789012:role/emaccess. For information on creating a monitoring role, go to To create an IAM role for Amazon RDS Enhanced Monitoring.

If MonitoringInterval is set to a value other than 0, then you must supply a MonitoringRoleArn value.

" + "documentation":"

The ARN for the IAM role that permits RDS to send enhanced monitoring metrics to Amazon CloudWatch Logs. For example, arn:aws:iam:123456789012:role/emaccess. For information on creating a monitoring role, go to To create an IAM role for Amazon RDS Enhanced Monitoring.

If MonitoringInterval is set to a value other than 0, then you must supply a MonitoringRoleArn value.

" }, "KmsKeyId":{ "shape":"String", - "documentation":"

The AWS KMS key ID for an encrypted Read Replica. The KMS key ID is the Amazon Resource Name (ARN), KMS key identifier, or the KMS key alias for the KMS encryption key.

If you create an unencrypted Read Replica and specify a value for the KmsKeyId parameter, Amazon RDS encrypts the target Read Replica using the specified KMS encryption key.

If you create an encrypted Read Replica from your AWS account, you can specify a value for KmsKeyId to encrypt the Read Replica with a new KMS encryption key. If you don't specify a value for KmsKeyId, then the Read Replica is encrypted with the same KMS key as the source DB instance.

If you create an encrypted Read Replica in a different AWS region, then you must specify a KMS key for the destination AWS region. KMS encryption keys are specific to the region that they are created in, and you cannot use encryption keys from one region in another region.

" + "documentation":"

The AWS KMS key ID for an encrypted Read Replica. The KMS key ID is the Amazon Resource Name (ARN), KMS key identifier, or the KMS key alias for the KMS encryption key.

If you specify this parameter when you create a Read Replica from an unencrypted DB instance, the Read Replica is encrypted.

If you create an encrypted Read Replica in the same AWS Region as the source DB instance, then you do not have to specify a value for this parameter. The Read Replica is encrypted with the same KMS key as the source DB instance.

If you create an encrypted Read Replica in a different AWS Region, then you must specify a KMS key for the destination AWS Region. KMS encryption keys are specific to the AWS Region that they are created in, and you can't use encryption keys from one AWS Region in another AWS Region.

" }, "PreSignedUrl":{ "shape":"String", - "documentation":"

The URL that contains a Signature Version 4 signed request for the CreateDBInstanceReadReplica API action in the AWS region that contains the source DB instance. The PreSignedUrl parameter must be used when encrypting a Read Replica from another AWS region.

The presigned URL must be a valid request for the CreateDBInstanceReadReplica API action that can be executed in the source region that contains the encrypted DB instance. The presigned URL request must contain the following parameter values:

To learn how to generate a Signature Version 4 signed request, see Authenticating Requests: Using Query Parameters (AWS Signature Version 4) and Signature Version 4 Signing Process.

" + "documentation":"

The URL that contains a Signature Version 4 signed request for the CreateDBInstanceReadReplica API action in the source AWS Region that contains the source DB instance.

You must specify this parameter when you create an encrypted Read Replica from another AWS Region by using the Amazon RDS API. You can specify the --source-region option instead of this parameter when you create an encrypted Read Replica from another AWS Region by using the AWS CLI.

The presigned URL must be a valid request for the CreateDBInstanceReadReplica API action that can be executed in the source AWS Region that contains the encrypted source DB instance. The presigned URL request must contain the following parameter values:

To learn how to generate a Signature Version 4 signed request, see Authenticating Requests: Using Query Parameters (AWS Signature Version 4) and Signature Version 4 Signing Process.
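
As an illustration of the cross-Region case above (not part of this service model), a minimal boto3 sketch that creates an encrypted Read Replica in a destination Region; the identifiers, account ID, and key ARN are placeholders. Like the CLI's --source-region option, boto3 accepts a SourceRegion argument and generates the Signature Version 4 presigned URL on your behalf, so PreSignedUrl is not built by hand here:

import boto3

# Client in the destination AWS Region that will host the encrypted Read Replica.
rds = boto3.client("rds", region_name="us-west-2")

rds.create_db_instance_read_replica(
    DBInstanceIdentifier="myreadreplica",
    # A cross-Region source is identified by its ARN.
    SourceDBInstanceIdentifier="arn:aws:rds:us-east-1:123456789012:db:mydbinstance",
    KmsKeyId="arn:aws:kms:us-west-2:123456789012:key/EXAMPLE-KEY-ID",  # KMS key in the destination Region
    SourceRegion="us-east-1",  # used by the SDK to build the presigned CreateDBInstanceReadReplica request
)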

" }, "EnableIAMDatabaseAuthentication":{ "shape":"BooleanOptional", - "documentation":"

True to enable mapping of AWS Identity and Access Management (IAM) accounts to database accounts; otherwise false.

You can enable IAM database authentication for the following database engines

Default: false

" + "documentation":"

True to enable mapping of AWS Identity and Access Management (IAM) accounts to database accounts, and otherwise false.

You can enable IAM database authentication for the following database engines:

Default: false

" + }, + "EnablePerformanceInsights":{ + "shape":"BooleanOptional", + "documentation":"

True to enable Performance Insights for the read replica, and otherwise false.

" + }, + "PerformanceInsightsKMSKeyId":{ + "shape":"String", + "documentation":"

The AWS KMS key identifier for encryption of Performance Insights data. The KMS key ID is the Amazon Resource Name (ARN), KMS key identifier, or the KMS key alias for the KMS encryption key.

" } } }, @@ -2539,7 +2604,7 @@ "members":{ "DBParameterGroupName":{ "shape":"String", - "documentation":"

The name of the DB parameter group.

Constraints:

This value is stored as a lowercase string.

" + "documentation":"

The name of the DB parameter group.

Constraints:

This value is stored as a lowercase string.

" }, "DBParameterGroupFamily":{ "shape":"String", @@ -2568,7 +2633,7 @@ "members":{ "DBSecurityGroupName":{ "shape":"String", - "documentation":"

The name for the DB security group. This value is stored as a lowercase string.

Constraints:

Example: mysecuritygroup

" + "documentation":"

The name for the DB security group. This value is stored as a lowercase string.

Constraints:

Example: mysecuritygroup

" }, "DBSecurityGroupDescription":{ "shape":"String", @@ -2593,11 +2658,11 @@ "members":{ "DBSnapshotIdentifier":{ "shape":"String", - "documentation":"

The identifier for the DB snapshot.

Constraints:

Example: my-snapshot-id

" + "documentation":"

The identifier for the DB snapshot.

Constraints:

Example: my-snapshot-id

" }, "DBInstanceIdentifier":{ "shape":"String", - "documentation":"

The DB instance identifier. This is the unique key that identifies a DB instance.

Constraints:

" + "documentation":"

The identifier of the DB instance that you want to create the snapshot of.

Constraints:

" }, "Tags":{"shape":"TagList"} }, @@ -2619,7 +2684,7 @@ "members":{ "DBSubnetGroupName":{ "shape":"String", - "documentation":"

The name for the DB subnet group. This value is stored as a lowercase string.

Constraints: Must contain no more than 255 alphanumeric characters, periods, underscores, spaces, or hyphens. Must not be default.

Example: mySubnetgroup

" + "documentation":"

The name for the DB subnet group. This value is stored as a lowercase string.

Constraints: Must contain no more than 255 letters, numbers, periods, underscores, spaces, or hyphens. Must not be default.

Example: mySubnetgroup

" }, "DBSubnetGroupDescription":{ "shape":"String", @@ -2656,7 +2721,7 @@ }, "SourceType":{ "shape":"String", - "documentation":"

The type of source that will be generating the events. For example, if you want to be notified of events generated by a DB instance, you would set this parameter to db-instance. if this value is not specified, all events are returned.

Valid values: db-instance | db-cluster | db-parameter-group | db-security-group | db-snapshot | db-cluster-snapshot

" + "documentation":"

The type of source that is generating the events. For example, if you want to be notified of events generated by a DB instance, you would set this parameter to db-instance. If this value is not specified, all events are returned.

Valid values: db-instance | db-cluster | db-parameter-group | db-security-group | db-snapshot | db-cluster-snapshot

" }, "EventCategories":{ "shape":"EventCategoriesList", @@ -2664,7 +2729,7 @@ }, "SourceIds":{ "shape":"SourceIdsList", - "documentation":"

The list of identifiers of the event sources for which events will be returned. If not specified, then all sources are included in the response. An identifier must begin with a letter and must contain only ASCII letters, digits, and hyphens; it cannot end with a hyphen or contain two consecutive hyphens.

Constraints:

" + "documentation":"

The list of identifiers of the event sources for which events are returned. If not specified, then all sources are included in the response. An identifier must begin with a letter and must contain only ASCII letters, digits, and hyphens; it can't end with a hyphen or contain two consecutive hyphens.

Constraints:

" }, "Enabled":{ "shape":"BooleanOptional", @@ -2691,7 +2756,7 @@ "members":{ "OptionGroupName":{ "shape":"String", - "documentation":"

Specifies the name of the option group to be created.

Constraints:

Example: myoptiongroup

" + "documentation":"

Specifies the name of the option group to be created.

Constraints:

Example: myoptiongroup

" }, "EngineName":{ "shape":"String", @@ -2768,7 +2833,7 @@ }, "ReaderEndpoint":{ "shape":"String", - "documentation":"

The reader endpoint for the DB cluster. The reader endpoint for a DB cluster load-balances connections across the Aurora Replicas that are available in a DB cluster. As clients request new connections to the reader endpoint, Aurora distributes the connection requests among the Aurora Replicas in the DB cluster. This functionality can help balance your read workload across multiple Aurora Replicas in your DB cluster.

If a failover occurs, and the Aurora Replica that you are connected to is promoted to be the primary instance, your connection will be dropped. To continue sending your read workload to other Aurora Replicas in the cluster, you can then reconnect to the reader endpoint.

" + "documentation":"

The reader endpoint for the DB cluster. The reader endpoint for a DB cluster load-balances connections across the Aurora Replicas that are available in a DB cluster. As clients request new connections to the reader endpoint, Aurora distributes the connection requests among the Aurora Replicas in the DB cluster. This functionality can help balance your read workload across multiple Aurora Replicas in your DB cluster.

If a failover occurs, and the Aurora Replica that you are connected to is promoted to be the primary instance, your connection is dropped. To continue sending your read workload to other Aurora Replicas in the cluster, you can then reconnect to the reader endpoint.

" }, "MultiAZ":{ "shape":"Boolean", @@ -2832,11 +2897,11 @@ }, "KmsKeyId":{ "shape":"String", - "documentation":"

If StorageEncrypted is true, the KMS key identifier for the encrypted DB cluster.

" + "documentation":"

If StorageEncrypted is true, the AWS KMS key identifier for the encrypted DB cluster.

" }, "DbClusterResourceId":{ "shape":"String", - "documentation":"

The region-unique, immutable identifier for the DB cluster. This identifier is found in AWS CloudTrail log entries whenever the KMS key for the DB cluster is accessed.

" + "documentation":"

The AWS Region-unique, immutable identifier for the DB cluster. This identifier is found in AWS CloudTrail log entries whenever the AWS KMS key for the DB cluster is accessed.

" }, "DBClusterArn":{ "shape":"String", @@ -2848,7 +2913,7 @@ }, "IAMDatabaseAuthenticationEnabled":{ "shape":"Boolean", - "documentation":"

True if mapping of AWS Identity and Access Management (IAM) accounts to database accounts is enabled; otherwise false.

" + "documentation":"

True if mapping of AWS Identity and Access Management (IAM) accounts to database accounts is enabled, and otherwise false.

" }, "CloneGroupId":{ "shape":"String", @@ -2859,7 +2924,7 @@ "documentation":"

Specifies the time when the DB cluster was created, in Universal Coordinated Time (UTC).

" } }, - "documentation":"

Contains the result of a successful invocation of the following actions:

This data type is used as a response element in the DescribeDBClusters action.

", + "documentation":"

Contains the details of an Amazon RDS DB cluster.

This data type is used as a response element in the DescribeDBClusters action.

", "wrapper":true }, "DBClusterAlreadyExistsFault":{ @@ -2978,7 +3043,7 @@ "documentation":"

The Amazon Resource Name (ARN) for the DB cluster parameter group.

" } }, - "documentation":"

Contains the result of a successful invocation of the CreateDBClusterParameterGroup or CopyDBClusterParameterGroup action.

This data type is used as a request parameter in the DeleteDBClusterParameterGroup action, and as a response element in the DescribeDBClusterParameterGroups action.

", + "documentation":"

Contains the details of an Amazon RDS DB cluster parameter group.

This data type is used as a response element in the DescribeDBClusterParameterGroups action.

", "wrapper":true }, "DBClusterParameterGroupDetails":{ @@ -3007,7 +3072,7 @@ "members":{ "DBClusterParameterGroupName":{ "shape":"String", - "documentation":"

The name of the DB cluster parameter group.

Constraints:

This value is stored as a lowercase string.

" + "documentation":"

The name of the DB cluster parameter group.

Constraints:

This value is stored as a lowercase string.

" } }, "documentation":"

" @@ -3176,7 +3241,7 @@ }, "KmsKeyId":{ "shape":"String", - "documentation":"

If StorageEncrypted is true, the KMS key identifier for the encrypted DB cluster snapshot.

" + "documentation":"

If StorageEncrypted is true, the AWS KMS key identifier for the encrypted DB cluster snapshot.

" }, "DBClusterSnapshotArn":{ "shape":"String", @@ -3184,14 +3249,14 @@ }, "SourceDBClusterSnapshotArn":{ "shape":"String", - "documentation":"

If the DB cluster snapshot was copied from a source DB cluster snapshot, the Amazon Resource Name (ARN) for the source DB cluster snapshot; otherwise, a null value.

" + "documentation":"

If the DB cluster snapshot was copied from a source DB cluster snapshot, the Amazon Resource Name (ARN) for the source DB cluster snapshot, otherwise, a null value.

" }, "IAMDatabaseAuthenticationEnabled":{ "shape":"Boolean", - "documentation":"

True if mapping of AWS Identity and Access Management (IAM) accounts to database accounts is enabled; otherwise false.

" + "documentation":"

True if mapping of AWS Identity and Access Management (IAM) accounts to database accounts is enabled, and otherwise false.

" } }, - "documentation":"

Contains the result of a successful invocation of the following actions:

This data type is used as a response element in the DescribeDBClusterSnapshots action.

", + "documentation":"

Contains the details for an Amazon RDS DB cluster snapshot.

This data type is used as a response element in the DescribeDBClusterSnapshots action.

", "wrapper":true }, "DBClusterSnapshotAlreadyExistsFault":{ @@ -3463,11 +3528,11 @@ }, "PubliclyAccessible":{ "shape":"Boolean", - "documentation":"

Specifies the accessibility options for the DB instance. A value of true specifies an Internet-facing instance with a publicly resolvable DNS name, which resolves to a public IP address. A value of false specifies an internal instance with a DNS name that resolves to a private IP address.

Default: The default behavior varies depending on whether a VPC has been requested or not. The following list shows the default behavior in each case.

If no DB subnet group has been specified as part of the request and the PubliclyAccessible value has not been set, the DB instance will be publicly accessible. If a specific DB subnet group has been specified as part of the request and the PubliclyAccessible value has not been set, the DB instance will be private.

" + "documentation":"

Specifies the accessibility options for the DB instance. A value of true specifies an Internet-facing instance with a publicly resolvable DNS name, which resolves to a public IP address. A value of false specifies an internal instance with a DNS name that resolves to a private IP address.

Default: The default behavior varies depending on whether a VPC has been requested or not. The following list shows the default behavior in each case.

If no DB subnet group has been specified as part of the request and the PubliclyAccessible value has not been set, the DB instance is publicly accessible. If a specific DB subnet group has been specified as part of the request and the PubliclyAccessible value has not been set, the DB instance is private.

" }, "StatusInfos":{ "shape":"DBInstanceStatusInfoList", - "documentation":"

The status of a Read Replica. If the instance is not a Read Replica, this will be blank.

" + "documentation":"

The status of a Read Replica. If the instance is not a Read Replica, this is blank.

" }, "StorageType":{ "shape":"String", @@ -3491,11 +3556,11 @@ }, "KmsKeyId":{ "shape":"String", - "documentation":"

If StorageEncrypted is true, the KMS key identifier for the encrypted DB instance.

" + "documentation":"

If StorageEncrypted is true, the AWS KMS key identifier for the encrypted DB instance.

" }, "DbiResourceId":{ "shape":"String", - "documentation":"

The region-unique, immutable identifier for the DB instance. This identifier is found in AWS CloudTrail log entries whenever the KMS key for the DB instance is accessed.

" + "documentation":"

The AWS Region-unique, immutable identifier for the DB instance. This identifier is found in AWS CloudTrail log entries whenever the AWS KMS key for the DB instance is accessed.

" }, "CACertificateIdentifier":{ "shape":"String", @@ -3519,7 +3584,7 @@ }, "MonitoringRoleArn":{ "shape":"String", - "documentation":"

The ARN for the IAM role that permits RDS to send Enhanced Monitoring metrics to CloudWatch Logs.

" + "documentation":"

The ARN for the IAM role that permits RDS to send Enhanced Monitoring metrics to Amazon CloudWatch Logs.

" }, "PromotionTier":{ "shape":"IntegerOptional", @@ -3535,10 +3600,18 @@ }, "IAMDatabaseAuthenticationEnabled":{ "shape":"Boolean", - "documentation":"

True if mapping of AWS Identity and Access Management (IAM) accounts to database accounts is enabled; otherwise false.

IAM database authentication can be enabled for the following database engines

" + "documentation":"

True if mapping of AWS Identity and Access Management (IAM) accounts to database accounts is enabled, and otherwise false.

IAM database authentication can be enabled for the following database engines:

" + }, + "PerformanceInsightsEnabled":{ + "shape":"BooleanOptional", + "documentation":"

True if Performance Insights is enabled for the DB instance, and otherwise false.

" + }, + "PerformanceInsightsKMSKeyId":{ + "shape":"String", + "documentation":"

The AWS KMS key identifier for encryption of Performance Insights data. The KMS key ID is the Amazon Resource Name (ARN), KMS key identifier, or the KMS key alias for the KMS encryption key.

" } }, - "documentation":"

Contains the result of a successful invocation of the following actions:

This data type is used as a response element in the DescribeDBInstances action.

", + "documentation":"

Contains the details of an Amazon RDS DB instance.

This data type is used as a response element in the DescribeDBInstances action.

", "wrapper":true }, "DBInstanceAlreadyExistsFault":{ @@ -3647,7 +3720,7 @@ "documentation":"

The Amazon Resource Name (ARN) for the DB parameter group.

" } }, - "documentation":"

Contains the result of a successful invocation of the CreateDBParameterGroup action.

This data type is used as a request parameter in the DeleteDBParameterGroup action, and as a response element in the DescribeDBParameterGroups action.

", + "documentation":"

Contains the details of an Amazon RDS DB parameter group.

This data type is used as a response element in the DescribeDBParameterGroups action.

", "wrapper":true }, "DBParameterGroupAlreadyExistsFault":{ @@ -3784,7 +3857,7 @@ "documentation":"

The Amazon Resource Name (ARN) for the DB security group.

" } }, - "documentation":"

Contains the result of a successful invocation of the following actions:

This data type is used as a response element in the DescribeDBSecurityGroups action.

", + "documentation":"

Contains the details for an Amazon RDS DB security group.

This data type is used as a response element in the DescribeDBSecurityGroups action.

", "wrapper":true }, "DBSecurityGroupAlreadyExistsFault":{ @@ -3957,11 +4030,11 @@ }, "SourceRegion":{ "shape":"String", - "documentation":"

The region that the DB snapshot was created in or copied from.

" + "documentation":"

The AWS Region that the DB snapshot was created in or copied from.

" }, "SourceDBSnapshotIdentifier":{ "shape":"String", - "documentation":"

The DB snapshot Arn that the DB snapshot was copied from. It only has value in case of cross customer or cross region copy.

" + "documentation":"

The DB snapshot Amazon Resource Name (ARN) that the DB snapshot was copied from. It only has a value in the case of a cross-customer or cross-Region copy.

" }, "StorageType":{ "shape":"String", @@ -3977,7 +4050,7 @@ }, "KmsKeyId":{ "shape":"String", - "documentation":"

If Encrypted is true, the KMS key identifier for the encrypted DB snapshot.

" + "documentation":"

If Encrypted is true, the AWS KMS key identifier for the encrypted DB snapshot.

" }, "DBSnapshotArn":{ "shape":"String", @@ -3989,10 +4062,10 @@ }, "IAMDatabaseAuthenticationEnabled":{ "shape":"Boolean", - "documentation":"

True if mapping of AWS Identity and Access Management (IAM) accounts to database accounts is enabled; otherwise false.

" + "documentation":"

True if mapping of AWS Identity and Access Management (IAM) accounts to database accounts is enabled, and otherwise false.

" } }, - "documentation":"

Contains the result of a successful invocation of the following actions:

This data type is used as a response element in the DescribeDBSnapshots action.

", + "documentation":"

Contains the details of an Amazon RDS DB snapshot.

This data type is used as a response element in the DescribeDBSnapshots action.

", "wrapper":true }, "DBSnapshotAlreadyExistsFault":{ @@ -4105,7 +4178,7 @@ "documentation":"

The Amazon Resource Name (ARN) for the DB subnet group.

" } }, - "documentation":"

Contains the result of a successful invocation of the following actions:

This data type is used as a response element in the DescribeDBSubnetGroups action.

", + "documentation":"

Contains the details of an Amazon RDS DB subnet group.

This data type is used as a response element in the DescribeDBSubnetGroups action.

", "wrapper":true }, "DBSubnetGroupAlreadyExistsFault":{ @@ -4219,7 +4292,7 @@ "members":{ "DBClusterIdentifier":{ "shape":"String", - "documentation":"

The DB cluster identifier for the DB cluster to be deleted. This parameter isn't case-sensitive.

Constraints:

" + "documentation":"

The DB cluster identifier for the DB cluster to be deleted. This parameter isn't case-sensitive.

Constraints:

" }, "SkipFinalSnapshot":{ "shape":"Boolean", @@ -4227,7 +4300,7 @@ }, "FinalDBSnapshotIdentifier":{ "shape":"String", - "documentation":"

The DB cluster snapshot identifier of the new DB cluster snapshot created when SkipFinalSnapshot is set to false.

Specifying this parameter and also setting the SkipFinalShapshot parameter to true results in an error.

Constraints:

" + "documentation":"

The DB cluster snapshot identifier of the new DB cluster snapshot created when SkipFinalSnapshot is set to false.

Specifying this parameter and also setting the SkipFinalSnapshot parameter to true results in an error.

Constraints:

" } }, "documentation":"

" @@ -4238,7 +4311,7 @@ "members":{ "DBClusterParameterGroupName":{ "shape":"String", - "documentation":"

The name of the DB cluster parameter group.

Constraints:

" + "documentation":"

The name of the DB cluster parameter group.

Constraints:

" } }, "documentation":"

" @@ -4272,7 +4345,7 @@ "members":{ "DBInstanceIdentifier":{ "shape":"String", - "documentation":"

The DB instance identifier for the DB instance to be deleted. This parameter isn't case-sensitive.

Constraints:

" + "documentation":"

The DB instance identifier for the DB instance to be deleted. This parameter isn't case-sensitive.

Constraints:

" }, "SkipFinalSnapshot":{ "shape":"Boolean", @@ -4280,7 +4353,7 @@ }, "FinalDBSnapshotIdentifier":{ "shape":"String", - "documentation":"

The DBSnapshotIdentifier of the new DBSnapshot created when SkipFinalSnapshot is set to false.

Specifying this parameter and also setting the SkipFinalShapshot parameter to true results in an error.

Constraints:

" + "documentation":"

The DBSnapshotIdentifier of the new DBSnapshot created when SkipFinalSnapshot is set to false.

Specifying this parameter and also setting the SkipFinalSnapshot parameter to true results in an error.

Constraints:

" } }, "documentation":"

" @@ -4297,7 +4370,7 @@ "members":{ "DBParameterGroupName":{ "shape":"String", - "documentation":"

The name of the DB parameter group.

Constraints:

" + "documentation":"

The name of the DB parameter group.

Constraints:

" } }, "documentation":"

" @@ -4308,7 +4381,7 @@ "members":{ "DBSecurityGroupName":{ "shape":"String", - "documentation":"

The name of the DB security group to delete.

You cannot delete the default DB security group.

Constraints:

" + "documentation":"

The name of the DB security group to delete.

You can't delete the default DB security group.

Constraints:

" } }, "documentation":"

" @@ -4336,7 +4409,7 @@ "members":{ "DBSubnetGroupName":{ "shape":"String", - "documentation":"

The name of the database subnet group to delete.

You cannot delete the default subnet group.

Constraints:

Constraints: Must contain no more than 255 alphanumeric characters, periods, underscores, spaces, or hyphens. Must not be default.

Example: mySubnetgroup

" + "documentation":"

The name of the database subnet group to delete.

You can't delete the default subnet group.

Constraints:

Constraints: Must match the name of an existing DBSubnetGroup. Must not be default.

Example: mySubnetgroup

" } }, "documentation":"

" @@ -4364,7 +4437,7 @@ "members":{ "OptionGroupName":{ "shape":"String", - "documentation":"

The name of the option group to be deleted.

You cannot delete default option groups.

" + "documentation":"

The name of the option group to be deleted.

You can't delete default option groups.

" } }, "documentation":"

" @@ -4380,7 +4453,7 @@ "members":{ "CertificateIdentifier":{ "shape":"String", - "documentation":"

The user-supplied certificate identifier. If this parameter is specified, information for only the identified certificate is returned. This parameter isn't case-sensitive.

Constraints:

" + "documentation":"

The user-supplied certificate identifier. If this parameter is specified, information for only the identified certificate is returned. This parameter isn't case-sensitive.

Constraints:

" }, "Filters":{ "shape":"FilterList", @@ -4402,7 +4475,7 @@ "members":{ "DBClusterParameterGroupName":{ "shape":"String", - "documentation":"

The name of a specific DB cluster parameter group to return details for.

Constraints:

" + "documentation":"

The name of a specific DB cluster parameter group to return details for.

Constraints:

" }, "Filters":{ "shape":"FilterList", @@ -4425,7 +4498,7 @@ "members":{ "DBClusterParameterGroupName":{ "shape":"String", - "documentation":"

The name of a specific DB cluster parameter group to return parameter details for.

Constraints:

" + "documentation":"

The name of a specific DB cluster parameter group to return parameter details for.

Constraints:

" }, "Source":{ "shape":"String", @@ -4468,11 +4541,11 @@ "members":{ "DBClusterIdentifier":{ "shape":"String", - "documentation":"

The ID of the DB cluster to retrieve the list of DB cluster snapshots for. This parameter cannot be used in conjunction with the DBClusterSnapshotIdentifier parameter. This parameter is not case-sensitive.

Constraints:

" + "documentation":"

The ID of the DB cluster to retrieve the list of DB cluster snapshots for. This parameter can't be used in conjunction with the DBClusterSnapshotIdentifier parameter. This parameter is not case-sensitive.

Constraints:

" }, "DBClusterSnapshotIdentifier":{ "shape":"String", - "documentation":"

A specific DB cluster snapshot identifier to describe. This parameter cannot be used in conjunction with the DBClusterIdentifier parameter. This value is stored as a lowercase string.

Constraints:

" + "documentation":"

A specific DB cluster snapshot identifier to describe. This parameter can't be used in conjunction with the DBClusterIdentifier parameter. This value is stored as a lowercase string.

Constraints:

" }, "SnapshotType":{ "shape":"String", @@ -4492,11 +4565,11 @@ }, "IncludeShared":{ "shape":"Boolean", - "documentation":"

Set this value to true to include shared manual DB cluster snapshots from other AWS accounts that this AWS account has been given permission to copy or restore, otherwise set this value to false. The default is false.

You can give an AWS account permission to restore a manual DB cluster snapshot from another AWS account by the ModifyDBClusterSnapshotAttribute API action.

" + "documentation":"

True to include shared manual DB cluster snapshots from other AWS accounts that this AWS account has been given permission to copy or restore, and otherwise false. The default is false.

You can give an AWS account permission to restore a manual DB cluster snapshot from another AWS account by the ModifyDBClusterSnapshotAttribute API action.

" }, "IncludePublic":{ "shape":"Boolean", - "documentation":"

Set this value to true to include manual DB cluster snapshots that are public and can be copied or restored by any AWS account, otherwise set this value to false. The default is false. The default is false.

You can share a manual DB cluster snapshot as public by using the ModifyDBClusterSnapshotAttribute API action.

" + "documentation":"

True to include manual DB cluster snapshots that are public and can be copied or restored by any AWS account, and otherwise false. The default is false.

You can share a manual DB cluster snapshot as public by using the ModifyDBClusterSnapshotAttribute API action.

" } }, "documentation":"

" @@ -4506,7 +4579,7 @@ "members":{ "DBClusterIdentifier":{ "shape":"String", - "documentation":"

The user-supplied DB cluster identifier. If this parameter is specified, information from only the specific DB cluster is returned. This parameter isn't case-sensitive.

Constraints:

" + "documentation":"

The user-supplied DB cluster identifier. If this parameter is specified, information from only the specific DB cluster is returned. This parameter isn't case-sensitive.

Constraints:

" }, "Filters":{ "shape":"FilterList", @@ -4518,7 +4591,7 @@ }, "Marker":{ "shape":"String", - "documentation":"

An optional pagination token provided by a previous DescribeDBClusters request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by MaxRecords.

" + "documentation":"

An optional pagination token provided by a previous DescribeDBClusters request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by MaxRecords.

" } }, "documentation":"

" @@ -4536,7 +4609,7 @@ }, "DBParameterGroupFamily":{ "shape":"String", - "documentation":"

The name of a specific DB parameter group family to return details for.

Constraints:

" + "documentation":"

The name of a specific DB parameter group family to return details for.

Constraints:

" }, "Filters":{ "shape":"FilterList", @@ -4569,11 +4642,11 @@ "members":{ "DBInstanceIdentifier":{ "shape":"String", - "documentation":"

The user-supplied instance identifier. If this parameter is specified, information from only the specific DB instance is returned. This parameter isn't case-sensitive.

Constraints:

" + "documentation":"

The user-supplied instance identifier. If this parameter is specified, information from only the specific DB instance is returned. This parameter isn't case-sensitive.

Constraints:

" }, "Filters":{ "shape":"FilterList", - "documentation":"

A filter that specifies one or more DB instances to describe.

Supported filters:

" + "documentation":"

A filter that specifies one or more DB instances to describe.

Supported filters:

" }, "MaxRecords":{ "shape":"IntegerOptional", @@ -4617,7 +4690,7 @@ "members":{ "DBInstanceIdentifier":{ "shape":"String", - "documentation":"

The customer-assigned name of the DB instance that contains the log files you want to list.

Constraints:

" + "documentation":"

The customer-assigned name of the DB instance that contains the log files you want to list.

Constraints:

" }, "FilenameContains":{ "shape":"String", @@ -4665,7 +4738,7 @@ "members":{ "DBParameterGroupName":{ "shape":"String", - "documentation":"

The name of a specific DB parameter group to return details for.

Constraints:

" + "documentation":"

The name of a specific DB parameter group to return details for.

Constraints:

" }, "Filters":{ "shape":"FilterList", @@ -4688,7 +4761,7 @@ "members":{ "DBParameterGroupName":{ "shape":"String", - "documentation":"

The name of a specific DB parameter group to return details for.

Constraints:

" + "documentation":"

The name of a specific DB parameter group to return details for.

Constraints:

" }, "Source":{ "shape":"String", @@ -4752,11 +4825,11 @@ "members":{ "DBInstanceIdentifier":{ "shape":"String", - "documentation":"

The ID of the DB instance to retrieve the list of DB snapshots for. This parameter cannot be used in conjunction with DBSnapshotIdentifier. This parameter is not case-sensitive.

Constraints:

" + "documentation":"

The ID of the DB instance to retrieve the list of DB snapshots for. This parameter can't be used in conjunction with DBSnapshotIdentifier. This parameter is not case-sensitive.

Constraints:

" }, "DBSnapshotIdentifier":{ "shape":"String", - "documentation":"

A specific DB snapshot identifier to describe. This parameter cannot be used in conjunction with DBInstanceIdentifier. This value is stored as a lowercase string.

Constraints:

" + "documentation":"

A specific DB snapshot identifier to describe. This parameter can't be used in conjunction with DBInstanceIdentifier. This value is stored as a lowercase string.

Constraints:

" }, "SnapshotType":{ "shape":"String", @@ -4776,11 +4849,11 @@ }, "IncludeShared":{ "shape":"Boolean", - "documentation":"

Set this value to true to include shared manual DB snapshots from other AWS accounts that this AWS account has been given permission to copy or restore, otherwise set this value to false. The default is false.

You can give an AWS account permission to restore a manual DB snapshot from another AWS account by using the ModifyDBSnapshotAttribute API action.

" + "documentation":"

True to include shared manual DB snapshots from other AWS accounts that this AWS account has been given permission to copy or restore, and otherwise false. The default is false.

You can give an AWS account permission to restore a manual DB snapshot from another AWS account by using the ModifyDBSnapshotAttribute API action.

" }, "IncludePublic":{ "shape":"Boolean", - "documentation":"

Set this value to true to include manual DB snapshots that are public and can be copied or restored by any AWS account, otherwise set this value to false. The default is false.

You can share a manual DB snapshot as public by using the ModifyDBSnapshotAttribute API.

" + "documentation":"

True to include manual DB snapshots that are public and can be copied or restored by any AWS account, and otherwise false. The default is false.

You can share a manual DB snapshot as public by using the ModifyDBSnapshotAttribute API.
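
To make the two flags concrete (illustration only, not part of this service model), a minimal boto3 sketch that lists snapshots shared with this account while leaving out public ones:

import boto3

rds = boto3.client("rds", region_name="us-east-1")

snapshots = rds.describe_db_snapshots(
    IncludeShared=True,    # also return manual snapshots other accounts have shared with this account
    IncludePublic=False,   # leave out snapshots that have been marked public
)
for snapshot in snapshots["DBSnapshots"]:
    print(snapshot["DBSnapshotIdentifier"], snapshot["Status"])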

" } }, "documentation":"

" @@ -4870,7 +4943,7 @@ "members":{ "SourceType":{ "shape":"String", - "documentation":"

The type of source that will be generating the events.

Valid values: db-instance | db-parameter-group | db-security-group | db-snapshot

" + "documentation":"

The type of source that is generating the events.

Valid values: db-instance | db-parameter-group | db-security-group | db-snapshot

" }, "Filters":{ "shape":"FilterList", @@ -4906,7 +4979,7 @@ "members":{ "SourceIdentifier":{ "shape":"String", - "documentation":"

The identifier of the event source for which events will be returned. If not specified, then all sources are included in the response.

Constraints:

" + "documentation":"

The identifier of the event source for which events are returned. If not specified, then all sources are included in the response.

Constraints:

" }, "SourceType":{ "shape":"SourceType", @@ -4949,7 +5022,7 @@ "members":{ "EngineName":{ "shape":"String", - "documentation":"

A required parameter. Options available for the given engine name will be described.

" + "documentation":"

A required parameter. Options available for the given engine name are described.

" }, "MajorEngineVersion":{ "shape":"String", @@ -5154,7 +5227,7 @@ "members":{ "RegionName":{ "shape":"String", - "documentation":"

The source region name. For example, us-east-1.

Constraints:

" + "documentation":"

The source AWS Region name. For example, us-east-1.

Constraints:

" }, "MaxRecords":{ "shape":"IntegerOptional", @@ -5162,7 +5235,7 @@ }, "Marker":{ "shape":"String", - "documentation":"

An optional pagination token provided by a previous DescribeSourceRegions request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by MaxRecords.

" + "documentation":"

An optional pagination token provided by a previous DescribeSourceRegions request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by MaxRecords.
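
A minimal boto3 sketch of the Marker-based paging described above (illustration only, not part of this service model):

import boto3

rds = boto3.client("rds", region_name="us-east-1")

# List the AWS Regions that can act as sources for this Region, 20 records per page.
marker = None
while True:
    kwargs = {"MaxRecords": 20}
    if marker:
        kwargs["Marker"] = marker
    page = rds.describe_source_regions(**kwargs)
    for region in page["SourceRegions"]:
        print(region["RegionName"], region["Status"])
    marker = page.get("Marker")
    if not marker:
        break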

" }, "Filters":{ "shape":"FilterList", @@ -5171,6 +5244,23 @@ }, "documentation":"

" }, + "DescribeValidDBInstanceModificationsMessage":{ + "type":"structure", + "required":["DBInstanceIdentifier"], + "members":{ + "DBInstanceIdentifier":{ + "shape":"String", + "documentation":"

The customer identifier or the ARN of your DB instance.
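
A minimal boto3 sketch of the new operation (illustration only, not part of this service model); the instance identifier is a placeholder, and the response carries the ValidDBInstanceModificationsMessage referenced in the result shape below:

import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Ask which storage settings this DB instance can currently be modified to.
response = rds.describe_valid_db_instance_modifications(
    DBInstanceIdentifier="mydbinstance",
)
print(response["ValidDBInstanceModificationsMessage"])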

" + } + }, + "documentation":"

" + }, + "DescribeValidDBInstanceModificationsResult":{ + "type":"structure", + "members":{ + "ValidDBInstanceModificationsMessage":{"shape":"ValidDBInstanceModificationsMessage"} + } + }, "DomainMembership":{ "type":"structure", "members":{ @@ -5214,6 +5304,28 @@ "exception":true }, "Double":{"type":"double"}, + "DoubleOptional":{"type":"double"}, + "DoubleRange":{ + "type":"structure", + "members":{ + "From":{ + "shape":"Double", + "documentation":"

The minimum value in the range.

" + }, + "To":{ + "shape":"Double", + "documentation":"

The maximum value in the range.

" + } + }, + "documentation":"

A range of double values.

" + }, + "DoubleRangeList":{ + "type":"list", + "member":{ + "shape":"DoubleRange", + "locationName":"DoubleRange" + } + }, "DownloadDBLogFilePortionDetails":{ "type":"structure", "members":{ @@ -5241,7 +5353,7 @@ "members":{ "DBInstanceIdentifier":{ "shape":"String", - "documentation":"

The customer-assigned name of the DB instance that contains the log files you want to list.

Constraints:

" + "documentation":"

The customer-assigned name of the DB instance that contains the log files you want to list.

Constraints:

" }, "LogFileName":{ "shape":"String", @@ -5253,7 +5365,7 @@ }, "NumberOfLines":{ "shape":"Integer", - "documentation":"

The number of lines to download. If the number of lines specified results in a file over 1 MB in size, the file will be truncated at 1 MB in size.

If the NumberOfLines parameter is specified, then the block of lines returned can be from the beginning or the end of the log file, depending on the value of the Marker parameter.

" + "documentation":"

The number of lines to download. If the number of lines specified results in a file over 1 MB in size, the file is truncated at 1 MB in size.

If the NumberOfLines parameter is specified, then the block of lines returned can be from the beginning or the end of the log file, depending on the value of the Marker parameter.
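
The Marker and NumberOfLines behavior above is easiest to see in a paging loop. A minimal boto3 sketch (illustration only, not part of this service model); the instance and log file names are placeholders:

import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Download a log file 500 lines at a time, starting from the beginning ("0").
marker = "0"
while True:
    portion = rds.download_db_log_file_portion(
        DBInstanceIdentifier="mydbinstance",
        LogFileName="error/mysql-error.log",
        Marker=marker,
        NumberOfLines=500,
    )
    print(portion["LogFileData"], end="")
    if not portion["AdditionalDataPending"]:
        break
    marker = portion["Marker"]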

" } }, "documentation":"

" @@ -5499,7 +5611,7 @@ "members":{ "DBClusterIdentifier":{ "shape":"String", - "documentation":"

A DB cluster identifier to force a failover for. This parameter is not case-sensitive.

Constraints:

" + "documentation":"

A DB cluster identifier to force a failover for. This parameter is not case-sensitive.

Constraints:

" }, "TargetDBInstanceIdentifier":{ "shape":"String", @@ -5835,11 +5947,11 @@ "members":{ "DBClusterIdentifier":{ "shape":"String", - "documentation":"

The DB cluster identifier for the cluster being modified. This parameter is not case-sensitive.

Constraints:

" + "documentation":"

The DB cluster identifier for the cluster being modified. This parameter is not case-sensitive.

Constraints:

" }, "NewDBClusterIdentifier":{ "shape":"String", - "documentation":"

The new DB cluster identifier for the DB cluster when renaming a DB cluster. This value is stored as a lowercase string.

Constraints:

Example: my-cluster2

" + "documentation":"

The new DB cluster identifier for the DB cluster when renaming a DB cluster. This value is stored as a lowercase string.

Constraints:

Example: my-cluster2

" }, "ApplyImmediately":{ "shape":"Boolean", @@ -5867,19 +5979,19 @@ }, "OptionGroupName":{ "shape":"String", - "documentation":"

A value that indicates that the DB cluster should be associated with the specified option group. Changing this parameter does not result in an outage except in the following case, and the change is applied during the next maintenance window unless the ApplyImmediately parameter is set to true for this request. If the parameter change results in an option group that enables OEM, this change can cause a brief (sub-second) period during which new connections are rejected but existing connections are not interrupted.

Permanent options cannot be removed from an option group. The option group cannot be removed from a DB cluster once it is associated with a DB cluster.

" + "documentation":"

A value that indicates that the DB cluster should be associated with the specified option group. Changing this parameter does not result in an outage except in the following case, and the change is applied during the next maintenance window unless the ApplyImmediately parameter is set to true for this request. If the parameter change results in an option group that enables OEM, this change can cause a brief (sub-second) period during which new connections are rejected but existing connections are not interrupted.

Permanent options can't be removed from an option group. The option group can't be removed from a DB cluster once it is associated with a DB cluster.

" }, "PreferredBackupWindow":{ "shape":"String", - "documentation":"

The daily time range during which automated backups are created if automated backups are enabled, using the BackupRetentionPeriod parameter.

Default: A 30-minute window selected at random from an 8-hour block of time per region. To see the time blocks available, see Adjusting the Preferred Maintenance Window in the Amazon RDS User Guide.

Constraints:

" + "documentation":"

The daily time range during which automated backups are created if automated backups are enabled, using the BackupRetentionPeriod parameter.

The default is a 30-minute window selected at random from an 8-hour block of time for each AWS Region. To see the time blocks available, see Adjusting the Preferred Maintenance Window in the Amazon RDS User Guide.

Constraints:

" }, "PreferredMaintenanceWindow":{ "shape":"String", - "documentation":"

The weekly time range during which system maintenance can occur, in Universal Coordinated Time (UTC).

Format: ddd:hh24:mi-ddd:hh24:mi

Default: A 30-minute window selected at random from an 8-hour block of time per region, occurring on a random day of the week. To see the time blocks available, see Adjusting the Preferred Maintenance Window in the Amazon RDS User Guide.

Valid Days: Mon, Tue, Wed, Thu, Fri, Sat, Sun

Constraints: Minimum 30-minute window.

" + "documentation":"

The weekly time range during which system maintenance can occur, in Universal Coordinated Time (UTC).

Format: ddd:hh24:mi-ddd:hh24:mi

The default is a 30-minute window selected at random from an 8-hour block of time for each AWS Region, occurring on a random day of the week. To see the time blocks available, see Adjusting the Preferred Maintenance Window in the Amazon RDS User Guide.

Valid Days: Mon, Tue, Wed, Thu, Fri, Sat, Sun.

Constraints: Minimum 30-minute window.

" }, "EnableIAMDatabaseAuthentication":{ "shape":"BooleanOptional", - "documentation":"

A Boolean value that is true to enable mapping of AWS Identity and Access Management (IAM) accounts to database accounts, and otherwise false.

Default: false

" + "documentation":"

True to enable mapping of AWS Identity and Access Management (IAM) accounts to database accounts, and otherwise false.

Default: false

" } }, "documentation":"

" @@ -5946,35 +6058,35 @@ "members":{ "DBInstanceIdentifier":{ "shape":"String", - "documentation":"

The DB instance identifier. This value is stored as a lowercase string.

Constraints:

" + "documentation":"

The DB instance identifier. This value is stored as a lowercase string.

Constraints:

" }, "AllocatedStorage":{ "shape":"IntegerOptional", - "documentation":"

The new storage capacity of the RDS instance. Changing this setting does not result in an outage and the change is applied during the next maintenance window unless ApplyImmediately is set to true for this request.

MySQL

Default: Uses existing setting

Valid Values: 5-6144

Constraints: Value supplied must be at least 10% greater than the current value. Values that are not at least 10% greater than the existing value are rounded up so that they are 10% greater than the current value.

Type: Integer

MariaDB

Default: Uses existing setting

Valid Values: 5-6144

Constraints: Value supplied must be at least 10% greater than the current value. Values that are not at least 10% greater than the existing value are rounded up so that they are 10% greater than the current value.

Type: Integer

PostgreSQL

Default: Uses existing setting

Valid Values: 5-6144

Constraints: Value supplied must be at least 10% greater than the current value. Values that are not at least 10% greater than the existing value are rounded up so that they are 10% greater than the current value.

Type: Integer

Oracle

Default: Uses existing setting

Valid Values: 10-6144

Constraints: Value supplied must be at least 10% greater than the current value. Values that are not at least 10% greater than the existing value are rounded up so that they are 10% greater than the current value.

SQL Server

Cannot be modified.

If you choose to migrate your DB instance from using standard storage to using Provisioned IOPS, or from using Provisioned IOPS to using standard storage, the process can take time. The duration of the migration depends on several factors such as database load, storage size, storage type (standard or Provisioned IOPS), amount of IOPS provisioned (if any), and the number of prior scale storage operations. Typical migration times are under 24 hours, but the process can take up to several days in some cases. During the migration, the DB instance will be available for use, but might experience performance degradation. While the migration takes place, nightly backups for the instance will be suspended. No other Amazon RDS operations can take place for the instance, including modifying the instance, rebooting the instance, deleting the instance, creating a Read Replica for the instance, and creating a DB snapshot of the instance.

" + "documentation":"

The new storage capacity of the RDS instance. Changing this setting does not result in an outage and the change is applied during the next maintenance window unless ApplyImmediately is set to true for this request.

MySQL

Default: Uses existing setting

Valid Values: 5-6144

Constraints: Value supplied must be at least 10% greater than the current value. Values that are not at least 10% greater than the existing value are rounded up so that they are 10% greater than the current value.

Type: Integer

MariaDB

Default: Uses existing setting

Valid Values: 5-6144

Constraints: Value supplied must be at least 10% greater than the current value. Values that are not at least 10% greater than the existing value are rounded up so that they are 10% greater than the current value.

Type: Integer

PostgreSQL

Default: Uses existing setting

Valid Values: 5-6144

Constraints: Value supplied must be at least 10% greater than the current value. Values that are not at least 10% greater than the existing value are rounded up so that they are 10% greater than the current value.

Type: Integer

Oracle

Default: Uses existing setting

Valid Values: 10-6144

Constraints: Value supplied must be at least 10% greater than the current value. Values that are not at least 10% greater than the existing value are rounded up so that they are 10% greater than the current value.

SQL Server

Cannot be modified.

If you choose to migrate your DB instance from using standard storage to using Provisioned IOPS, or from using Provisioned IOPS to using standard storage, the process can take time. The duration of the migration depends on several factors such as database load, storage size, storage type (standard or Provisioned IOPS), amount of IOPS provisioned (if any), and the number of prior scale storage operations. Typical migration times are under 24 hours, but the process can take up to several days in some cases. During the migration, the DB instance is available for use, but might experience performance degradation. While the migration takes place, nightly backups for the instance are suspended. No other Amazon RDS operations can take place for the instance, including modifying the instance, rebooting the instance, deleting the instance, creating a Read Replica for the instance, and creating a DB snapshot of the instance.

" }, "DBInstanceClass":{ "shape":"String", - "documentation":"

The new compute and memory capacity of the DB instance. To determine the instance classes that are available for a particular DB engine, use the DescribeOrderableDBInstanceOptions action. Note that not all instance classes are available in all regions for all DB engines.

Passing a value for this setting causes an outage during the change and is applied during the next maintenance window, unless ApplyImmediately is specified as true for this request.

Default: Uses existing setting

Valid Values: db.t1.micro | db.m1.small | db.m1.medium | db.m1.large | db.m1.xlarge | db.m2.xlarge | db.m2.2xlarge | db.m2.4xlarge | db.m3.medium | db.m3.large | db.m3.xlarge | db.m3.2xlarge | db.m4.large | db.m4.xlarge | db.m4.2xlarge | db.m4.4xlarge | db.m4.10xlarge | db.r3.large | db.r3.xlarge | db.r3.2xlarge | db.r3.4xlarge | db.r3.8xlarge | db.t2.micro | db.t2.small | db.t2.medium | db.t2.large

" + "documentation":"

The new compute and memory capacity of the DB instance, for example, db.m4.large. Not all DB instance classes are available in all AWS Regions, or for all database engines. For the full list of DB instance classes, and availability for your engine, see DB Instance Class in the Amazon RDS User Guide.

If you modify the DB instance class, an outage occurs during the change. The change is applied during the next maintenance window, unless ApplyImmediately is specified as true for this request.

Default: Uses existing setting

" }, "DBSubnetGroupName":{ "shape":"String", - "documentation":"

The new DB subnet group for the DB instance. You can use this parameter to move your DB instance to a different VPC. If your DB instance is not in a VPC, you can also use this parameter to move your DB instance into a VPC. For more information, see Updating the VPC for a DB Instance.

Changing the subnet group causes an outage during the change. The change is applied during the next maintenance window, unless you specify true for the ApplyImmediately parameter.

Constraints: Must contain no more than 255 alphanumeric characters, periods, underscores, spaces, or hyphens.

Example: mySubnetGroup

" + "documentation":"

The new DB subnet group for the DB instance. You can use this parameter to move your DB instance to a different VPC. If your DB instance is not in a VPC, you can also use this parameter to move your DB instance into a VPC. For more information, see Updating the VPC for a DB Instance.

Changing the subnet group causes an outage during the change. The change is applied during the next maintenance window, unless you specify true for the ApplyImmediately parameter.

Constraints: If supplied, must match the name of an existing DBSubnetGroup.

Example: mySubnetGroup

" }, "DBSecurityGroups":{ "shape":"DBSecurityGroupNameList", - "documentation":"

A list of DB security groups to authorize on this DB instance. Changing this setting does not result in an outage and the change is asynchronously applied as soon as possible.

Constraints:

" + "documentation":"

A list of DB security groups to authorize on this DB instance. Changing this setting does not result in an outage and the change is asynchronously applied as soon as possible.

Constraints:

" }, "VpcSecurityGroupIds":{ "shape":"VpcSecurityGroupIdList", - "documentation":"

A list of EC2 VPC security groups to authorize on this DB instance. This change is asynchronously applied as soon as possible.

Constraints:

" + "documentation":"

A list of EC2 VPC security groups to authorize on this DB instance. This change is asynchronously applied as soon as possible.

Amazon Aurora

Not applicable. The associated list of EC2 VPC security groups is managed by the DB cluster. For more information, see ModifyDBCluster.

Constraints:

" }, "ApplyImmediately":{ "shape":"Boolean", - "documentation":"

Specifies whether the modifications in this request and any pending modifications are asynchronously applied as soon as possible, regardless of the PreferredMaintenanceWindow setting for the DB instance.

If this parameter is set to false, changes to the DB instance are applied during the next maintenance window. Some parameter changes can cause an outage and will be applied on the next call to RebootDBInstance, or the next failure reboot. Review the table of parameters in Modifying a DB Instance and Using the Apply Immediately Parameter to see the impact that setting ApplyImmediately to true or false has for each modified parameter and to determine when the changes will be applied.

Default: false

" + "documentation":"

Specifies whether the modifications in this request and any pending modifications are asynchronously applied as soon as possible, regardless of the PreferredMaintenanceWindow setting for the DB instance.

If this parameter is set to false, changes to the DB instance are applied during the next maintenance window. Some parameter changes can cause an outage and are applied on the next call to RebootDBInstance, or the next failure reboot. Review the table of parameters in Modifying a DB Instance and Using the Apply Immediately Parameter to see the impact that setting ApplyImmediately to true or false has for each modified parameter and to determine when the changes are applied.

Default: false

" }, "MasterUserPassword":{ "shape":"String", - "documentation":"

The new password for the DB instance master user. Can be any printable ASCII character except \"/\", \"\"\", or \"@\".

Changing this parameter does not result in an outage and the change is asynchronously applied as soon as possible. Between the time of the request and the completion of the request, the MasterUserPassword element exists in the PendingModifiedValues element of the operation response.

Default: Uses existing setting

Constraints: Must be 8 to 41 alphanumeric characters (MySQL, MariaDB, and Amazon Aurora), 8 to 30 alphanumeric characters (Oracle), or 8 to 128 alphanumeric characters (SQL Server).

Amazon RDS API actions never return the password, so this action provides a way to regain access to a primary instance user if the password is lost. This includes restoring privileges that might have been accidentally revoked.

" + "documentation":"

The new password for the master user. The password can include any printable ASCII character except \"/\", \"\"\", or \"@\".

Changing this parameter does not result in an outage and the change is asynchronously applied as soon as possible. Between the time of the request and the completion of the request, the MasterUserPassword element exists in the PendingModifiedValues element of the operation response.

Amazon Aurora

Not applicable. The password for the master user is managed by the DB cluster. For more information, see ModifyDBCluster.

Default: Uses existing setting

MariaDB

Constraints: Must contain from 8 to 41 characters.

Microsoft SQL Server

Constraints: Must contain from 8 to 128 characters.

MySQL

Constraints: Must contain from 8 to 41 characters.

Oracle

Constraints: Must contain from 8 to 30 characters.

PostgreSQL

Constraints: Must contain from 8 to 128 characters.

Amazon RDS API actions never return the password, so this action provides a way to regain access to a primary instance user if the password is lost. This includes restoring privileges that might have been accidentally revoked.

" }, "DBParameterGroupName":{ "shape":"String", @@ -5982,11 +6094,11 @@ }, "BackupRetentionPeriod":{ "shape":"IntegerOptional", - "documentation":"

The number of days to retain automated backups. Setting this parameter to a positive number enables backups. Setting this parameter to 0 disables automated backups.

Changing this parameter can result in an outage if you change from 0 to a non-zero value or from a non-zero value to 0. These changes are applied during the next maintenance window unless the ApplyImmediately parameter is set to true for this request. If you change the parameter from one non-zero value to another non-zero value, the change is asynchronously applied as soon as possible.

Default: Uses existing setting

Constraints:

" + "documentation":"

The number of days to retain automated backups. Setting this parameter to a positive number enables backups. Setting this parameter to 0 disables automated backups.

Changing this parameter can result in an outage if you change from 0 to a non-zero value or from a non-zero value to 0. These changes are applied during the next maintenance window unless the ApplyImmediately parameter is set to true for this request. If you change the parameter from one non-zero value to another non-zero value, the change is asynchronously applied as soon as possible.

Amazon Aurora

Not applicable. The retention period for automated backups is managed by the DB cluster. For more information, see ModifyDBCluster.

Default: Uses existing setting

Constraints:

" }, "PreferredBackupWindow":{ "shape":"String", - "documentation":"

The daily time range during which automated backups are created if automated backups are enabled, as determined by the BackupRetentionPeriod parameter. Changing this parameter does not result in an outage and the change is asynchronously applied as soon as possible.

Constraints:

" + "documentation":"

The daily time range during which automated backups are created if automated backups are enabled, as determined by the BackupRetentionPeriod parameter. Changing this parameter does not result in an outage and the change is asynchronously applied as soon as possible.

Amazon Aurora

Not applicable. The daily time range for creating automated backups is managed by the DB cluster. For more information, see ModifyDBCluster.

Constraints:

" }, "PreferredMaintenanceWindow":{ "shape":"String", @@ -5998,7 +6110,7 @@ }, "EngineVersion":{ "shape":"String", - "documentation":"

The version number of the database engine to upgrade to. Changing this parameter results in an outage and the change is applied during the next maintenance window unless the ApplyImmediately parameter is set to true for this request.

For major version upgrades, if a non-default DB parameter group is currently in use, a new DB parameter group in the DB parameter group family for the new engine version must be specified. The new DB parameter group can be the default for that DB parameter group family.

For a list of valid engine versions, see CreateDBInstance.

" + "documentation":"

The version number of the database engine to upgrade to. Changing this parameter results in an outage and the change is applied during the next maintenance window unless the ApplyImmediately parameter is set to true for this request.

For major version upgrades, if a nondefault DB parameter group is currently in use, a new DB parameter group in the DB parameter group family for the new engine version must be specified. The new DB parameter group can be the default for that DB parameter group family.

For a list of valid engine versions, see CreateDBInstance.

" }, "AllowMajorVersionUpgrade":{ "shape":"Boolean", @@ -6006,7 +6118,7 @@ }, "AutoMinorVersionUpgrade":{ "shape":"BooleanOptional", - "documentation":"

Indicates that minor version upgrades will be applied automatically to the DB instance during the maintenance window. Changing this parameter does not result in an outage except in the following case and the change is asynchronously applied as soon as possible. An outage will result if this parameter is set to true during the maintenance window, and a newer minor version is available, and RDS has enabled auto patching for that engine version.

" + "documentation":"

Indicates that minor version upgrades are applied automatically to the DB instance during the maintenance window. Changing this parameter does not result in an outage except in the following case and the change is asynchronously applied as soon as possible. An outage will result if this parameter is set to true during the maintenance window, and a newer minor version is available, and RDS has enabled auto patching for that engine version.

" }, "LicenseModel":{ "shape":"String", @@ -6014,27 +6126,27 @@ }, "Iops":{ "shape":"IntegerOptional", - "documentation":"

The new Provisioned IOPS (I/O operations per second) value for the RDS instance. Changing this setting does not result in an outage and the change is applied during the next maintenance window unless the ApplyImmediately parameter is set to true for this request.

Default: Uses existing setting

Constraints: Value supplied must be at least 10% greater than the current value. Values that are not at least 10% greater than the existing value are rounded up so that they are 10% greater than the current value. If you are migrating from Provisioned IOPS to standard storage, set this value to 0. The DB instance will require a reboot for the change in storage type to take effect.

SQL Server

Setting the IOPS value for the SQL Server database engine is not supported.

Type: Integer

If you choose to migrate your DB instance from using standard storage to using Provisioned IOPS, or from using Provisioned IOPS to using standard storage, the process can take time. The duration of the migration depends on several factors such as database load, storage size, storage type (standard or Provisioned IOPS), amount of IOPS provisioned (if any), and the number of prior scale storage operations. Typical migration times are under 24 hours, but the process can take up to several days in some cases. During the migration, the DB instance will be available for use, but might experience performance degradation. While the migration takes place, nightly backups for the instance will be suspended. No other Amazon RDS operations can take place for the instance, including modifying the instance, rebooting the instance, deleting the instance, creating a Read Replica for the instance, and creating a DB snapshot of the instance.

" + "documentation":"

The new Provisioned IOPS (I/O operations per second) value for the RDS instance. Changing this setting does not result in an outage and the change is applied during the next maintenance window unless the ApplyImmediately parameter is set to true for this request.

Default: Uses existing setting

Constraints: Value supplied must be at least 10% greater than the current value. Values that are not at least 10% greater than the existing value are rounded up so that they are 10% greater than the current value. If you are migrating from Provisioned IOPS to standard storage, set this value to 0. The DB instance will require a reboot for the change in storage type to take effect.

SQL Server

Setting the IOPS value for the SQL Server database engine is not supported.

Type: Integer

If you choose to migrate your DB instance from using standard storage to using Provisioned IOPS, or from using Provisioned IOPS to using standard storage, the process can take time. The duration of the migration depends on several factors such as database load, storage size, storage type (standard or Provisioned IOPS), amount of IOPS provisioned (if any), and the number of prior scale storage operations. Typical migration times are under 24 hours, but the process can take up to several days in some cases. During the migration, the DB instance is available for use, but might experience performance degradation. While the migration takes place, nightly backups for the instance are suspended. No other Amazon RDS operations can take place for the instance, including modifying the instance, rebooting the instance, deleting the instance, creating a Read Replica for the instance, and creating a DB snapshot of the instance.

" }, "OptionGroupName":{ "shape":"String", - "documentation":"

Indicates that the DB instance should be associated with the specified option group. Changing this parameter does not result in an outage except in the following case and the change is applied during the next maintenance window unless the ApplyImmediately parameter is set to true for this request. If the parameter change results in an option group that enables OEM, this change can cause a brief (sub-second) period during which new connections are rejected but existing connections are not interrupted.

Permanent options, such as the TDE option for Oracle Advanced Security TDE, cannot be removed from an option group, and that option group cannot be removed from a DB instance once it is associated with a DB instance

" + "documentation":"

Indicates that the DB instance should be associated with the specified option group. Changing this parameter does not result in an outage except in the following case and the change is applied during the next maintenance window unless the ApplyImmediately parameter is set to true for this request. If the parameter change results in an option group that enables OEM, this change can cause a brief (sub-second) period during which new connections are rejected but existing connections are not interrupted.

Permanent options, such as the TDE option for Oracle Advanced Security TDE, can't be removed from an option group, and that option group can't be removed from a DB instance once it is associated with a DB instance.

" }, "NewDBInstanceIdentifier":{ "shape":"String", - "documentation":"

The new DB instance identifier for the DB instance when renaming a DB instance. When you change the DB instance identifier, an instance reboot will occur immediately if you set Apply Immediately to true, or will occur during the next maintenance window if Apply Immediately to false. This value is stored as a lowercase string.

Constraints:

" + "documentation":"

The new DB instance identifier for the DB instance when renaming a DB instance. When you change the DB instance identifier, an instance reboot will occur immediately if you set Apply Immediately to true, or will occur during the next maintenance window if you set Apply Immediately to false. This value is stored as a lowercase string.

Constraints:

Example: mydbinstance

" }, "StorageType":{ "shape":"String", - "documentation":"

Specifies the storage type to be associated with the DB instance.

Valid values: standard | gp2 | io1

If you specify io1, you must also include a value for the Iops parameter.

Default: io1 if the Iops parameter is specified; otherwise standard

" + "documentation":"

Specifies the storage type to be associated with the DB instance.

Valid values: standard | gp2 | io1

If you specify io1, you must also include a value for the Iops parameter.

Default: io1 if the Iops parameter is specified, otherwise standard

" }, "TdeCredentialArn":{ "shape":"String", - "documentation":"

The ARN from the Key Store with which to associate the instance for TDE encryption.

" + "documentation":"

The ARN from the key store with which to associate the instance for TDE encryption.

" }, "TdeCredentialPassword":{ "shape":"String", - "documentation":"

The password for the given ARN from the Key Store in order to access the device.

" + "documentation":"

The password for the given ARN from the key store in order to access the device.

" }, "CACertificateIdentifier":{ "shape":"String", @@ -6046,7 +6158,7 @@ }, "CopyTagsToSnapshot":{ "shape":"BooleanOptional", - "documentation":"

True to copy all tags from the DB instance to snapshots of the DB instance; otherwise false. The default is false.

" + "documentation":"

True to copy all tags from the DB instance to snapshots of the DB instance, and otherwise false. The default is false.

" }, "MonitoringInterval":{ "shape":"IntegerOptional", @@ -6062,7 +6174,7 @@ }, "MonitoringRoleArn":{ "shape":"String", - "documentation":"

The ARN for the IAM role that permits RDS to send enhanced monitoring metrics to CloudWatch Logs. For example, arn:aws:iam:123456789012:role/emaccess. For information on creating a monitoring role, go to To create an IAM role for Amazon RDS Enhanced Monitoring.

If MonitoringInterval is set to a value other than 0, then you must supply a MonitoringRoleArn value.

" + "documentation":"

The ARN for the IAM role that permits RDS to send enhanced monitoring metrics to Amazon CloudWatch Logs. For example, arn:aws:iam:123456789012:role/emaccess. For information on creating a monitoring role, go to To create an IAM role for Amazon RDS Enhanced Monitoring.

If MonitoringInterval is set to a value other than 0, then you must supply a MonitoringRoleArn value.

" }, "DomainIAMRoleName":{ "shape":"String", @@ -6074,7 +6186,15 @@ }, "EnableIAMDatabaseAuthentication":{ "shape":"BooleanOptional", - "documentation":"

True to enable mapping of AWS Identity and Access Management (IAM) accounts to database accounts; otherwise false.

You can enable IAM database authentication for the following database engines

Default: false

" + "documentation":"

True to enable mapping of AWS Identity and Access Management (IAM) accounts to database accounts, and otherwise false.

You can enable IAM database authentication for the following database engines:

Amazon Aurora

Not applicable. Mapping AWS IAM accounts to database accounts is managed by the DB cluster. For more information, see ModifyDBCluster.

MySQL

Default: false

" + }, + "EnablePerformanceInsights":{ + "shape":"BooleanOptional", + "documentation":"

True to enable Performance Insights for the DB instance, and otherwise false.

" + }, + "PerformanceInsightsKMSKeyId":{ + "shape":"String", + "documentation":"

The AWS KMS key identifier for encryption of Performance Insights data. The KMS key ID is the Amazon Resource Name (ARN), KMS key identifier, or the KMS key alias for the KMS encryption key.

" } }, "documentation":"

" @@ -6094,7 +6214,7 @@ "members":{ "DBParameterGroupName":{ "shape":"String", - "documentation":"

The name of the DB parameter group.

Constraints:

" + "documentation":"

The name of the DB parameter group.

Constraints:

" }, "Parameters":{ "shape":"ParametersList", @@ -6145,7 +6265,11 @@ }, "EngineVersion":{ "shape":"String", - "documentation":"

The engine version to update the DB snapshot to.

" + "documentation":"

The engine version to upgrade the DB snapshot to.

The following are the database engines and engine versions that are available when you upgrade a DB snapshot.

MySQL

Oracle

" + }, + "OptionGroupName":{ + "shape":"String", + "documentation":"

The option group to identify with the upgraded DB snapshot.

You can specify this parameter when you upgrade an Oracle DB snapshot. The same option group considerations apply when upgrading a DB snapshot as when upgrading a DB instance. For more information, see Option Group Considerations.

" } } }, @@ -6164,7 +6288,7 @@ "members":{ "DBSubnetGroupName":{ "shape":"String", - "documentation":"

The name for the DB subnet group. This value is stored as a lowercase string.

Constraints: Must contain no more than 255 alphanumeric characters, periods, underscores, spaces, or hyphens. Must not be default.

Example: mySubnetgroup

" + "documentation":"

The name for the DB subnet group. This value is stored as a lowercase string. You can't modify the default subnet group.

Constraints: Must match the name of an existing DBSubnetGroup. Must not be default.

Example: mySubnetgroup

" }, "DBSubnetGroupDescription":{ "shape":"String", @@ -6197,7 +6321,7 @@ }, "SourceType":{ "shape":"String", - "documentation":"

The type of source that will be generating the events. For example, if you want to be notified of events generated by a DB instance, you would set this parameter to db-instance. if this value is not specified, all events are returned.

Valid values: db-instance | db-parameter-group | db-security-group | db-snapshot

" + "documentation":"

The type of source that is generating the events. For example, if you want to be notified of events generated by a DB instance, you would set this parameter to db-instance. If this value is not specified, all events are returned.

Valid values: db-instance | db-parameter-group | db-security-group | db-snapshot

" }, "EventCategories":{ "shape":"EventCategoriesList", @@ -6222,7 +6346,7 @@ "members":{ "OptionGroupName":{ "shape":"String", - "documentation":"

The name of the option group to be modified.

Permanent options, such as the TDE option for Oracle Advanced Security TDE, cannot be removed from an option group, and that option group cannot be removed from a DB instance once it is associated with a DB instance

" + "documentation":"

The name of the option group to be modified.

Permanent options, such as the TDE option for Oracle Advanced Security TDE, can't be removed from an option group, and that option group can't be removed from a DB instance once it is associated with a DB instance.

" }, "OptionsToInclude":{ "shape":"OptionConfigurationList", @@ -6456,6 +6580,18 @@ "shape":"Boolean", "documentation":"

Permanent options can never be removed from an option group. An option group containing a permanent option can't be removed from a DB instance.

" }, + "RequiresAutoMinorEngineVersionUpgrade":{ + "shape":"Boolean", + "documentation":"

If true, you must enable the Auto Minor Version Upgrade setting for your DB instance before you can use this option. You can enable Auto Minor Version Upgrade when you first create your DB instance, or later by modifying your DB instance.

" + }, + "VpcOnly":{ + "shape":"Boolean", + "documentation":"

If true, you can only use this option with a DB instance that is in a VPC.

" + }, + "SupportsOptionVersionDowngrade":{ + "shape":"BooleanOptional", + "documentation":"

If true, you can change the option to an earlier version of the option. This only applies to options that have different versions available.

" + }, "OptionGroupOptionSettings":{ "shape":"OptionGroupOptionSettingsList", "documentation":"

The option settings that are available (and the default value) for each option in an option group.

" @@ -6632,7 +6768,7 @@ }, "IsDefault":{ "shape":"Boolean", - "documentation":"

True if the version is the default version of the option; otherwise, false.

" + "documentation":"

True if the version is the default version of the option, and otherwise false.

" } }, "documentation":"

The version for an option. Option group option versions are returned by the DescribeOptionGroupOptions action.

" @@ -6663,58 +6799,86 @@ "members":{ "Engine":{ "shape":"String", - "documentation":"

The engine type of the orderable DB instance.

" + "documentation":"

The engine type of a DB instance.

" }, "EngineVersion":{ "shape":"String", - "documentation":"

The engine version of the orderable DB instance.

" + "documentation":"

The engine version of a DB instance.

" }, "DBInstanceClass":{ "shape":"String", - "documentation":"

The DB instance class for the orderable DB instance.

" + "documentation":"

The DB instance class for a DB instance.

" }, "LicenseModel":{ "shape":"String", - "documentation":"

The license model for the orderable DB instance.

" + "documentation":"

The license model for a DB instance.

" }, "AvailabilityZones":{ "shape":"AvailabilityZoneList", - "documentation":"

A list of Availability Zones for the orderable DB instance.

" + "documentation":"

A list of Availability Zones for a DB instance.

" }, "MultiAZCapable":{ "shape":"Boolean", - "documentation":"

Indicates whether this orderable DB instance is multi-AZ capable.

" + "documentation":"

Indicates whether a DB instance is Multi-AZ capable.

" }, "ReadReplicaCapable":{ "shape":"Boolean", - "documentation":"

Indicates whether this orderable DB instance can have a Read Replica.

" + "documentation":"

Indicates whether a DB instance can have a Read Replica.

" }, "Vpc":{ "shape":"Boolean", - "documentation":"

Indicates whether this is a VPC orderable DB instance.

" + "documentation":"

Indicates whether a DB instance is in a VPC.

" }, "SupportsStorageEncryption":{ "shape":"Boolean", - "documentation":"

Indicates whether this orderable DB instance supports encrypted storage.

" + "documentation":"

Indicates whether a DB instance supports encrypted storage.

" }, "StorageType":{ "shape":"String", - "documentation":"

Indicates the storage type for this orderable DB instance.

" + "documentation":"

Indicates the storage type for a DB instance.

" }, "SupportsIops":{ "shape":"Boolean", - "documentation":"

Indicates whether this orderable DB instance supports provisioned IOPS.

" + "documentation":"

Indicates whether a DB instance supports provisioned IOPS.

" }, "SupportsEnhancedMonitoring":{ "shape":"Boolean", - "documentation":"

Indicates whether the DB instance supports enhanced monitoring at intervals from 1 to 60 seconds.

" + "documentation":"

Indicates whether a DB instance supports Enhanced Monitoring at intervals from 1 to 60 seconds.

" }, "SupportsIAMDatabaseAuthentication":{ "shape":"Boolean", - "documentation":"

Indicates whether this orderable DB instance supports IAM database authentication.

" + "documentation":"

Indicates whether a DB instance supports IAM database authentication.

" + }, + "SupportsPerformanceInsights":{ + "shape":"Boolean", + "documentation":"

True if a DB instance supports Performance Insights, otherwise false.

" + }, + "MinStorageSize":{ + "shape":"IntegerOptional", + "documentation":"

Minimum storage size for a DB instance.

" + }, + "MaxStorageSize":{ + "shape":"IntegerOptional", + "documentation":"

Maximum storage size for a DB instance.

" + }, + "MinIopsPerDbInstance":{ + "shape":"IntegerOptional", + "documentation":"

Minimum total provisioned IOPS for a DB instance.

" + }, + "MaxIopsPerDbInstance":{ + "shape":"IntegerOptional", + "documentation":"

Maximum total provisioned IOPS for a DB instance.

" + }, + "MinIopsPerGib":{ + "shape":"DoubleOptional", + "documentation":"

Minimum provisioned IOPS per GiB for a DB instance.

" + }, + "MaxIopsPerGib":{ + "shape":"DoubleOptional", + "documentation":"

Maximum provisioned IOPS per GiB for a DB instance.

" } }, - "documentation":"

Contains a list of available options for a DB instance

This data type is used as a response element in the DescribeOrderableDBInstanceOptions action.

", + "documentation":"

Contains a list of available options for a DB instance.

This data type is used as a response element in the DescribeOrderableDBInstanceOptions action.

", "wrapper":true }, "OrderableDBInstanceOptionsList":{ @@ -6800,11 +6964,11 @@ }, "AutoAppliedAfterDate":{ "shape":"TStamp", - "documentation":"

The date of the maintenance window when the action will be applied. The maintenance action will be applied to the resource during its first maintenance window after this date. If this date is specified, any next-maintenance opt-in requests are ignored.

" + "documentation":"

The date of the maintenance window when the action is applied. The maintenance action is applied to the resource during its first maintenance window after this date. If this date is specified, any next-maintenance opt-in requests are ignored.

" }, "ForcedApplyDate":{ "shape":"TStamp", - "documentation":"

The date when the maintenance action will be automatically applied. The maintenance action will be applied to the resource on this date regardless of the maintenance window for the resource. If this date is specified, any immediate opt-in requests are ignored.

" + "documentation":"

The date when the maintenance action is automatically applied. The maintenance action is applied to the resource on this date regardless of the maintenance window for the resource. If this date is specified, any immediate opt-in requests are ignored.

" }, "OptInStatus":{ "shape":"String", @@ -6812,7 +6976,7 @@ }, "CurrentApplyDate":{ "shape":"TStamp", - "documentation":"

The effective date when the pending maintenance action will be applied to the resource. This date takes into account opt-in requests received from the ApplyPendingMaintenanceAction API, the AutoAppliedAfterDate, and the ForcedApplyDate. This value is blank if an opt-in request has not been received and nothing has been specified as AutoAppliedAfterDate or ForcedApplyDate.

" + "documentation":"

The effective date when the pending maintenance action is applied to the resource. This date takes into account opt-in requests received from the ApplyPendingMaintenanceAction API, the AutoAppliedAfterDate, and the ForcedApplyDate. This value is blank if an opt-in request has not been received and nothing has been specified as AutoAppliedAfterDate or ForcedApplyDate.

" }, "Description":{ "shape":"String", @@ -6854,15 +7018,15 @@ "members":{ "DBInstanceClass":{ "shape":"String", - "documentation":"

Contains the new DBInstanceClass for the DB instance that will be applied or is in progress.

" + "documentation":"

Contains the new DBInstanceClass for the DB instance that will be applied or is currently being applied.

" }, "AllocatedStorage":{ "shape":"IntegerOptional", - "documentation":"

Contains the new AllocatedStorage size for the DB instance that will be applied or is in progress.

" + "documentation":"

Contains the new AllocatedStorage size for the DB instance that will be applied or is currently being applied.

" }, "MasterUserPassword":{ "shape":"String", - "documentation":"

Contains the pending or in-progress change of the master credentials for the DB instance.

" + "documentation":"

Contains the pending or currently-in-progress change of the master credentials for the DB instance.

" }, "Port":{ "shape":"IntegerOptional", @@ -6886,11 +7050,11 @@ }, "Iops":{ "shape":"IntegerOptional", - "documentation":"

Specifies the new Provisioned IOPS value for the DB instance that will be applied or is being applied.

" + "documentation":"

Specifies the new Provisioned IOPS value for the DB instance that will be applied or is currently being applied.

" }, "DBInstanceIdentifier":{ "shape":"String", - "documentation":"

Contains the new DBInstanceIdentifier for the DB instance that will be applied or is in progress.

" + "documentation":"

Contains the new DBInstanceIdentifier for the DB instance that will be applied or is currently being applied.

" }, "StorageType":{ "shape":"String", @@ -6925,7 +7089,7 @@ "members":{ "DBClusterIdentifier":{ "shape":"String", - "documentation":"

The identifier of the DB cluster Read Replica to promote. This parameter is not case-sensitive.

Constraints:

Example: my-cluster-replica1

" + "documentation":"

The identifier of the DB cluster Read Replica to promote. This parameter is not case-sensitive.

Constraints:

Example: my-cluster-replica1

" } }, "documentation":"

" @@ -6942,7 +7106,7 @@ "members":{ "DBInstanceIdentifier":{ "shape":"String", - "documentation":"

The DB instance identifier. This value is stored as a lowercase string.

Constraints:

Example: mydbinstance

" + "documentation":"

The DB instance identifier. This value is stored as a lowercase string.

Constraints:

Example: mydbinstance

" }, "BackupRetentionPeriod":{ "shape":"IntegerOptional", @@ -6950,7 +7114,7 @@ }, "PreferredBackupWindow":{ "shape":"String", - "documentation":"

The daily time range during which automated backups are created if automated backups are enabled, using the BackupRetentionPeriod parameter.

Default: A 30-minute window selected at random from an 8-hour block of time per region. To see the time blocks available, see Adjusting the Preferred Maintenance Window in the Amazon RDS User Guide.

Constraints:

" + "documentation":"

The daily time range during which automated backups are created if automated backups are enabled, using the BackupRetentionPeriod parameter.

The default is a 30-minute window selected at random from an 8-hour block of time for each AWS Region. To see the time blocks available, see Adjusting the Preferred Maintenance Window in the Amazon RDS User Guide.

Constraints:

" } }, "documentation":"

" @@ -6999,6 +7163,31 @@ "ReservedDBInstance":{"shape":"ReservedDBInstance"} } }, + "Range":{ + "type":"structure", + "members":{ + "From":{ + "shape":"Integer", + "documentation":"

The minimum value in the range.

" + }, + "To":{ + "shape":"Integer", + "documentation":"

The maximum value in the range.

" + }, + "Step":{ + "shape":"IntegerOptional", + "documentation":"

The step value for the range. For example, if you have a range of 5,000 to 10,000, with a step value of 1,000, the valid values start at 5,000 and step up by 1,000. Even though 7,500 is within the range, it isn't a valid value for the range. The valid values are 5,000, 6,000, 7,000, 8,000...

" + } + }, + "documentation":"

A range of integer values.

" + }, + "RangeList":{ + "type":"list", + "member":{ + "shape":"Range", + "locationName":"Range" + } + }, "ReadReplicaDBClusterIdentifierList":{ "type":"list", "member":{ @@ -7026,11 +7215,11 @@ "members":{ "DBInstanceIdentifier":{ "shape":"String", - "documentation":"

The DB instance identifier. This parameter is stored as a lowercase string.

Constraints:

" + "documentation":"

The DB instance identifier. This parameter is stored as a lowercase string.

Constraints:

" }, "ForceFailover":{ "shape":"BooleanOptional", - "documentation":"

When true, the reboot will be conducted through a MultiAZ failover.

Constraint: You cannot specify true if the instance is not configured for MultiAZ.

" + "documentation":"

When true, the reboot is conducted through a MultiAZ failover.

Constraint: You can't specify true if the instance is not configured for MultiAZ.

" } }, "documentation":"

" @@ -7113,7 +7302,7 @@ "members":{ "ResourceName":{ "shape":"String", - "documentation":"

The Amazon RDS resource the tags will be removed from. This value is an Amazon Resource Name (ARN). For information about creating an ARN, see Constructing an RDS Amazon Resource Name (ARN).

" + "documentation":"

The Amazon RDS resource that the tags are removed from. This value is an Amazon Resource Name (ARN). For information about creating an ARN, see Constructing an RDS Amazon Resource Name (ARN).

" }, "TagKeys":{ "shape":"KeyList", @@ -7336,11 +7525,11 @@ }, "ResetAllParameters":{ "shape":"Boolean", - "documentation":"

A value that is set to true to reset all parameters in the DB cluster parameter group to their default values, and false otherwise. You cannot use this parameter if there is a list of parameter names specified for the Parameters parameter.

" + "documentation":"

A value that is set to true to reset all parameters in the DB cluster parameter group to their default values, and false otherwise. You can't use this parameter if there is a list of parameter names specified for the Parameters parameter.

" }, "Parameters":{ "shape":"ParametersList", - "documentation":"

A list of parameter names in the DB cluster parameter group to reset to the default values. You cannot use this parameter if the ResetAllParameters parameter is set to true.

" + "documentation":"

A list of parameter names in the DB cluster parameter group to reset to the default values. You can't use this parameter if the ResetAllParameters parameter is set to true.

" } }, "documentation":"

" @@ -7351,7 +7540,7 @@ "members":{ "DBParameterGroupName":{ "shape":"String", - "documentation":"

The name of the DB parameter group.

Constraints:

" + "documentation":"

The name of the DB parameter group.

Constraints:

" }, "ResetAllParameters":{ "shape":"Boolean", @@ -7422,11 +7611,11 @@ }, "DBClusterIdentifier":{ "shape":"String", - "documentation":"

The name of the DB cluster to create from the source data in the S3 bucket. This parameter is isn't case-sensitive.

Constraints:

Example: my-cluster1

" + "documentation":"

The name of the DB cluster to create from the source data in the Amazon S3 bucket. This parameter isn't case-sensitive.

Constraints:

Example: my-cluster1

" }, "DBClusterParameterGroupName":{ "shape":"String", - "documentation":"

The name of the DB cluster parameter group to associate with the restored DB cluster. If this argument is omitted, default.aurora5.6 will be used.

Constraints:

" + "documentation":"

The name of the DB cluster parameter group to associate with the restored DB cluster. If this argument is omitted, default.aurora5.6 is used.

Constraints:

" }, "VpcSecurityGroupIds":{ "shape":"VpcSecurityGroupIdList", @@ -7434,11 +7623,11 @@ }, "DBSubnetGroupName":{ "shape":"String", - "documentation":"

A DB subnet group to associate with the restored DB cluster.

Constraints: Must contain no more than 255 alphanumeric characters, periods, underscores, spaces, or hyphens. Must not be default.

Example: mySubnetgroup

" + "documentation":"

A DB subnet group to associate with the restored DB cluster.

Constraints: If supplied, must match the name of an existing DBSubnetGroup.

Example: mySubnetgroup

" }, "Engine":{ "shape":"String", - "documentation":"

The name of the database engine to be used for the restored DB cluster.

Valid Values: aurora

" + "documentation":"

The name of the database engine to be used for the restored DB cluster.

Valid Values: aurora, aurora-postgresql

" }, "EngineVersion":{ "shape":"String", @@ -7450,7 +7639,7 @@ }, "MasterUsername":{ "shape":"String", - "documentation":"

The name of the master user for the restored DB cluster.

Constraints:

" + "documentation":"

The name of the master user for the restored DB cluster.

Constraints:

" }, "MasterUserPassword":{ "shape":"String", @@ -7458,15 +7647,15 @@ }, "OptionGroupName":{ "shape":"String", - "documentation":"

A value that indicates that the restored DB cluster should be associated with the specified option group.

Permanent options cannot be removed from an option group. An option group cannot be removed from a DB cluster once it is associated with a DB cluster.

" + "documentation":"

A value that indicates that the restored DB cluster should be associated with the specified option group.

Permanent options can't be removed from an option group. An option group can't be removed from a DB cluster once it is associated with a DB cluster.

" }, "PreferredBackupWindow":{ "shape":"String", - "documentation":"

The daily time range during which automated backups are created if automated backups are enabled using the BackupRetentionPeriod parameter.

Default: A 30-minute window selected at random from an 8-hour block of time per region. To see the time blocks available, see Adjusting the Preferred Maintenance Window in the Amazon RDS User Guide.

Constraints:

" + "documentation":"

The daily time range during which automated backups are created if automated backups are enabled using the BackupRetentionPeriod parameter.

The default is a 30-minute window selected at random from an 8-hour block of time for each AWS Region. To see the time blocks available, see Adjusting the Preferred Maintenance Window in the Amazon RDS User Guide.

Constraints:

" }, "PreferredMaintenanceWindow":{ "shape":"String", - "documentation":"

The weekly time range during which system maintenance can occur, in Universal Coordinated Time (UTC).

Format: ddd:hh24:mi-ddd:hh24:mi

Default: A 30-minute window selected at random from an 8-hour block of time per region, occurring on a random day of the week. To see the time blocks available, see Adjusting the Preferred Maintenance Window in the Amazon RDS User Guide.

Valid Days: Mon, Tue, Wed, Thu, Fri, Sat, Sun

Constraints: Minimum 30-minute window.

" + "documentation":"

The weekly time range during which system maintenance can occur, in Universal Coordinated Time (UTC).

Format: ddd:hh24:mi-ddd:hh24:mi

The default is a 30-minute window selected at random from an 8-hour block of time for each AWS Region, occurring on a random day of the week. To see the time blocks available, see Adjusting the Preferred Maintenance Window in the Amazon RDS User Guide.

Valid Days: Mon, Tue, Wed, Thu, Fri, Sat, Sun.

Constraints: Minimum 30-minute window.

" }, "Tags":{"shape":"TagList"}, "StorageEncrypted":{ @@ -7475,11 +7664,11 @@ }, "KmsKeyId":{ "shape":"String", - "documentation":"

The KMS key identifier for an encrypted DB cluster.

The KMS key identifier is the Amazon Resource Name (ARN) for the KMS encryption key. If you are creating a DB cluster with the same AWS account that owns the KMS encryption key used to encrypt the new DB cluster, then you can use the KMS key alias instead of the ARN for the KM encryption key.

If the StorageEncrypted parameter is true, and you do not specify a value for the KmsKeyId parameter, then Amazon RDS will use your default encryption key. AWS KMS creates the default encryption key for your AWS account. Your AWS account has a different default encryption key for each AWS region.

" + "documentation":"

The AWS KMS key identifier for an encrypted DB cluster.

The KMS key identifier is the Amazon Resource Name (ARN) for the KMS encryption key. If you are creating a DB cluster with the same AWS account that owns the KMS encryption key used to encrypt the new DB cluster, then you can use the KMS key alias instead of the ARN for the KMS encryption key.

If the StorageEncrypted parameter is true, and you do not specify a value for the KmsKeyId parameter, then Amazon RDS will use your default encryption key. AWS KMS creates the default encryption key for your AWS account. Your AWS account has a different default encryption key for each AWS Region.

" }, "EnableIAMDatabaseAuthentication":{ "shape":"BooleanOptional", - "documentation":"

A Boolean value that is true to enable mapping of AWS Identity and Access Management (IAM) accounts to database accounts, and otherwise false.

Default: false

" + "documentation":"

True to enable mapping of AWS Identity and Access Management (IAM) accounts to database accounts, and otherwise false.

Default: false

" }, "SourceEngine":{ "shape":"String", @@ -7523,11 +7712,11 @@ }, "DBClusterIdentifier":{ "shape":"String", - "documentation":"

The name of the DB cluster to create from the DB cluster snapshot. This parameter isn't case-sensitive.

Constraints:

Example: my-snapshot-id

" + "documentation":"

The name of the DB cluster to create from the DB snapshot or DB cluster snapshot. This parameter isn't case-sensitive.

Constraints:

Example: my-snapshot-id

" }, "SnapshotIdentifier":{ "shape":"String", - "documentation":"

The identifier for the DB cluster snapshot to restore from.

Constraints:

" + "documentation":"

The identifier for the DB snapshot or DB cluster snapshot to restore from.

You can use either the name or the Amazon Resource Name (ARN) to specify a DB cluster snapshot. However, you can use only the ARN to specify a DB snapshot.

Constraints:

" }, "Engine":{ "shape":"String", @@ -7543,7 +7732,7 @@ }, "DBSubnetGroupName":{ "shape":"String", - "documentation":"

The name of the DB subnet group to use for the new DB cluster.

Constraints: Must contain no more than 255 alphanumeric characters, periods, underscores, spaces, or hyphens. Must not be default.

Example: mySubnetgroup

" + "documentation":"

The name of the DB subnet group to use for the new DB cluster.

Constraints: If supplied, must match the name of an existing DBSubnetGroup.

Example: mySubnetgroup

" }, "DatabaseName":{ "shape":"String", @@ -7563,11 +7752,11 @@ }, "KmsKeyId":{ "shape":"String", - "documentation":"

The KMS key identifier to use when restoring an encrypted DB cluster from a DB cluster snapshot.

The KMS key identifier is the Amazon Resource Name (ARN) for the KMS encryption key. If you are restoring a DB cluster with the same AWS account that owns the KMS encryption key used to encrypt the new DB cluster, then you can use the KMS key alias instead of the ARN for the KMS encryption key.

If you do not specify a value for the KmsKeyId parameter, then the following will occur:

" + "documentation":"

The AWS KMS key identifier to use when restoring an encrypted DB cluster from a DB snapshot or DB cluster snapshot.

The KMS key identifier is the Amazon Resource Name (ARN) for the KMS encryption key. If you are restoring a DB cluster with the same AWS account that owns the KMS encryption key used to encrypt the new DB cluster, then you can use the KMS key alias instead of the ARN for the KMS encryption key.

If you do not specify a value for the KmsKeyId parameter, then the following will occur:

" }, "EnableIAMDatabaseAuthentication":{ "shape":"BooleanOptional", - "documentation":"

A Boolean value that is true to enable mapping of AWS Identity and Access Management (IAM) accounts to database accounts, and otherwise false.

Default: false

" + "documentation":"

True to enable mapping of AWS Identity and Access Management (IAM) accounts to database accounts, and otherwise false.

Default: false

" } }, "documentation":"

" @@ -7587,15 +7776,15 @@ "members":{ "DBClusterIdentifier":{ "shape":"String", - "documentation":"

The name of the new DB cluster to be created.

Constraints:

" + "documentation":"

The name of the new DB cluster to be created.

Constraints:

" }, "RestoreType":{ "shape":"String", - "documentation":"

The type of restore to be performed. You can specify one of the following values:

Constraints: You cannot specify copy-on-write if the engine version of the source DB cluster is earlier than 1.11.

If you don't specify a RestoreType value, then the new DB cluster is restored as a full copy of the source DB cluster.

" + "documentation":"

The type of restore to be performed. You can specify one of the following values:

Constraints: You can't specify copy-on-write if the engine version of the source DB cluster is earlier than 1.11.

If you don't specify a RestoreType value, then the new DB cluster is restored as a full copy of the source DB cluster.

" }, "SourceDBClusterIdentifier":{ "shape":"String", - "documentation":"

The identifier of the source DB cluster from which to restore.

Constraints:

" + "documentation":"

The identifier of the source DB cluster from which to restore.

Constraints:

" }, "RestoreToTime":{ "shape":"TStamp", @@ -7611,7 +7800,7 @@ }, "DBSubnetGroupName":{ "shape":"String", - "documentation":"

The DB subnet group name to use for the new DB cluster.

Constraints: Must contain no more than 255 alphanumeric characters, periods, underscores, spaces, or hyphens. Must not be default.

Example: mySubnetgroup

" + "documentation":"

The DB subnet group name to use for the new DB cluster.

Constraints: If supplied, must match the name of an existing DBSubnetGroup.

Example: mySubnetgroup

" }, "OptionGroupName":{ "shape":"String", @@ -7624,11 +7813,11 @@ "Tags":{"shape":"TagList"}, "KmsKeyId":{ "shape":"String", - "documentation":"

The KMS key identifier to use when restoring an encrypted DB cluster from an encrypted DB cluster.

The KMS key identifier is the Amazon Resource Name (ARN) for the KMS encryption key. If you are restoring a DB cluster with the same AWS account that owns the KMS encryption key used to encrypt the new DB cluster, then you can use the KMS key alias instead of the ARN for the KMS encryption key.

You can restore to a new DB cluster and encrypt the new DB cluster with a KMS key that is different than the KMS key used to encrypt the source DB cluster. The new DB cluster will be encrypted with the KMS key identified by the KmsKeyId parameter.

If you do not specify a value for the KmsKeyId parameter, then the following will occur:

If DBClusterIdentifier refers to a DB cluster that is not encrypted, then the restore request is rejected.

" + "documentation":"

The AWS KMS key identifier to use when restoring an encrypted DB cluster from an encrypted DB cluster.

The KMS key identifier is the Amazon Resource Name (ARN) for the KMS encryption key. If you are restoring a DB cluster with the same AWS account that owns the KMS encryption key used to encrypt the new DB cluster, then you can use the KMS key alias instead of the ARN for the KMS encryption key.

You can restore to a new DB cluster and encrypt the new DB cluster with a KMS key that is different than the KMS key used to encrypt the source DB cluster. The new DB cluster is encrypted with the KMS key identified by the KmsKeyId parameter.

If you do not specify a value for the KmsKeyId parameter, then the following will occur:

If DBClusterIdentifier refers to a DB cluster that is not encrypted, then the restore request is rejected.

" }, "EnableIAMDatabaseAuthentication":{ "shape":"BooleanOptional", - "documentation":"

A Boolean value that is true to enable mapping of AWS Identity and Access Management (IAM) accounts to database accounts, and otherwise false.

Default: false

" + "documentation":"

True to enable mapping of AWS Identity and Access Management (IAM) accounts to database accounts, and otherwise false.

Default: false
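A similar hedged sketch for the point-in-time restore whose parameters are documented above, again using boto3 for illustration; the cluster identifiers are hypothetical, and UseLatestRestorableTime is shown here as an alternative to passing an explicit RestoreToTime.

```python
import boto3

rds = boto3.client("rds")

# Clone a cluster to its latest restorable time using copy-on-write.
# Cluster identifiers are hypothetical placeholders.
response = rds.restore_db_cluster_to_point_in_time(
    DBClusterIdentifier="my-clone-cluster",
    SourceDBClusterIdentifier="my-source-cluster",
    RestoreType="copy-on-write",
    UseLatestRestorableTime=True,
    EnableIAMDatabaseAuthentication=True,
)
print(response["DBCluster"]["Status"])
```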

" } }, "documentation":"

" @@ -7648,15 +7837,15 @@ "members":{ "DBInstanceIdentifier":{ "shape":"String", - "documentation":"

Name of the DB instance to create from the DB snapshot. This parameter isn't case-sensitive.

Constraints:

Example: my-snapshot-id

" + "documentation":"

Name of the DB instance to create from the DB snapshot. This parameter isn't case-sensitive.

Constraints:

Example: my-snapshot-id

" }, "DBSnapshotIdentifier":{ "shape":"String", - "documentation":"

The identifier for the DB snapshot to restore from.

Constraints:

If you are restoring from a shared manual DB snapshot, the DBSnapshotIdentifier must be the ARN of the shared DB snapshot.

" + "documentation":"

The identifier for the DB snapshot to restore from.

Constraints:

" }, "DBInstanceClass":{ "shape":"String", - "documentation":"

The compute and memory capacity of the Amazon RDS DB instance.

Valid Values: db.t1.micro | db.m1.small | db.m1.medium | db.m1.large | db.m1.xlarge | db.m2.2xlarge | db.m2.4xlarge | db.m3.medium | db.m3.large | db.m3.xlarge | db.m3.2xlarge | db.m4.large | db.m4.xlarge | db.m4.2xlarge | db.m4.4xlarge | db.m4.10xlarge | db.r3.large | db.r3.xlarge | db.r3.2xlarge | db.r3.4xlarge | db.r3.8xlarge | db.t2.micro | db.t2.small | db.t2.medium | db.t2.large

" + "documentation":"

The compute and memory capacity of the Amazon RDS DB instance, for example, db.m4.large. Not all DB instance classes are available in all AWS Regions, or for all database engines. For the full list of DB instance classes, and availability for your engine, see DB Instance Class in the Amazon RDS User Guide.

Default: The same DBInstanceClass as the original DB instance.

" }, "Port":{ "shape":"IntegerOptional", @@ -7664,23 +7853,23 @@ }, "AvailabilityZone":{ "shape":"String", - "documentation":"

The EC2 Availability Zone that the database instance will be created in.

Default: A random, system-chosen Availability Zone.

Constraint: You cannot specify the AvailabilityZone parameter if the MultiAZ parameter is set to true.

Example: us-east-1a

" + "documentation":"

The EC2 Availability Zone that the DB instance is created in.

Default: A random, system-chosen Availability Zone.

Constraint: You can't specify the AvailabilityZone parameter if the MultiAZ parameter is set to true.

Example: us-east-1a

" }, "DBSubnetGroupName":{ "shape":"String", - "documentation":"

The DB subnet group name to use for the new instance.

Constraints: Must contain no more than 255 alphanumeric characters, periods, underscores, spaces, or hyphens. Must not be default.

Example: mySubnetgroup

" + "documentation":"

The DB subnet group name to use for the new instance.

Constraints: If supplied, must match the name of an existing DBSubnetGroup.

Example: mySubnetgroup

" }, "MultiAZ":{ "shape":"BooleanOptional", - "documentation":"

Specifies if the DB instance is a Multi-AZ deployment.

Constraint: You cannot specify the AvailabilityZone parameter if the MultiAZ parameter is set to true.

" + "documentation":"

Specifies if the DB instance is a Multi-AZ deployment.

Constraint: You can't specify the AvailabilityZone parameter if the MultiAZ parameter is set to true.

" }, "PubliclyAccessible":{ "shape":"BooleanOptional", - "documentation":"

Specifies the accessibility options for the DB instance. A value of true specifies an Internet-facing instance with a publicly resolvable DNS name, which resolves to a public IP address. A value of false specifies an internal instance with a DNS name that resolves to a private IP address.

Default: The default behavior varies depending on whether a VPC has been requested or not. The following list shows the default behavior in each case.

If no DB subnet group has been specified as part of the request and the PubliclyAccessible value has not been set, the DB instance will be publicly accessible. If a specific DB subnet group has been specified as part of the request and the PubliclyAccessible value has not been set, the DB instance will be private.

" + "documentation":"

Specifies the accessibility options for the DB instance. A value of true specifies an Internet-facing instance with a publicly resolvable DNS name, which resolves to a public IP address. A value of false specifies an internal instance with a DNS name that resolves to a private IP address.

Default: The default behavior varies depending on whether a VPC has been requested or not. The following list shows the default behavior in each case.

If no DB subnet group has been specified as part of the request and the PubliclyAccessible value has not been set, the DB instance is publicly accessible. If a specific DB subnet group has been specified as part of the request and the PubliclyAccessible value has not been set, the DB instance is private.

" }, "AutoMinorVersionUpgrade":{ "shape":"BooleanOptional", - "documentation":"

Indicates that minor version upgrades will be applied automatically to the DB instance during the maintenance window.

" + "documentation":"

Indicates that minor version upgrades are applied automatically to the DB instance during the maintenance window.

" }, "LicenseModel":{ "shape":"String", @@ -7692,28 +7881,28 @@ }, "Engine":{ "shape":"String", - "documentation":"

The database engine to use for the new instance.

Default: The same as source

Constraint: Must be compatible with the engine of the source. You can restore a MariaDB 10.1 DB instance from a MySQL 5.6 snapshot.

Valid Values: MySQL | mariadb | oracle-se1 | oracle-se | oracle-ee | sqlserver-ee | sqlserver-se | sqlserver-ex | sqlserver-web | postgres | aurora

" + "documentation":"

The database engine to use for the new instance.

Default: The same as source

Constraint: Must be compatible with the engine of the source. You can restore a MariaDB 10.1 DB instance from a MySQL 5.6 snapshot.

Valid Values:

" }, "Iops":{ "shape":"IntegerOptional", - "documentation":"

Specifies the amount of provisioned IOPS for the DB instance, expressed in I/O operations per second. If this parameter is not specified, the IOPS value will be taken from the backup. If this parameter is set to 0, the new instance will be converted to a non-PIOPS instance, which will take additional time, though your DB instance will be available for connections before the conversion starts.

Constraints: Must be an integer greater than 1000.

SQL Server

Setting the IOPS value for the SQL Server database engine is not supported.

" + "documentation":"

Specifies the amount of provisioned IOPS for the DB instance, expressed in I/O operations per second. If this parameter is not specified, the IOPS value is taken from the backup. If this parameter is set to 0, the new instance is converted to a non-PIOPS instance. The conversion takes additional time, though your DB instance is available for connections before the conversion starts.

The provisioned IOPS value must follow the requirements for your database engine. For more information, see Amazon RDS Provisioned IOPS Storage to Improve Performance.

Constraints: Must be an integer greater than 1000.

" }, "OptionGroupName":{ "shape":"String", - "documentation":"

The name of the option group to be used for the restored DB instance.

Permanent options, such as the TDE option for Oracle Advanced Security TDE, cannot be removed from an option group, and that option group cannot be removed from a DB instance once it is associated with a DB instance

" + "documentation":"

The name of the option group to be used for the restored DB instance.

Permanent options, such as the TDE option for Oracle Advanced Security TDE, can't be removed from an option group, and that option group can't be removed from a DB instance once it is associated with a DB instance.

" }, "Tags":{"shape":"TagList"}, "StorageType":{ "shape":"String", - "documentation":"

Specifies the storage type to be associated with the DB instance.

Valid values: standard | gp2 | io1

If you specify io1, you must also include a value for the Iops parameter.

Default: io1 if the Iops parameter is specified; otherwise standard

" + "documentation":"

Specifies the storage type to be associated with the DB instance.

Valid values: standard | gp2 | io1

If you specify io1, you must also include a value for the Iops parameter.

Default: io1 if the Iops parameter is specified, otherwise standard

" }, "TdeCredentialArn":{ "shape":"String", - "documentation":"

The ARN from the Key Store with which to associate the instance for TDE encryption.

" + "documentation":"

The ARN from the key store with which to associate the instance for TDE encryption.

" }, "TdeCredentialPassword":{ "shape":"String", - "documentation":"

The password for the given ARN from the Key Store in order to access the device.

" + "documentation":"

The password for the given ARN from the key store in order to access the device.

" }, "Domain":{ "shape":"String", @@ -7721,7 +7910,7 @@ }, "CopyTagsToSnapshot":{ "shape":"BooleanOptional", - "documentation":"

True to copy all tags from the restored DB instance to snapshots of the DB instance; otherwise false. The default is false.

" + "documentation":"

True to copy all tags from the restored DB instance to snapshots of the DB instance, and otherwise false. The default is false.

" }, "DomainIAMRoleName":{ "shape":"String", @@ -7729,7 +7918,7 @@ }, "EnableIAMDatabaseAuthentication":{ "shape":"BooleanOptional", - "documentation":"

True to enable mapping of AWS Identity and Access Management (IAM) accounts to database accounts; otherwise false.

You can enable IAM database authentication for the following database engines

Default: false

" + "documentation":"

True to enable mapping of AWS Identity and Access Management (IAM) accounts to database accounts, and otherwise false.

You can enable IAM database authentication for the following database engines

Default: false
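For completeness, a minimal boto3 sketch of the restore-from-DB-snapshot call whose members are documented above; the instance and snapshot identifiers and the subnet group are hypothetical placeholders.

```python
import boto3

rds = boto3.client("rds")

# Restore a new DB instance from a DB snapshot.
# Identifiers and subnet group are hypothetical placeholders.
response = rds.restore_db_instance_from_db_snapshot(
    DBInstanceIdentifier="my-restored-instance",
    DBSnapshotIdentifier="my-snapshot-id",
    DBInstanceClass="db.m4.large",
    DBSubnetGroupName="mySubnetgroup",
    MultiAZ=False,
    AutoMinorVersionUpgrade=True,
    EnableIAMDatabaseAuthentication=True,
)
print(response["DBInstance"]["DBInstanceStatus"])
```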

" } }, "documentation":"

" @@ -7740,6 +7929,178 @@ "DBInstance":{"shape":"DBInstance"} } }, + "RestoreDBInstanceFromS3Message":{ + "type":"structure", + "required":[ + "DBInstanceIdentifier", + "DBInstanceClass", + "Engine", + "SourceEngine", + "SourceEngineVersion", + "S3BucketName", + "S3IngestionRoleArn" + ], + "members":{ + "DBName":{ + "shape":"String", + "documentation":"

The name of the database to create when the DB instance is created. Follow the naming rules specified in CreateDBInstance.

" + }, + "DBInstanceIdentifier":{ + "shape":"String", + "documentation":"

The DB instance identifier. This parameter is stored as a lowercase string.

Constraints:

Example: mydbinstance

" + }, + "AllocatedStorage":{ + "shape":"IntegerOptional", + "documentation":"

The amount of storage (in gigabytes) to allocate initially for the DB instance. Follow the allocation rules specified in CreateDBInstance.

Be sure to allocate enough storage for your new DB instance so that the restore operation can succeed. You can also allocate additional storage for future growth.

" + }, + "DBInstanceClass":{ + "shape":"String", + "documentation":"

The compute and memory capacity of the DB instance, for example, db.m4.large. Not all DB instance classes are available in all AWS Regions, or for all database engines. For the full list of DB instance classes, and availability for your engine, see DB Instance Class in the Amazon RDS User Guide.

Importing from Amazon S3 is not supported on the db.t2.micro DB instance class.

" + }, + "Engine":{ + "shape":"String", + "documentation":"

The name of the database engine to be used for this instance.

Valid Values: mysql

" + }, + "MasterUsername":{ + "shape":"String", + "documentation":"

The name for the master user.

Constraints:

" + }, + "MasterUserPassword":{ + "shape":"String", + "documentation":"

The password for the master user. The password can include any printable ASCII character except \"/\", \"\"\", or \"@\".

Constraints: Must contain from 8 to 41 characters.

" + }, + "DBSecurityGroups":{ + "shape":"DBSecurityGroupNameList", + "documentation":"

A list of DB security groups to associate with this DB instance.

Default: The default DB security group for the database engine.

" + }, + "VpcSecurityGroupIds":{ + "shape":"VpcSecurityGroupIdList", + "documentation":"

A list of VPC security groups to associate with this DB instance.

" + }, + "AvailabilityZone":{ + "shape":"String", + "documentation":"

The Availability Zone that the DB instance is created in. For information about AWS Regions and Availability Zones, see Regions and Availability Zones.

Default: A random, system-chosen Availability Zone in the endpoint's AWS Region.

Example: us-east-1d

Constraint: The AvailabilityZone parameter can't be specified if the MultiAZ parameter is set to true. The specified Availability Zone must be in the same AWS Region as the current endpoint.

" + }, + "DBSubnetGroupName":{ + "shape":"String", + "documentation":"

A DB subnet group to associate with this DB instance.

" + }, + "PreferredMaintenanceWindow":{ + "shape":"String", + "documentation":"

The time range each week during which system maintenance can occur, in Universal Coordinated Time (UTC). For more information, see Amazon RDS Maintenance Window.

Constraints:

" + }, + "DBParameterGroupName":{ + "shape":"String", + "documentation":"

The name of the DB parameter group to associate with this DB instance. If this argument is omitted, the default parameter group for the specified engine is used.

" + }, + "BackupRetentionPeriod":{ + "shape":"IntegerOptional", + "documentation":"

The number of days for which automated backups are retained. Setting this parameter to a positive number enables backups. For more information, see CreateDBInstance.

" + }, + "PreferredBackupWindow":{ + "shape":"String", + "documentation":"

The time range each day during which automated backups are created if automated backups are enabled. For more information, see The Backup Window.

Constraints:

" + }, + "Port":{ + "shape":"IntegerOptional", + "documentation":"

The port number on which the database accepts connections.

Type: Integer

Valid Values: 1150-65535

Default: 3306

" + }, + "MultiAZ":{ + "shape":"BooleanOptional", + "documentation":"

Specifies whether the DB instance is a Multi-AZ deployment. If MultiAZ is set to true, you can't set the AvailabilityZone parameter.

" + }, + "EngineVersion":{ + "shape":"String", + "documentation":"

The version number of the database engine to use. Choose the latest minor version of your database engine as specified in CreateDBInstance.

" + }, + "AutoMinorVersionUpgrade":{ + "shape":"BooleanOptional", + "documentation":"

True to indicate that minor engine upgrades are applied automatically to the DB instance during the maintenance window, and otherwise false.

Default: true

" + }, + "LicenseModel":{ + "shape":"String", + "documentation":"

The license model for this DB instance. Use general-public-license.

" + }, + "Iops":{ + "shape":"IntegerOptional", + "documentation":"

The amount of Provisioned IOPS (input/output operations per second) to allocate initially for the DB instance. For information about valid Iops values, see Amazon RDS Provisioned IOPS Storage to Improve Performance.

" + }, + "OptionGroupName":{ + "shape":"String", + "documentation":"

The name of the option group to associate with this DB instance. If this argument is omitted, the default option group for the specified engine is used.

" + }, + "PubliclyAccessible":{ + "shape":"BooleanOptional", + "documentation":"

Specifies whether the DB instance is publicly accessible or not. For more information, see CreateDBInstance.

" + }, + "Tags":{ + "shape":"TagList", + "documentation":"

A list of tags to associate with this DB instance. For more information, see Tagging Amazon RDS Resources.

" + }, + "StorageType":{ + "shape":"String", + "documentation":"

Specifies the storage type to be associated with the DB instance.

Valid values: standard | gp2 | io1

If you specify io1, you must also include a value for the Iops parameter.

Default: io1 if the Iops parameter is specified; otherwise standard

" + }, + "StorageEncrypted":{ + "shape":"BooleanOptional", + "documentation":"

Specifies whether the new DB instance is encrypted or not.

" + }, + "KmsKeyId":{ + "shape":"String", + "documentation":"

The AWS KMS key identifier for an encrypted DB instance.

The KMS key identifier is the Amazon Resource Name (ARN) for the KMS encryption key. If you are creating a DB instance with the same AWS account that owns the KMS encryption key used to encrypt the new DB instance, then you can use the KMS key alias instead of the ARN for the KMS encryption key.

If the StorageEncrypted parameter is true, and you do not specify a value for the KmsKeyId parameter, then Amazon RDS will use your default encryption key. AWS KMS creates the default encryption key for your AWS account. Your AWS account has a different default encryption key for each AWS Region.

" + }, + "CopyTagsToSnapshot":{ + "shape":"BooleanOptional", + "documentation":"

True to copy all tags from the DB instance to snapshots of the DB instance, and otherwise false.

Default: false.

" + }, + "MonitoringInterval":{ + "shape":"IntegerOptional", + "documentation":"

The interval, in seconds, between points when Enhanced Monitoring metrics are collected for the DB instance. To disable collecting Enhanced Monitoring metrics, specify 0.

If MonitoringRoleArn is specified, then you must also set MonitoringInterval to a value other than 0.

Valid Values: 0, 1, 5, 10, 15, 30, 60

Default: 0

" + }, + "MonitoringRoleArn":{ + "shape":"String", + "documentation":"

The ARN for the IAM role that permits RDS to send enhanced monitoring metrics to Amazon CloudWatch Logs. For example, arn:aws:iam::123456789012:role/emaccess. For information on creating a monitoring role, see Setting Up and Enabling Enhanced Monitoring.

If MonitoringInterval is set to a value other than 0, then you must supply a MonitoringRoleArn value.

" + }, + "EnableIAMDatabaseAuthentication":{ + "shape":"BooleanOptional", + "documentation":"

True to enable mapping of AWS Identity and Access Management (IAM) accounts to database accounts, and otherwise false.

Default: false

" + }, + "SourceEngine":{ + "shape":"String", + "documentation":"

The name of the engine of your source database.

Valid Values: mysql

" + }, + "SourceEngineVersion":{ + "shape":"String", + "documentation":"

The engine version of your source database.

Valid Values: 5.6

" + }, + "S3BucketName":{ + "shape":"String", + "documentation":"

The name of your Amazon S3 bucket that contains your database backup file.

" + }, + "S3Prefix":{ + "shape":"String", + "documentation":"

The prefix of your Amazon S3 bucket.

" + }, + "S3IngestionRoleArn":{ + "shape":"String", + "documentation":"

An AWS Identity and Access Management (IAM) role to allow Amazon RDS to access your Amazon S3 bucket.

" + }, + "EnablePerformanceInsights":{ + "shape":"BooleanOptional", + "documentation":"

True to enable Performance Insights for the DB instance, and otherwise false.

" + }, + "PerformanceInsightsKMSKeyId":{ + "shape":"String", + "documentation":"

The AWS KMS key identifier for encryption of Performance Insights data. The KMS key ID is the Amazon Resource Name (ARN), the KMS key identifier, or the KMS key alias for the KMS encryption key.
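The required members listed at the top of this structure map directly onto a single restore-from-S3 call. A minimal sketch using boto3 for illustration (the generated Java client presumably surfaces the same operation); the bucket name, role ARN, and identifiers are hypothetical placeholders.

```python
import boto3

rds = boto3.client("rds")

# Ingest a MySQL 5.6 backup that was uploaded to S3 into a new DB instance.
# Bucket name, role ARN, and identifiers are hypothetical placeholders.
response = rds.restore_db_instance_from_s3(
    DBInstanceIdentifier="mydbinstance",
    DBInstanceClass="db.m4.large",
    Engine="mysql",
    SourceEngine="mysql",
    SourceEngineVersion="5.6",
    S3BucketName="my-backup-bucket",
    S3Prefix="backups/",
    S3IngestionRoleArn="arn:aws:iam::123456789012:role/rds-s3-ingest",
    AllocatedStorage=100,
)
print(response["DBInstance"]["DBInstanceStatus"])
```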

" + } + } + }, + "RestoreDBInstanceFromS3Result":{ + "type":"structure", + "members":{ + "DBInstance":{"shape":"DBInstance"} + } + }, "RestoreDBInstanceToPointInTimeMessage":{ "type":"structure", "required":[ @@ -7749,11 +8110,11 @@ "members":{ "SourceDBInstanceIdentifier":{ "shape":"String", - "documentation":"

The identifier of the source DB instance from which to restore.

Constraints:

" + "documentation":"

The identifier of the source DB instance from which to restore.

Constraints:

" }, "TargetDBInstanceIdentifier":{ "shape":"String", - "documentation":"

The name of the new database instance to be created.

Constraints:

" + "documentation":"

The name of the new DB instance to be created.

Constraints:

" }, "RestoreTime":{ "shape":"TStamp", @@ -7765,7 +8126,7 @@ }, "DBInstanceClass":{ "shape":"String", - "documentation":"

The compute and memory capacity of the Amazon RDS DB instance.

Valid Values: db.t1.micro | db.m1.small | db.m1.medium | db.m1.large | db.m1.xlarge | db.m2.2xlarge | db.m2.4xlarge | db.m3.medium | db.m3.large | db.m3.xlarge | db.m3.2xlarge | db.m4.large | db.m4.xlarge | db.m4.2xlarge | db.m4.4xlarge | db.m4.10xlarge | db.r3.large | db.r3.xlarge | db.r3.2xlarge | db.r3.4xlarge | db.r3.8xlarge | db.t2.micro | db.t2.small | db.t2.medium | db.t2.large

Default: The same DBInstanceClass as the original DB instance.

" + "documentation":"

The compute and memory capacity of the Amazon RDS DB instance, for example, db.m4.large. Not all DB instance classes are available in all AWS Regions, or for all database engines. For the full list of DB instance classes, and availability for your engine, see DB Instance Class in the Amazon RDS User Guide.

Default: The same DBInstanceClass as the original DB instance.

" }, "Port":{ "shape":"IntegerOptional", @@ -7773,23 +8134,23 @@ }, "AvailabilityZone":{ "shape":"String", - "documentation":"

The EC2 Availability Zone that the database instance will be created in.

Default: A random, system-chosen Availability Zone.

Constraint: You cannot specify the AvailabilityZone parameter if the MultiAZ parameter is set to true.

Example: us-east-1a

" + "documentation":"

The EC2 Availability Zone that the DB instance is created in.

Default: A random, system-chosen Availability Zone.

Constraint: You can't specify the AvailabilityZone parameter if the MultiAZ parameter is set to true.

Example: us-east-1a

" }, "DBSubnetGroupName":{ "shape":"String", - "documentation":"

The DB subnet group name to use for the new instance.

Constraints: Must contain no more than 255 alphanumeric characters, periods, underscores, spaces, or hyphens. Must not be default.

Example: mySubnetgroup

" + "documentation":"

The DB subnet group name to use for the new instance.

Constraints: If supplied, must match the name of an existing DBSubnetGroup.

Example: mySubnetgroup

" }, "MultiAZ":{ "shape":"BooleanOptional", - "documentation":"

Specifies if the DB instance is a Multi-AZ deployment.

Constraint: You cannot specify the AvailabilityZone parameter if the MultiAZ parameter is set to true.

" + "documentation":"

Specifies if the DB instance is a Multi-AZ deployment.

Constraint: You can't specify the AvailabilityZone parameter if the MultiAZ parameter is set to true.

" }, "PubliclyAccessible":{ "shape":"BooleanOptional", - "documentation":"

Specifies the accessibility options for the DB instance. A value of true specifies an Internet-facing instance with a publicly resolvable DNS name, which resolves to a public IP address. A value of false specifies an internal instance with a DNS name that resolves to a private IP address.

Default: The default behavior varies depending on whether a VPC has been requested or not. The following list shows the default behavior in each case.

If no DB subnet group has been specified as part of the request and the PubliclyAccessible value has not been set, the DB instance will be publicly accessible. If a specific DB subnet group has been specified as part of the request and the PubliclyAccessible value has not been set, the DB instance will be private.

" + "documentation":"

Specifies the accessibility options for the DB instance. A value of true specifies an Internet-facing instance with a publicly resolvable DNS name, which resolves to a public IP address. A value of false specifies an internal instance with a DNS name that resolves to a private IP address.

Default: The default behavior varies depending on whether a VPC has been requested or not. The following list shows the default behavior in each case.

If no DB subnet group has been specified as part of the request and the PubliclyAccessible value has not been set, the DB instance is publicly accessible. If a specific DB subnet group has been specified as part of the request and the PubliclyAccessible value has not been set, the DB instance is private.

" }, "AutoMinorVersionUpgrade":{ "shape":"BooleanOptional", - "documentation":"

Indicates that minor version upgrades will be applied automatically to the DB instance during the maintenance window.

" + "documentation":"

Indicates that minor version upgrades are applied automatically to the DB instance during the maintenance window.

" }, "LicenseModel":{ "shape":"String", @@ -7801,7 +8162,7 @@ }, "Engine":{ "shape":"String", - "documentation":"

The database engine to use for the new instance.

Default: The same as source

Constraint: Must be compatible with the engine of the source

Valid Values: MySQL | mariadb | oracle-se1 | oracle-se | oracle-ee | sqlserver-ee | sqlserver-se | sqlserver-ex | sqlserver-web | postgres | aurora

" + "documentation":"

The database engine to use for the new instance.

Default: The same as source

Constraint: Must be compatible with the engine of the source

Valid Values:

" }, "Iops":{ "shape":"IntegerOptional", @@ -7809,24 +8170,24 @@ }, "OptionGroupName":{ "shape":"String", - "documentation":"

The name of the option group to be used for the restored DB instance.

Permanent options, such as the TDE option for Oracle Advanced Security TDE, cannot be removed from an option group, and that option group cannot be removed from a DB instance once it is associated with a DB instance

" + "documentation":"

The name of the option group to be used for the restored DB instance.

Permanent options, such as the TDE option for Oracle Advanced Security TDE, can't be removed from an option group, and that option group can't be removed from a DB instance once it is associated with a DB instance.

" }, "CopyTagsToSnapshot":{ "shape":"BooleanOptional", - "documentation":"

True to copy all tags from the restored DB instance to snapshots of the DB instance; otherwise false. The default is false.

" + "documentation":"

True to copy all tags from the restored DB instance to snapshots of the DB instance, and otherwise false. The default is false.

" }, "Tags":{"shape":"TagList"}, "StorageType":{ "shape":"String", - "documentation":"

Specifies the storage type to be associated with the DB instance.

Valid values: standard | gp2 | io1

If you specify io1, you must also include a value for the Iops parameter.

Default: io1 if the Iops parameter is specified; otherwise standard

" + "documentation":"

Specifies the storage type to be associated with the DB instance.

Valid values: standard | gp2 | io1

If you specify io1, you must also include a value for the Iops parameter.

Default: io1 if the Iops parameter is specified, otherwise standard

" }, "TdeCredentialArn":{ "shape":"String", - "documentation":"

The ARN from the Key Store with which to associate the instance for TDE encryption.

" + "documentation":"

The ARN from the key store with which to associate the instance for TDE encryption.

" }, "TdeCredentialPassword":{ "shape":"String", - "documentation":"

The password for the given ARN from the Key Store in order to access the device.

" + "documentation":"

The password for the given ARN from the key store in order to access the device.

" }, "Domain":{ "shape":"String", @@ -7838,7 +8199,7 @@ }, "EnableIAMDatabaseAuthentication":{ "shape":"BooleanOptional", - "documentation":"

True to enable mapping of AWS Identity and Access Management (IAM) accounts to database accounts; otherwise false.

You can enable IAM database authentication for the following database engines

Default: false

" + "documentation":"

True to enable mapping of AWS Identity and Access Management (IAM) accounts to database accounts, and otherwise false.

You can enable IAM database authentication for the following database engines

Default: false

" } }, "documentation":"

" @@ -7859,7 +8220,7 @@ }, "CIDRIP":{ "shape":"String", - "documentation":"

The IP range to revoke access from. Must be a valid CIDR range. If CIDRIP is specified, EC2SecurityGroupName, EC2SecurityGroupId and EC2SecurityGroupOwnerId cannot be provided.

" + "documentation":"

The IP range to revoke access from. Must be a valid CIDR range. If CIDRIP is specified, EC2SecurityGroupName, EC2SecurityGroupId and EC2SecurityGroupOwnerId can't be provided.

" }, "EC2SecurityGroupName":{ "shape":"String", @@ -7966,15 +8327,15 @@ "members":{ "RegionName":{ "shape":"String", - "documentation":"

The source region name.

" + "documentation":"

The name of the source AWS Region.

" }, "Endpoint":{ "shape":"String", - "documentation":"

The source region endpoint.

" + "documentation":"

The endpoint for the source AWS Region.

" }, "Status":{ "shape":"String", - "documentation":"

The status of the source region.

" + "documentation":"

The status of the source AWS Region.

" } }, "documentation":"

Contains an AWS Region name as the result of a successful call to the DescribeSourceRegions action.

" @@ -7995,7 +8356,7 @@ }, "SourceRegions":{ "shape":"SourceRegionList", - "documentation":"

A list of SourceRegion instances that contains each source AWS Region that the current region can get a Read Replica or a DB snapshot from.

" + "documentation":"

A list of SourceRegion instances that contains each source AWS Region that the current AWS Region can get a Read Replica or a DB snapshot from.

" } }, "documentation":"

Contains the result of a successful invocation of the DescribeSourceRegions action.

" @@ -8169,11 +8530,11 @@ "members":{ "Key":{ "shape":"String", - "documentation":"

A key is the required name of the tag. The string value can be from 1 to 128 Unicode characters in length and cannot be prefixed with \"aws:\" or \"rds:\". The string can only contain only the set of Unicode letters, digits, white-space, '_', '.', '/', '=', '+', '-' (Java regex: \"^([\\\\p{L}\\\\p{Z}\\\\p{N}_.:/=+\\\\-]*)$\").

" + "documentation":"

A key is the required name of the tag. The string value can be from 1 to 128 Unicode characters in length and can't be prefixed with \"aws:\" or \"rds:\". The string can contain only the set of Unicode letters, digits, white-space, '_', '.', '/', '=', '+', '-' (Java regex: \"^([\\\\p{L}\\\\p{Z}\\\\p{N}_.:/=+\\\\-]*)$\").

" }, "Value":{ "shape":"String", - "documentation":"

A value is the optional value of the tag. The string value can be from 1 to 256 Unicode characters in length and cannot be prefixed with \"aws:\" or \"rds:\". The string can only contain only the set of Unicode letters, digits, white-space, '_', '.', '/', '=', '+', '-' (Java regex: \"^([\\\\p{L}\\\\p{Z}\\\\p{N}_.:/=+\\\\-]*)$\").

" + "documentation":"

A value is the optional value of the tag. The string value can be from 1 to 256 Unicode characters in length and can't be prefixed with \"aws:\" or \"rds:\". The string can contain only the set of Unicode letters, digits, white-space, '_', '.', '/', '=', '+', '-' (Java regex: \"^([\\\\p{L}\\\\p{Z}\\\\p{N}_.:/=+\\\\-]*)$\").

" } }, "documentation":"

Metadata assigned to an Amazon RDS resource consisting of a key-value pair.

" @@ -8184,7 +8545,7 @@ "shape":"Tag", "locationName":"Tag" }, - "documentation":"

A list of tags.

" + "documentation":"

A list of tags. For more information, see Tagging Amazon RDS Resources.

" }, "TagListMessage":{ "type":"structure", @@ -8223,15 +8584,55 @@ }, "AutoUpgrade":{ "shape":"Boolean", - "documentation":"

A value that indicates whether the target version will be applied to any source DB instances that have AutoMinorVersionUpgrade set to true.

" + "documentation":"

A value that indicates whether the target version is applied to any source DB instances that have AutoMinorVersionUpgrade set to true.

" }, "IsMajorVersionUpgrade":{ "shape":"Boolean", - "documentation":"

A value that indicates whether a database engine will be upgraded to a major version.

" + "documentation":"

A value that indicates whether a database engine is upgraded to a major version.

" } }, "documentation":"

The version of the database engine that a DB instance can be upgraded to.

" }, + "ValidDBInstanceModificationsMessage":{ + "type":"structure", + "members":{ + "Storage":{ + "shape":"ValidStorageOptionsList", + "documentation":"

Valid storage options for your DB instance.

" + } + }, + "documentation":"

Information about valid modifications that you can make to your DB instance. Contains the result of a successful call to the DescribeValidDBInstanceModifications action. You can use this information when you call ModifyDBInstance.

", + "wrapper":true + }, + "ValidStorageOptions":{ + "type":"structure", + "members":{ + "StorageType":{ + "shape":"String", + "documentation":"

The valid storage types for your DB instance. For example, gp2, io1.

" + }, + "StorageSize":{ + "shape":"RangeList", + "documentation":"

The valid range of storage in gigabytes. For example, 100 to 6144.

" + }, + "ProvisionedIops":{ + "shape":"RangeList", + "documentation":"

The valid range of provisioned IOPS. For example, 1000-20000.

" + }, + "IopsToStorageRatio":{ + "shape":"DoubleRangeList", + "documentation":"

The valid range of Provisioned IOPS to gigabytes of storage multiplier. For example, 3-10, which means that provisioned IOPS can be between 3 and 10 times storage.

" + } + }, + "documentation":"

Information about valid modifications that you can make to your DB instance. Contains the result of a successful call to the DescribeValidDBInstanceModifications action.
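Since these shapes are returned by DescribeValidDBInstanceModifications and consumed before a ModifyDBInstance call, a small boto3 sketch of that lookup may help; the instance identifier is hypothetical and the response keys follow the shapes defined above.

```python
import boto3

rds = boto3.client("rds")

# Ask RDS which storage settings this instance can currently be modified to.
# The instance identifier is a hypothetical placeholder.
mods = rds.describe_valid_db_instance_modifications(
    DBInstanceIdentifier="mydbinstance"
)
for option in mods["ValidDBInstanceModificationsMessage"]["Storage"]:
    # Each entry is a ValidStorageOptions structure: storage type plus the
    # valid size, IOPS, and IOPS-to-storage ranges.
    print(option["StorageType"], option["StorageSize"], option.get("ProvisionedIops"))
```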

" + }, + "ValidStorageOptionsList":{ + "type":"list", + "member":{ + "shape":"ValidStorageOptions", + "locationName":"ValidStorageOptions" + } + }, "ValidUpgradeTargetList":{ "type":"list", "member":{ @@ -8268,5 +8669,5 @@ } } }, - "documentation":"Amazon Relational Database Service

Amazon Relational Database Service (Amazon RDS) is a web service that makes it easier to set up, operate, and scale a relational database in the cloud. It provides cost-efficient, resizable capacity for an industry-standard relational database and manages common database administration tasks, freeing up developers to focus on what makes their applications and businesses unique.

Amazon RDS gives you access to the capabilities of a MySQL, MariaDB, PostgreSQL, Microsoft SQL Server, Oracle, or Amazon Aurora database server. These capabilities mean that the code, applications, and tools you already use today with your existing databases work with Amazon RDS without modification. Amazon RDS automatically backs up your database and maintains the database software that powers your DB instance. Amazon RDS is flexible: you can scale your database instance's compute resources and storage capacity to meet your application's demand. As with all Amazon Web Services, there are no up-front investments, and you pay only for the resources you use.

This interface reference for Amazon RDS contains documentation for a programming or command line interface you can use to manage Amazon RDS. Note that Amazon RDS is asynchronous, which means that some interfaces might require techniques such as polling or callback functions to determine when a command has been applied. In this reference, the parameter descriptions indicate whether a command is applied immediately, on the next instance reboot, or during the maintenance window. The reference structure is as follows, and we list following some related topics from the user guide.

Amazon RDS API Reference

Amazon RDS User Guide

" + "documentation":"Amazon Relational Database Service

Amazon Relational Database Service (Amazon RDS) is a web service that makes it easier to set up, operate, and scale a relational database in the cloud. It provides cost-efficient, resizable capacity for an industry-standard relational database and manages common database administration tasks, freeing up developers to focus on what makes their applications and businesses unique.

Amazon RDS gives you access to the capabilities of a MySQL, MariaDB, PostgreSQL, Microsoft SQL Server, Oracle, or Amazon Aurora database server. These capabilities mean that the code, applications, and tools you already use today with your existing databases work with Amazon RDS without modification. Amazon RDS automatically backs up your database and maintains the database software that powers your DB instance. Amazon RDS is flexible: you can scale your DB instance's compute resources and storage capacity to meet your application's demand. As with all Amazon Web Services, there are no up-front investments, and you pay only for the resources you use.

This interface reference for Amazon RDS contains documentation for a programming or command line interface you can use to manage Amazon RDS. Note that Amazon RDS is asynchronous, which means that some interfaces might require techniques such as polling or callback functions to determine when a command has been applied. In this reference, the parameter descriptions indicate whether a command is applied immediately, on the next instance reboot, or during the maintenance window. The reference structure is as follows, and we list following some related topics from the user guide.

Amazon RDS API Reference

Amazon RDS User Guide

" } diff --git a/services/rds/src/main/resources/codegen-resources/waiters-2.json b/services/rds/src/main/resources/codegen-resources/waiters-2.json index e75f03b2aa85..6a223a583c51 100644 --- a/services/rds/src/main/resources/codegen-resources/waiters-2.json +++ b/services/rds/src/main/resources/codegen-resources/waiters-2.json @@ -85,6 +85,91 @@ "argument": "DBInstances[].DBInstanceStatus" } ] + }, + "DBSnapshotAvailable": { + "delay": 30, + "operation": "DescribeDBSnapshots", + "maxAttempts": 60, + "acceptors": [ + { + "expected": "available", + "matcher": "pathAll", + "state": "success", + "argument": "DBSnapshots[].Status" + }, + { + "expected": "deleted", + "matcher": "pathAny", + "state": "failure", + "argument": "DBSnapshots[].Status" + }, + { + "expected": "deleting", + "matcher": "pathAny", + "state": "failure", + "argument": "DBSnapshots[].Status" + }, + { + "expected": "failed", + "matcher": "pathAny", + "state": "failure", + "argument": "DBSnapshots[].Status" + }, + { + "expected": "incompatible-restore", + "matcher": "pathAny", + "state": "failure", + "argument": "DBSnapshots[].Status" + }, + { + "expected": "incompatible-parameters", + "matcher": "pathAny", + "state": "failure", + "argument": "DBSnapshots[].Status" + } + ] + }, + "DBSnapshotDeleted": { + "delay": 30, + "operation": "DescribeDBSnapshots", + "maxAttempts": 60, + "acceptors": [ + { + "expected": "deleted", + "matcher": "pathAll", + "state": "success", + "argument": "DBSnapshots[].Status" + }, + { + "expected": "DBSnapshotNotFound", + "matcher": "error", + "state": "success" + }, + { + "expected": "creating", + "matcher": "pathAny", + "state": "failure", + "argument": "DBSnapshots[].Status" + }, + { + "expected": "modifying", + "matcher": "pathAny", + "state": "failure", + "argument": "DBSnapshots[].Status" + }, + { + "expected": "rebooting", + "matcher": "pathAny", + "state": "failure", + "argument": "DBSnapshots[].Status" + }, + { + "expected": "resetting-master-credentials", + "matcher": "pathAny", + "state": "failure", + "argument": "DBSnapshots[].Status" + } + ] } } } diff --git a/services/redshift/src/main/resources/codegen-resources/service-2.json b/services/redshift/src/main/resources/codegen-resources/service-2.json index fa278058b879..c919eb6a4402 100644 --- a/services/redshift/src/main/resources/codegen-resources/service-2.json +++ b/services/redshift/src/main/resources/codegen-resources/service-2.json @@ -577,9 +577,10 @@ "resultWrapper":"DescribeEventSubscriptionsResult" }, "errors":[ - {"shape":"SubscriptionNotFoundFault"} + {"shape":"SubscriptionNotFoundFault"}, + {"shape":"InvalidTagFault"} ], - "documentation":"

Lists descriptions of all the Amazon Redshift event notifications subscription for a customer account. If you specify a subscription name, lists the description for that subscription.

" + "documentation":"

Lists descriptions of all the Amazon Redshift event notification subscriptions for a customer account. If you specify a subscription name, lists the description for that subscription.

If you specify both tag keys and tag values in the same request, Amazon Redshift returns all event notification subscriptions that match any combination of the specified keys and values. For example, if you have owner and environment for tag keys, and admin and test for tag values, all subscriptions that have any combination of those values are returned.

If both tag keys and values are omitted from the request, subscriptions are returned regardless of whether they have tag keys or values associated with them.
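A short boto3 sketch of the tag filtering described above; the tag keys and values mirror the owner/environment and admin/test examples in this documentation.

```python
import boto3

redshift = boto3.client("redshift")

# Return event notification subscriptions that match any combination of
# the given tag keys and values (matching the example in the doc text).
response = redshift.describe_event_subscriptions(
    TagKeys=["owner", "environment"],
    TagValues=["admin", "test"],
)
for subscription in response["EventSubscriptionsList"]:
    print(subscription["CustSubscriptionId"])
```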

" }, "DescribeEvents":{ "name":"DescribeEvents", @@ -855,7 +856,7 @@ {"shape":"ClusterNotFoundFault"}, {"shape":"UnsupportedOperationFault"} ], - "documentation":"

Returns a database user name and temporary password with temporary authorization to log in to an Amazon Redshift database. The action returns the database user name prefixed with IAM: if AutoCreate is False or IAMA: if AutoCreate is True. You can optionally specify one or more database user groups that the user will join at log in. By default, the temporary credentials expire in 900 seconds. You can optionally specify a duration between 900 seconds (15 minutes) and 3600 seconds (60 minutes). For more information, see Generating IAM Database User Credentials in the Amazon Redshift Cluster Management Guide.

The IAM user or role that executes GetClusterCredentials must have an IAM policy attached that allows the redshift:GetClusterCredentials action with access to the dbuser resource on the cluster. The user name specified for dbuser in the IAM policy and the user name specified for the DbUser parameter must match.

If the DbGroups parameter is specified, the IAM policy must allow the redshift:JoinGroup action with access to the listed dbgroups.

In addition, if the AutoCreate parameter is set to True, then the policy must include the redshift:CreateClusterUser privilege.

If the DbName parameter is specified, the IAM policy must allow access to the resource dbname for the specified database name.

" + "documentation":"

Returns a database user name and temporary password with temporary authorization to log on to an Amazon Redshift database. The action returns the database user name prefixed with IAM: if AutoCreate is False or IAMA: if AutoCreate is True. You can optionally specify one or more database user groups that the user will join at log on. By default, the temporary credentials expire in 900 seconds. You can optionally specify a duration between 900 seconds (15 minutes) and 3600 seconds (60 minutes). For more information, see Using IAM Authentication to Generate Database User Credentials in the Amazon Redshift Cluster Management Guide.

The AWS Identity and Access Management (IAM) user or role that executes GetClusterCredentials must have an IAM policy attached that allows access to all necessary actions and resources. For more information about permissions, see Resource Policies for GetClusterCredentials in the Amazon Redshift Cluster Management Guide.

If the DbGroups parameter is specified, the IAM policy must allow the redshift:JoinGroup action with access to the listed dbgroups.

In addition, if the AutoCreate parameter is set to True, then the policy must include the redshift:CreateClusterUser privilege.

If the DbName parameter is specified, the IAM policy must allow access to the resource dbname for the specified database name.
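As a rough illustration of the flow described above, a boto3 sketch that requests temporary credentials; the cluster, user, database, and group names are hypothetical, and the caller is assumed to hold the redshift:GetClusterCredentials, redshift:CreateClusterUser, and redshift:JoinGroup permissions mentioned here.

```python
import boto3

redshift = boto3.client("redshift")

# Request temporary database credentials; the user is auto-created and
# joined to the listed group for the session. Names are hypothetical.
creds = redshift.get_cluster_credentials(
    ClusterIdentifier="examplecluster",
    DbUser="temp_user",
    DbName="exampledb",
    DbGroups=["example_group"],
    AutoCreate=True,
    DurationSeconds=900,
)
# DbUser comes back prefixed with IAM: or IAMA:, as described above.
print(creds["DbUser"], creds["Expiration"])
```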

" }, "ModifyCluster":{ "name":"ModifyCluster", @@ -1467,7 +1468,7 @@ "members":{ "DbUser":{ "shape":"String", - "documentation":"

A database user name that is authorized to log on to the database DbName using the password DbPassword. If the DbGroups parameter is specifed, DbUser is added to the listed groups for the current session. The user name is prefixed with IAM: for an existing user name or IAMA: if the user was auto-created.

" + "documentation":"

A database user name that is authorized to log on to the database DbName using the password DbPassword. If the specified DbUser exists in the database, the new user name has the same database privileges as the user named in DbUser. By default, the user is added to PUBLIC. If the DbGroups parameter is specified, DbUser is added to the listed groups for any sessions created using these credentials.

" }, "DbPassword":{ "shape":"SensitiveString", @@ -1475,10 +1476,10 @@ }, "Expiration":{ "shape":"TStamp", - "documentation":"

The date and time DbPassword expires.

" + "documentation":"

The date and time the password in DbPassword expires.

" } }, - "documentation":"

Temporary credentials with authorization to log in to an Amazon Redshift database.

" + "documentation":"

Temporary credentials with authorization to log on to an Amazon Redshift database.

" }, "ClusterIamRole":{ "type":"structure", @@ -2903,7 +2904,15 @@ }, "Marker":{ "shape":"String", - "documentation":"

An optional parameter that specifies the starting point to return a set of response records. When the results of a DescribeEventSubscriptions request exceed the value specified in MaxRecords, AWS returns a value in the Marker field of the response. You can retrieve the next set of response records by providing the returned marker value in the Marker parameter and retrying the request.

" + "documentation":"

An optional parameter that specifies the starting point to return a set of response records. When the results of a DescribeEventSubscriptions request exceed the value specified in MaxRecords, AWS returns a value in the Marker field of the response. You can retrieve the next set of response records by providing the returned marker value in the Marker parameter and retrying the request.

" + }, + "TagKeys":{ + "shape":"TagKeyList", + "documentation":"

A tag key or keys for which you want to return all matching event notification subscriptions that are associated with the specified key or keys. For example, suppose that you have subscriptions that are tagged with keys called owner and environment. If you specify both of these tag keys in the request, Amazon Redshift returns a response with the subscriptions that have either or both of these tag keys associated with them.

" + }, + "TagValues":{ + "shape":"TagValueList", + "documentation":"

A tag value or values for which you want to return all matching event notification subscriptions that are associated with the specified tag value or values. For example, suppose that you have subscriptions that are tagged with values called admin and test. If you specify both of these tag values in the request, Amazon Redshift returns a response with the subscriptions that have either or both of these tag values associated with them.

" } }, "documentation":"

" @@ -3131,7 +3140,7 @@ }, "ResourceType":{ "shape":"String", - "documentation":"

The type of resource with which you want to view tags. Valid resource types are:

For more information about Amazon Redshift resource types and constructing ARNs, go to Constructing an Amazon Redshift Amazon Resource Name (ARN) in the Amazon Redshift Cluster Management Guide.

" + "documentation":"

The type of resource with which you want to view tags. Valid resource types are:

For more information about Amazon Redshift resource types and constructing ARNs, go to Specifying Policy Elements: Actions, Effects, Resources, and Principals in the Amazon Redshift Cluster Management Guide.

" }, "MaxRecords":{ "shape":"IntegerOptional", @@ -3510,11 +3519,11 @@ "members":{ "DbUser":{ "shape":"String", - "documentation":"

The name of a database user. If a user name matching DbUser exists in the database, the temporary user credentials have the same permissions as the existing user. If DbUser doesn't exist in the database and Autocreate is True, a new user is created using the value for DbUser with PUBLIC permissions. If a database user matching the value for DbUser doesn't exist and Autocreate is False, then the command succeeds but the connection attempt will fail because the user doesn't exist in the database.

For more information, see CREATE USER in the Amazon Redshift Database Developer Guide.

Constraints:

" + "documentation":"

The name of a database user. If a user name matching DbUser exists in the database, the temporary user credentials have the same permissions as the existing user. If DbUser doesn't exist in the database and Autocreate is True, a new user is created using the value for DbUser with PUBLIC permissions. If a database user matching the value for DbUser doesn't exist and Autocreate is False, then the command succeeds but the connection attempt will fail because the user doesn't exist in the database.

For more information, see CREATE USER in the Amazon Redshift Database Developer Guide.

Constraints:

" }, "DbName":{ "shape":"String", - "documentation":"

The name of a database that DbUser is authorized to log on to. If DbName is not specified, DbUser can log in to any existing database.

Constraints:

" + "documentation":"

The name of a database that DbUser is authorized to log on to. If DbName is not specified, DbUser can log on to any existing database.

Constraints:

" }, "ClusterIdentifier":{ "shape":"String", @@ -3526,11 +3535,11 @@ }, "AutoCreate":{ "shape":"BooleanOptional", - "documentation":"

Create a database user with the name specified for DbUser if one does not exist.

" + "documentation":"

Create a database user with the name specified for the user named in DbUser if one does not exist.

" }, "DbGroups":{ "shape":"DbGroupList", - "documentation":"

A list of the names of existing database groups that DbUser will join for the current session. If not specified, the new user is added only to PUBLIC.

" + "documentation":"

A list of the names of existing database groups that the user named in DbUser will join for the current session, in addition to any group memberships for an existing user. If not specified, a new user is added only to PUBLIC.

Database group name constraints

" } }, "documentation":"

The request parameters to get cluster credentials.

" @@ -5661,7 +5670,7 @@ }, "ResourceType":{ "shape":"String", - "documentation":"

The type of resource with which the tag is associated. Valid resource types are:

For more information about Amazon Redshift resource types and constructing ARNs, go to Constructing an Amazon Redshift Amazon Resource Name (ARN) in the Amazon Redshift Cluster Management Guide.

" + "documentation":"

The type of resource with which the tag is associated. Valid resource types are:

For more information about Amazon Redshift resource types and constructing ARNs, go to Constructing an Amazon Redshift Amazon Resource Name (ARN) in the Amazon Redshift Cluster Management Guide.

" } }, "documentation":"

A tag and its associated resource.

" diff --git a/services/rekognition/src/main/resources/codegen-resources/examples-1.json b/services/rekognition/src/main/resources/codegen-resources/examples-1.json index 20b032800571..039e04d60f34 100644 --- a/services/rekognition/src/main/resources/codegen-resources/examples-1.json +++ b/services/rekognition/src/main/resources/codegen-resources/examples-1.json @@ -139,27 +139,27 @@ "Confidence": 100, "Landmarks": [ { - "Type": "EYE_LEFT", + "Type": "eyeLeft", "X": 0.6394737362861633, "Y": 0.40819624066352844 }, { - "Type": "EYE_RIGHT", + "Type": "eyeRight", "X": 0.7266660928726196, "Y": 0.41039225459098816 }, { - "Type": "NOSE_LEFT", + "Type": "eyeRight", "X": 0.6912462115287781, "Y": 0.44240960478782654 }, { - "Type": "MOUTH_DOWN", + "Type": "mouthDown", "X": 0.6306198239326477, "Y": 0.46700039505958557 }, { - "Type": "MOUTH_UP", + "Type": "mouthUp", "X": 0.7215608954429626, "Y": 0.47114261984825134 } @@ -262,27 +262,27 @@ "Confidence": 99.9991226196289, "Landmarks": [ { - "Type": "EYE_LEFT", + "Type": "eyeLeft", "X": 0.3976764678955078, "Y": 0.6248345971107483 }, { - "Type": "EYE_RIGHT", + "Type": "eyeRight", "X": 0.4810936450958252, "Y": 0.6317117214202881 }, { - "Type": "NOSE_LEFT", + "Type": "noseLeft", "X": 0.41986238956451416, "Y": 0.7111940383911133 }, { - "Type": "MOUTH_DOWN", + "Type": "mouthDown", "X": 0.40525302290916443, "Y": 0.7497701048851013 }, { - "Type": "MOUTH_UP", + "Type": "mouthUp", "X": 0.4753248989582062, "Y": 0.7558549642562866 } @@ -320,27 +320,27 @@ "Confidence": 99.99950408935547, "Landmarks": [ { - "Type": "EYE_LEFT", + "Type": "eyeLeft", "X": 0.6006892323493958, "Y": 0.290842205286026 }, { - "Type": "EYE_RIGHT", + "Type": "eyeRight", "X": 0.6808141469955444, "Y": 0.29609042406082153 }, { - "Type": "NOSE_LEFT", + "Type": "noseLeft", "X": 0.6395332217216492, "Y": 0.3522595763206482 }, { - "Type": "MOUTH_DOWN", + "Type": "mouthDown", "X": 0.5892083048820496, "Y": 0.38689887523651123 }, { - "Type": "MOUTH_UP", + "Type": "mouthUp", "X": 0.674560010433197, "Y": 0.394125759601593 } diff --git a/services/rekognition/src/main/resources/codegen-resources/service-2.json b/services/rekognition/src/main/resources/codegen-resources/service-2.json index 3340b41471a0..c85898aa233a 100644 --- a/services/rekognition/src/main/resources/codegen-resources/service-2.json +++ b/services/rekognition/src/main/resources/codegen-resources/service-2.json @@ -888,8 +888,8 @@ "GenderType":{ "type":"string", "enum":[ - "MALE", - "FEMALE" + "Male", + "Female" ] }, "GetCelebrityInfoRequest":{ @@ -1060,11 +1060,11 @@ }, "X":{ "shape":"Float", - "documentation":"

x-coordinate from the top left of the landmark expressed as the ration of the width of the image. For example, if the images is 700x200 and the x-coordinate of the landmark is at 350 pixels, this value is 0.5.

" + "documentation":"

x-coordinate from the top left of the landmark expressed as the ratio of the width of the image. For example, if the image is 700x200 and the x-coordinate of the landmark is at 350 pixels, this value is 0.5.

" }, "Y":{ "shape":"Float", - "documentation":"

y-coordinate from the top left of the landmark expressed as the ration of the height of the image. For example, if the images is 700x200 and the y-coordinate of the landmark is at 100 pixels, this value is 0.5.

" + "documentation":"

y-coordinate from the top left of the landmark expressed as the ratio of the height of the image. For example, if the image is 700x200 and the y-coordinate of the landmark is at 100 pixels, this value is 0.5.

" } }, "documentation":"

Indicates the location of the landmark on the face.

" @@ -1072,31 +1072,31 @@ "LandmarkType":{ "type":"string", "enum":[ - "EYE_LEFT", - "EYE_RIGHT", - "NOSE", - "MOUTH_LEFT", - "MOUTH_RIGHT", - "LEFT_EYEBROW_LEFT", - "LEFT_EYEBROW_RIGHT", - "LEFT_EYEBROW_UP", - "RIGHT_EYEBROW_LEFT", - "RIGHT_EYEBROW_RIGHT", - "RIGHT_EYEBROW_UP", - "LEFT_EYE_LEFT", - "LEFT_EYE_RIGHT", - "LEFT_EYE_UP", - "LEFT_EYE_DOWN", - "RIGHT_EYE_LEFT", - "RIGHT_EYE_RIGHT", - "RIGHT_EYE_UP", - "RIGHT_EYE_DOWN", - "NOSE_LEFT", - "NOSE_RIGHT", - "MOUTH_UP", - "MOUTH_DOWN", - "LEFT_PUPIL", - "RIGHT_PUPIL" + "eyeLeft", + "eyeRight", + "nose", + "mouthLeft", + "mouthRight", + "leftEyeBrowLeft", + "leftEyeBrowRight", + "leftEyeBrowUp", + "rightEyeBrowLeft", + "rightEyeBrowRight", + "rightEyeBrowUp", + "leftEyeLeft", + "leftEyeRight", + "leftEyeUp", + "leftEyeDown", + "rightEyeLeft", + "rightEyeRight", + "rightEyeUp", + "rightEyeDown", + "noseLeft", + "noseRight", + "mouthUp", + "mouthDown", + "leftPupil", + "rightPupil" ] }, "Landmarks":{ diff --git a/services/route53/src/main/resources/codegen-resources/route53/service-2.json b/services/route53/src/main/resources/codegen-resources/route53/service-2.json index e46b712b3e81..c447dce73dc9 100644 --- a/services/route53/src/main/resources/codegen-resources/route53/service-2.json +++ b/services/route53/src/main/resources/codegen-resources/route53/service-2.json @@ -122,6 +122,29 @@ ], "documentation":"

Creates a new public hosted zone, which you use to specify how the Domain Name System (DNS) routes traffic on the Internet for a domain, such as example.com, and its subdomains.

You can't convert a public hosted zone to a private hosted zone or vice versa. Instead, you must create a new hosted zone with the same name and create new resource record sets.

For more information about charges for hosted zones, see Amazon Route 53 Pricing.

Note the following:

When you submit a CreateHostedZone request, the initial status of the hosted zone is PENDING. This means that the NS and SOA records are not yet available on all Amazon Route 53 DNS servers. When the NS and SOA records are available, the status of the zone changes to INSYNC.

" }, + "CreateQueryLoggingConfig":{ + "name":"CreateQueryLoggingConfig", + "http":{ + "method":"POST", + "requestUri":"/2013-04-01/queryloggingconfig", + "responseCode":201 + }, + "input":{ + "shape":"CreateQueryLoggingConfigRequest", + "locationName":"CreateQueryLoggingConfigRequest", + "xmlNamespace":{"uri":"https://route53.amazonaws.com/doc/2013-04-01/"} + }, + "output":{"shape":"CreateQueryLoggingConfigResponse"}, + "errors":[ + {"shape":"ConcurrentModification"}, + {"shape":"NoSuchHostedZone"}, + {"shape":"NoSuchCloudWatchLogsLogGroup"}, + {"shape":"InvalidInput"}, + {"shape":"QueryLoggingConfigAlreadyExists"}, + {"shape":"InsufficientCloudWatchLogsResourcePolicy"} + ], + "documentation":"

Creates a configuration for DNS query logging. After you create a query logging configuration, Amazon Route 53 begins to publish log data to an Amazon CloudWatch Logs log group.

DNS query logs contain information about the queries that Amazon Route 53 receives for a specified public hosted zone, such as the following:

Log Group and Resource Policy

Before you create a query logging configuration, perform the following operations.

If you create a query logging configuration using the Amazon Route 53 console, Amazon Route 53 performs these operations automatically.

  1. Create a CloudWatch Logs log group, and make note of the ARN, which you specify when you create a query logging configuration. Note the following:

    • You must create the log group in the us-east-1 region.

    • You must use the same AWS account to create the log group and the hosted zone that you want to configure query logging for.

    • When you create log groups for query logging, we recommend that you use a consistent prefix, for example:

      /aws/route53/hosted zone name

      In the next step, you'll create a resource policy, which controls access to one or more log groups and the associated AWS resources, such as Amazon Route 53 hosted zones. There's a limit on the number of resource policies that you can create, so we recommend that you use a consistent prefix so you can use the same resource policy for all the log groups that you create for query logging.

  2. Create a CloudWatch Logs resource policy, and give it the permissions that Amazon Route 53 needs to create log streams and to send query logs to log streams. For the value of Resource, specify the ARN for the log group that you created in the previous step. To use the same resource policy for all the CloudWatch Logs log groups that you created for query logging configurations, replace the hosted zone name with *, for example:

    arn:aws:logs:us-east-1:123412341234:log-group:/aws/route53/*

    You can't use the CloudWatch console to create or edit a resource policy. You must use the CloudWatch API, one of the AWS SDKs, or the AWS CLI.

Log Streams and Edge Locations

When Amazon Route 53 finishes creating the configuration for DNS query logging, it does the following:

  • Creates a log stream for an edge location the first time that the edge location responds to DNS queries for the specified hosted zone. That log stream is used to log all queries that Amazon Route 53 responds to for that edge location.

  • Begins to send query logs to the applicable log stream.

The name of each log stream is in the following format:

hosted zone ID/edge location code

The edge location code is a three-letter code and an arbitrarily assigned number, for example, DFW3. The three-letter code typically corresponds with the International Air Transport Association airport code for an airport near the edge location. (These abbreviations might change in the future.) For a list of edge locations, see \"The Amazon Route 53 Global Network\" on the Amazon Route 53 Product Details page.

Queries That Are Logged

Query logs contain only the queries that DNS resolvers forward to Amazon Route 53. If a DNS resolver has already cached the response to a query (such as the IP address for a load balancer for example.com), the resolver will continue to return the cached response. It doesn't forward another query to Amazon Route 53 until the TTL for the corresponding resource record set expires. Depending on how many DNS queries are submitted for a resource record set, and depending on the TTL for that resource record set, query logs might contain information about only one query out of every several thousand queries that are submitted to DNS. For more information about how DNS works, see Routing Internet Traffic to Your Website or Web Application in the Amazon Route 53 Developer Guide.

Log File Format

For a list of the values in each query log and the format of each value, see Logging DNS Queries in the Amazon Route 53 Developer Guide.

Pricing

For information about charges for query logs, see Amazon CloudWatch Pricing.

How to Stop Logging

If you want Amazon Route 53 to stop sending query logs to CloudWatch Logs, delete the query logging configuration. For more information, see DeleteQueryLoggingConfig.
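
A minimal sketch of calling this operation with the generated AWS SDK for Java 2.x client. The class and builder names follow the usual codegen pattern for this model, and the hosted zone ID and log group ARN are illustrative placeholders, so treat this as a sketch rather than a definitive example.

  import software.amazon.awssdk.regions.Region;
  import software.amazon.awssdk.services.route53.Route53Client;
  import software.amazon.awssdk.services.route53.model.CreateQueryLoggingConfigRequest;
  import software.amazon.awssdk.services.route53.model.CreateQueryLoggingConfigResponse;

  public class CreateQueryLoggingConfigSketch {
      public static void main(String[] args) {
          // Route 53 is a global service, so the client is built against the AWS_GLOBAL pseudo-region.
          Route53Client route53 = Route53Client.builder().region(Region.AWS_GLOBAL).build();

          CreateQueryLoggingConfigResponse created = route53.createQueryLoggingConfig(
              CreateQueryLoggingConfigRequest.builder()
                  .hostedZoneId("Z1D633PJN98FT9")  // placeholder ID of a public hosted zone
                  .cloudWatchLogsLogGroupArn(
                      "arn:aws:logs:us-east-1:123412341234:log-group:/aws/route53/example.com")
                  .build());
          System.out.println("Query logging config ID: " + created.queryLoggingConfig().id());
      }
  }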

" + }, "CreateReusableDelegationSet":{ "name":"CreateReusableDelegationSet", "http":{ @@ -263,6 +286,21 @@ ], "documentation":"

Deletes a hosted zone.

If the name servers for the hosted zone are associated with a domain and if you want to make the domain unavailable on the Internet, we recommend that you delete the name servers from the domain to prevent future DNS queries from possibly being misrouted. If the domain is registered with Amazon Route 53, see UpdateDomainNameservers. If the domain is registered with another registrar, use the method provided by the registrar to delete name servers for the domain.

Some domain registries don't allow you to remove all of the name servers for a domain. If the registry for your domain requires one or more name servers, we recommend that you delete the hosted zone only if you transfer DNS service to another service provider, and you replace the name servers for the domain with name servers from the new provider.

You can delete a hosted zone only if it contains only the default SOA record and NS resource record sets. If the hosted zone contains other resource record sets, you must delete them before you can delete the hosted zone. If you try to delete a hosted zone that contains other resource record sets, the request fails, and Amazon Route 53 returns a HostedZoneNotEmpty error. For information about deleting records from your hosted zone, see ChangeResourceRecordSets.

To verify that the hosted zone has been deleted, do one of the following:

" }, + "DeleteQueryLoggingConfig":{ + "name":"DeleteQueryLoggingConfig", + "http":{ + "method":"DELETE", + "requestUri":"/2013-04-01/queryloggingconfig/{Id}" + }, + "input":{"shape":"DeleteQueryLoggingConfigRequest"}, + "output":{"shape":"DeleteQueryLoggingConfigResponse"}, + "errors":[ + {"shape":"ConcurrentModification"}, + {"shape":"NoSuchQueryLoggingConfig"}, + {"shape":"InvalidInput"} + ], + "documentation":"

Deletes a configuration for DNS query logging. If you delete a configuration, Amazon Route 53 stops sending query logs to CloudWatch Logs. Amazon Route 53 doesn't delete any logs that are already in CloudWatch Logs.

For more information about DNS query logs, see CreateQueryLoggingConfig.

" + }, "DeleteReusableDelegationSet":{ "name":"DeleteReusableDelegationSet", "http":{ @@ -352,6 +390,19 @@ ], "documentation":"

Disassociates a VPC from an Amazon Route 53 private hosted zone.

You can't disassociate the last VPC from a private hosted zone.

You can't disassociate a VPC from a private hosted zone when only one VPC is associated with the hosted zone. You also can't convert a private hosted zone into a public hosted zone.

" }, + "GetAccountLimit":{ + "name":"GetAccountLimit", + "http":{ + "method":"GET", + "requestUri":"/2013-04-01/accountlimit/{Type}" + }, + "input":{"shape":"GetAccountLimitRequest"}, + "output":{"shape":"GetAccountLimitResponse"}, + "errors":[ + {"shape":"InvalidInput"} + ], + "documentation":"

Gets the specified limit for the current account, for example, the maximum number of health checks that you can create using the account.

For the default limit, see Limits in the Amazon Route 53 Developer Guide. To request a higher limit, open a case.
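
As a hedged illustration of reading a limit and its current usage, reusing the route53 client from the CreateQueryLoggingConfig sketch earlier; the enum constant follows the MAX_HOSTED_ZONES_BY_OWNER value defined later in this model, and accessor names assume the standard codegen naming.

  GetAccountLimitResponse hostedZoneLimit = route53.getAccountLimit(
      GetAccountLimitRequest.builder()
          .type(AccountLimitType.MAX_HOSTED_ZONES_BY_OWNER)
          .build());
  // Limit.Value is the ceiling; Count is how many hosted zones the account has already created.
  System.out.println(hostedZoneLimit.count() + " of " + hostedZoneLimit.limit().value() + " hosted zones in use");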

" + }, "GetChange":{ "name":"GetChange", "http":{ @@ -470,6 +521,35 @@ ], "documentation":"

Retrieves the number of hosted zones that are associated with the current AWS account.

" }, + "GetHostedZoneLimit":{ + "name":"GetHostedZoneLimit", + "http":{ + "method":"GET", + "requestUri":"/2013-04-01/hostedzonelimit/{Id}/{Type}" + }, + "input":{"shape":"GetHostedZoneLimitRequest"}, + "output":{"shape":"GetHostedZoneLimitResponse"}, + "errors":[ + {"shape":"NoSuchHostedZone"}, + {"shape":"InvalidInput"}, + {"shape":"HostedZoneNotPrivate"} + ], + "documentation":"

Gets the specified limit for a specified hosted zone, for example, the maximum number of records that you can create in the hosted zone.

For the default limit, see Limits in the Amazon Route 53 Developer Guide. To request a higher limit, open a case.

" + }, + "GetQueryLoggingConfig":{ + "name":"GetQueryLoggingConfig", + "http":{ + "method":"GET", + "requestUri":"/2013-04-01/queryloggingconfig/{Id}" + }, + "input":{"shape":"GetQueryLoggingConfigRequest"}, + "output":{"shape":"GetQueryLoggingConfigResponse"}, + "errors":[ + {"shape":"NoSuchQueryLoggingConfig"}, + {"shape":"InvalidInput"} + ], + "documentation":"

Gets information about a specified configuration for DNS query logging.

For more information about DNS query logs, see CreateQueryLoggingConfig and Logging DNS Queries.

" + }, "GetReusableDelegationSet":{ "name":"GetReusableDelegationSet", "http":{ @@ -485,6 +565,20 @@ ], "documentation":"

Retrieves information about a specified reusable delegation set, including the four name servers that are assigned to the delegation set.

" }, + "GetReusableDelegationSetLimit":{ + "name":"GetReusableDelegationSetLimit", + "http":{ + "method":"GET", + "requestUri":"/2013-04-01/reusabledelegationsetlimit/{Id}/{Type}" + }, + "input":{"shape":"GetReusableDelegationSetLimitRequest"}, + "output":{"shape":"GetReusableDelegationSetLimitResponse"}, + "errors":[ + {"shape":"InvalidInput"}, + {"shape":"NoSuchDelegationSet"} + ], + "documentation":"

Gets the maximum number of hosted zones that you can associate with the specified reusable delegation set.

For the default limit, see Limits in the Amazon Route 53 Developer Guide. To request a higher limit, open a case.

" + }, "GetTrafficPolicy":{ "name":"GetTrafficPolicy", "http":{ @@ -579,6 +673,21 @@ ], "documentation":"

Retrieves a list of your hosted zones in lexicographic order. The response includes a HostedZones child element for each hosted zone created by the current AWS account.

ListHostedZonesByName sorts hosted zones by name with the labels reversed. For example:

com.example.www.

Note the trailing dot, which can change the sort order in some circumstances.

If the domain name includes escape characters or Punycode, ListHostedZonesByName alphabetizes the domain name using the escaped or Punycoded value, which is the format that Amazon Route 53 saves in its database. For example, to create a hosted zone for exämple.com, you specify ex\\344mple.com for the domain name. ListHostedZonesByName alphabetizes it as:

com.ex\\344mple.

The labels are reversed and alphabetized using the escaped value. For more information about valid domain name formats, including internationalized domain names, see DNS Domain Name Format in the Amazon Route 53 Developer Guide.

Amazon Route 53 returns up to 100 items in each response. If you have a lot of hosted zones, use the MaxItems parameter to list them in groups of up to 100. The response includes values that help navigate from one group of MaxItems hosted zones to the next:
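
A short snippet showing the request side, under the same assumptions and with the same route53 client as the earlier sketch; note that both DNSName and MaxItems are modeled as strings.

  ListHostedZonesByNameResponse firstPage = route53.listHostedZonesByName(
      ListHostedZonesByNameRequest.builder()
          .dnsName("example.com")   // where the listing starts; Route 53 sorts with the labels reversed
          .maxItems("100")          // MaxItems is modeled as a string
          .build());
  firstPage.hostedZones().forEach(zone -> System.out.println(zone.name() + " " + zone.id()));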

" }, + "ListQueryLoggingConfigs":{ + "name":"ListQueryLoggingConfigs", + "http":{ + "method":"GET", + "requestUri":"/2013-04-01/queryloggingconfig" + }, + "input":{"shape":"ListQueryLoggingConfigsRequest"}, + "output":{"shape":"ListQueryLoggingConfigsResponse"}, + "errors":[ + {"shape":"InvalidInput"}, + {"shape":"InvalidPaginationToken"}, + {"shape":"NoSuchHostedZone"} + ], + "documentation":"

Lists the configurations for DNS query logging that are associated with the current AWS account or the configuration that is associated with a specified hosted zone.

For more information about DNS query logs, see CreateQueryLoggingConfig. Additional information, including the format of DNS query logs, appears in Logging DNS Queries in the Amazon Route 53 Developer Guide.

" + }, "ListResourceRecordSets":{ "name":"ListResourceRecordSets", "http":{ @@ -823,6 +932,34 @@ } }, "shapes":{ + "AccountLimit":{ + "type":"structure", + "required":[ + "Type", + "Value" + ], + "members":{ + "Type":{ + "shape":"AccountLimitType", + "documentation":"

The limit that you requested. Valid values include the following:

" + }, + "Value":{ + "shape":"LimitValue", + "documentation":"

The current value for the limit that is specified by AccountLimit$Type.

" + } + }, + "documentation":"

A complex type that contains the type of limit that you specified in the request and the current value for that limit.

" + }, + "AccountLimitType":{ + "type":"string", + "enum":[ + "MAX_HEALTH_CHECKS_BY_OWNER", + "MAX_HOSTED_ZONES_BY_OWNER", + "MAX_TRAFFIC_POLICY_INSTANCES_BY_OWNER", + "MAX_REUSABLE_DELEGATION_SETS_BY_OWNER", + "MAX_TRAFFIC_POLICIES_BY_OWNER" + ] + }, "AlarmIdentifier":{ "type":"structure", "required":[ @@ -857,11 +994,11 @@ "members":{ "HostedZoneId":{ "shape":"ResourceId", - "documentation":"

Alias resource records sets only: The value used depends on where you want to route traffic:

CloudFront distribution

Specify Z2FDTNDATAQYW2.

Alias resource record sets for CloudFront can't be created in a private zone.

Elastic Beanstalk environment

Specify the hosted zone ID for the region in which you created the environment. The environment must have a regionalized subdomain. For a list of regions and the corresponding hosted zone IDs, see AWS Elastic Beanstalk in the \"AWS Regions and Endpoints\" chapter of the Amazon Web Services General Reference.

ELB load balancer

Specify the value of the hosted zone ID for the load balancer. Use the following methods to get the hosted zone ID:

  • Elastic Load Balancing table in the \"AWS Regions and Endpoints\" chapter of the Amazon Web Services General Reference: Use the value in the \"Amazon Route 53 Hosted Zone ID\" column that corresponds with the region that you created your load balancer in.

  • AWS Management Console: Go to the Amazon EC2 page, click Load Balancers in the navigation pane, select the load balancer, and get the value of the Hosted zone field on the Description tab.

  • Elastic Load Balancing API: Use DescribeLoadBalancers to get the value of CanonicalHostedZoneNameId. For more information, see the applicable guide:

  • AWS CLI: Use describe-load-balancers to get the value of CanonicalHostedZoneNameID.

An Amazon S3 bucket configured as a static website

Specify the hosted zone ID for the region that you created the bucket in. For more information about valid values, see the Amazon Simple Storage Service Website Endpoints table in the \"AWS Regions and Endpoints\" chapter of the Amazon Web Services General Reference.

Another Amazon Route 53 resource record set in your hosted zone

Specify the hosted zone ID of your hosted zone. (An alias resource record set can't reference a resource record set in a different hosted zone.)

" + "documentation":"

Alias resource records sets only: The value used depends on where you want to route traffic:

CloudFront distribution

Specify Z2FDTNDATAQYW2.

Alias resource record sets for CloudFront can't be created in a private zone.

Elastic Beanstalk environment

Specify the hosted zone ID for the region in which you created the environment. The environment must have a regionalized subdomain. For a list of regions and the corresponding hosted zone IDs, see AWS Elastic Beanstalk in the \"AWS Regions and Endpoints\" chapter of the Amazon Web Services General Reference.

ELB load balancer

Specify the value of the hosted zone ID for the load balancer. Use the following methods to get the hosted zone ID:

  • Elastic Load Balancing table in the \"AWS Regions and Endpoints\" chapter of the Amazon Web Services General Reference: Use the value that corresponds with the region that you created your load balancer in. Note that there are separate columns for Application and Classic Load Balancers and for Network Load Balancers.

  • AWS Management Console: Go to the Amazon EC2 page, choose Load Balancers in the navigation pane, select the load balancer, and get the value of the Hosted zone field on the Description tab.

  • Elastic Load Balancing API: Use DescribeLoadBalancers to get the applicable value. For more information, see the applicable guide:

  • AWS CLI: Use describe-load-balancers to get the applicable value. For more information, see the applicable guide:

An Amazon S3 bucket configured as a static website

Specify the hosted zone ID for the region that you created the bucket in. For more information about valid values, see the Amazon Simple Storage Service Website Endpoints table in the \"AWS Regions and Endpoints\" chapter of the Amazon Web Services General Reference.

Another Amazon Route 53 resource record set in your hosted zone

Specify the hosted zone ID of your hosted zone. (An alias resource record set can't reference a resource record set in a different hosted zone.)

" }, "DNSName":{ "shape":"DNSName", - "documentation":"

Alias resource record sets only: The value that you specify depends on where you want to route queries:

CloudFront distribution

Specify the domain name that CloudFront assigned when you created your distribution.

Your CloudFront distribution must include an alternate domain name that matches the name of the resource record set. For example, if the name of the resource record set is acme.example.com, your CloudFront distribution must include acme.example.com as one of the alternate domain names. For more information, see Using Alternate Domain Names (CNAMEs) in the Amazon CloudFront Developer Guide.

Elastic Beanstalk environment

Specify the CNAME attribute for the environment. (The environment must have a regionalized domain name.) You can use the following methods to get the value of the CNAME attribute:

  • AWS Management Console: For information about how to get the value by using the console, see Using Custom Domains with AWS Elastic Beanstalk in the AWS Elastic Beanstalk Developer Guide.

  • Elastic Beanstalk API: Use the DescribeEnvironments action to get the value of the CNAME attribute. For more information, see DescribeEnvironments in the AWS Elastic Beanstalk API Reference.

  • AWS CLI: Use the describe-environments command to get the value of the CNAME attribute. For more information, see describe-environments in the AWS Command Line Interface Reference.

ELB load balancer

Specify the DNS name that is associated with the load balancer. Get the DNS name by using the AWS Management Console, the ELB API, or the AWS CLI.

  • AWS Management Console: Go to the EC2 page, choose Load Balancers in the navigation pane, choose the load balancer, choose the Description tab, and get the value of the DNS name field. (If you're routing traffic to a Classic Load Balancer, get the value that begins with dualstack.)

  • Elastic Load Balancing API: Use DescribeLoadBalancers to get the value of DNSName. For more information, see the applicable guide:

  • AWS CLI: Use describe-load-balancers to get the value of DNSName.

Amazon S3 bucket that is configured as a static website

Specify the domain name of the Amazon S3 website endpoint in which you created the bucket, for example, s3-website-us-east-2.amazonaws.com. For more information about valid values, see the table Amazon Simple Storage Service (S3) Website Endpoints in the Amazon Web Services General Reference. For more information about using S3 buckets for websites, see Getting Started with Amazon Route 53 in the Amazon Route 53 Developer Guide.

Another Amazon Route 53 resource record set

Specify the value of the Name element for a resource record set in the current hosted zone.

" + "documentation":"

Alias resource record sets only: The value that you specify depends on where you want to route queries:

CloudFront distribution

Specify the domain name that CloudFront assigned when you created your distribution.

Your CloudFront distribution must include an alternate domain name that matches the name of the resource record set. For example, if the name of the resource record set is acme.example.com, your CloudFront distribution must include acme.example.com as one of the alternate domain names. For more information, see Using Alternate Domain Names (CNAMEs) in the Amazon CloudFront Developer Guide.

Elastic Beanstalk environment

Specify the CNAME attribute for the environment. (The environment must have a regionalized domain name.) You can use the following methods to get the value of the CNAME attribute:

  • AWS Management Console: For information about how to get the value by using the console, see Using Custom Domains with AWS Elastic Beanstalk in the AWS Elastic Beanstalk Developer Guide.

  • Elastic Beanstalk API: Use the DescribeEnvironments action to get the value of the CNAME attribute. For more information, see DescribeEnvironments in the AWS Elastic Beanstalk API Reference.

  • AWS CLI: Use the describe-environments command to get the value of the CNAME attribute. For more information, see describe-environments in the AWS Command Line Interface Reference.

ELB load balancer

Specify the DNS name that is associated with the load balancer. Get the DNS name by using the AWS Management Console, the ELB API, or the AWS CLI.

  • AWS Management Console: Go to the EC2 page, choose Load Balancers in the navigation pane, choose the load balancer, choose the Description tab, and get the value of the DNS name field. (If you're routing traffic to a Classic Load Balancer, get the value that begins with dualstack.)

  • Elastic Load Balancing API: Use DescribeLoadBalancers to get the value of DNSName. For more information, see the applicable guide:

  • AWS CLI: Use describe-load-balancers to get the value of DNSName. For more information, see the applicable guide:

Amazon S3 bucket that is configured as a static website

Specify the domain name of the Amazon S3 website endpoint in which you created the bucket, for example, s3-website-us-east-2.amazonaws.com. For more information about valid values, see the table Amazon Simple Storage Service (S3) Website Endpoints in the Amazon Web Services General Reference. For more information about using S3 buckets for websites, see Getting Started with Amazon Route 53 in the Amazon Route 53 Developer Guide.

Another Amazon Route 53 resource record set

Specify the value of the Name element for a resource record set in the current hosted zone.
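
To make the HostedZoneId and DNSName alias fields above concrete, the following is a hedged sketch of an UPSERT for an alias record that routes to a Classic Load Balancer, reusing the route53 client from the earlier sketch. The hosted zone IDs and the DNS name are placeholders, and the shape and builder names follow this model.

  ResourceRecordSet aliasRecord = ResourceRecordSet.builder()
      .name("www.example.com")
      .type(RRType.A)
      .aliasTarget(AliasTarget.builder()
          .hostedZoneId("Z35SXDOTRQ7X7K")  // placeholder: the hosted zone ID that Elastic Load Balancing assigns in the load balancer's region
          .dnsName("dualstack.my-load-balancer-1234567890.us-east-1.elb.amazonaws.com")  // placeholder DNS name
          .evaluateTargetHealth(false)
          .build())
      .build();

  route53.changeResourceRecordSets(ChangeResourceRecordSetsRequest.builder()
      .hostedZoneId("Z1D633PJN98FT9")      // placeholder: the hosted zone that contains the record
      .changeBatch(ChangeBatch.builder()
          .changes(Change.builder()
              .action(ChangeAction.UPSERT)
              .resourceRecordSet(aliasRecord)
              .build())
          .build())
      .build());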

" }, "EvaluateTargetHealth":{ "shape":"AliasHealthEnabled", @@ -915,7 +1052,7 @@ "members":{ "Action":{ "shape":"ChangeAction", - "documentation":"

The action to perform:

The values that you need to include in the request depend on the type of resource record set that you're creating, deleting, or updating:

Basic resource record sets (excluding alias, failover, geolocation, latency, and weighted resource record sets)

Failover, geolocation, latency, or weighted resource record sets (excluding alias resource record sets)

Alias resource record sets (including failover alias, geolocation alias, latency alias, and weighted alias resource record sets)

" + "documentation":"

The action to perform:

" }, "ResourceRecordSet":{ "shape":"ResourceRecordSet", @@ -1115,6 +1252,7 @@ }, "documentation":"

A complex type that contains information about the CloudWatch alarm that Amazon Route 53 is monitoring for this health check.

" }, + "CloudWatchLogsLogGroupArn":{"type":"string"}, "CloudWatchRegion":{ "type":"string", "enum":[ @@ -1153,7 +1291,7 @@ "documentation":"

Descriptive message for the error response.

" } }, - "documentation":"

Another user submitted a request to update the object at the same time that you did. Retry the request.

", + "documentation":"

Another user submitted a request to create, update, or delete the object at the same time that you did. Retry the request.

", "error":{"httpStatusCode":400}, "exception":true }, @@ -1279,6 +1417,42 @@ }, "documentation":"

A complex type containing the response information for the hosted zone.

" }, + "CreateQueryLoggingConfigRequest":{ + "type":"structure", + "required":[ + "HostedZoneId", + "CloudWatchLogsLogGroupArn" + ], + "members":{ + "HostedZoneId":{ + "shape":"ResourceId", + "documentation":"

The ID of the hosted zone that you want to log queries for. You can log queries only for public hosted zones.

" + }, + "CloudWatchLogsLogGroupArn":{ + "shape":"CloudWatchLogsLogGroupArn", + "documentation":"

The Amazon Resource Name (ARN) for the log group that you want Amazon Route 53 to send query logs to. This is the format of the ARN:

arn:aws:logs:region:account-id:log-group:log_group_name

To get the ARN for a log group, you can use the CloudWatch console, the DescribeLogGroups API action, the describe-log-groups command, or the applicable command in one of the AWS SDKs.

" + } + } + }, + "CreateQueryLoggingConfigResponse":{ + "type":"structure", + "required":[ + "QueryLoggingConfig", + "Location" + ], + "members":{ + "QueryLoggingConfig":{ + "shape":"QueryLoggingConfig", + "documentation":"

A complex type that contains the ID for a query logging configuration, the ID of the hosted zone that you want to log queries for, and the ARN for the log group that you want Amazon Route 53 to send query logs to.

" + }, + "Location":{ + "shape":"ResourceURI", + "documentation":"

The unique URL representing the new query logging configuration.

", + "location":"header", + "locationName":"Location" + } + } + }, "CreateReusableDelegationSetRequest":{ "type":"structure", "required":["CallerReference"], @@ -1626,6 +1800,23 @@ }, "documentation":"

A complex type that contains the response to a DeleteHostedZone request.

" }, + "DeleteQueryLoggingConfigRequest":{ + "type":"structure", + "required":["Id"], + "members":{ + "Id":{ + "shape":"QueryLoggingConfigId", + "documentation":"

The ID of the configuration that you want to delete.

", + "location":"uri", + "locationName":"Id" + } + } + }, + "DeleteQueryLoggingConfigResponse":{ + "type":"structure", + "members":{ + } + }, "DeleteReusableDelegationSetRequest":{ "type":"structure", "required":["Id"], @@ -1892,6 +2083,37 @@ "max":64, "min":1 }, + "GetAccountLimitRequest":{ + "type":"structure", + "required":["Type"], + "members":{ + "Type":{ + "shape":"AccountLimitType", + "documentation":"

The limit that you want to get. Valid values include the following:

", + "location":"uri", + "locationName":"Type" + } + }, + "documentation":"

A complex type that contains information about the request to get the specified limit for the current account.

" + }, + "GetAccountLimitResponse":{ + "type":"structure", + "required":[ + "Limit", + "Count" + ], + "members":{ + "Limit":{ + "shape":"AccountLimit", + "documentation":"

The current setting for the specified limit. For example, if you specified MAX_HEALTH_CHECKS_BY_OWNER for the value of Type in the request, the value of Limit is the maximum number of health checks that you can create using the current account.

" + }, + "Count":{ + "shape":"UsageCount", + "documentation":"

The current number of entities that you have created of the specified type. For example, if you specified MAX_HEALTH_CHECKS_BY_OWNER for the value of Type in the request, the value of Count is the current number of health checks that you have created using the current account.

" + } + }, + "documentation":"

A complex type that contains the requested limit.

" + }, "GetChangeRequest":{ "type":"structure", "required":["Id"], @@ -2069,6 +2291,46 @@ }, "documentation":"

A complex type that contains the response to a GetHostedZoneCount request.

" }, + "GetHostedZoneLimitRequest":{ + "type":"structure", + "required":[ + "Type", + "HostedZoneId" + ], + "members":{ + "Type":{ + "shape":"HostedZoneLimitType", + "documentation":"

The limit that you want to get. Valid values include the following:

", + "location":"uri", + "locationName":"Type" + }, + "HostedZoneId":{ + "shape":"ResourceId", + "documentation":"

The ID of the hosted zone that you want to get a limit for.

", + "location":"uri", + "locationName":"Id" + } + }, + "documentation":"

A complex type that contains information about the request to get a limit for the specified hosted zone.

" + }, + "GetHostedZoneLimitResponse":{ + "type":"structure", + "required":[ + "Limit", + "Count" + ], + "members":{ + "Limit":{ + "shape":"HostedZoneLimit", + "documentation":"

The current setting for the specified limit. For example, if you specified MAX_RRSETS_BY_ZONE for the value of Type in the request, the value of Limit is the maximum number of records that you can create in the specified hosted zone.

" + }, + "Count":{ + "shape":"UsageCount", + "documentation":"

The current number of entities that you have created of the specified type. For example, if you specified MAX_RRSETS_BY_ZONE for the value of Type in the request, the value of Count is the current number of records that you have created in the specified hosted zone.

" + } + }, + "documentation":"

A complex type that contains the requested limit.

" + }, "GetHostedZoneRequest":{ "type":"structure", "required":["Id"], @@ -2101,6 +2363,68 @@ }, "documentation":"

A complex type that contains the response to a GetHostedZone request.

" }, + "GetQueryLoggingConfigRequest":{ + "type":"structure", + "required":["Id"], + "members":{ + "Id":{ + "shape":"QueryLoggingConfigId", + "documentation":"

The ID of the configuration for DNS query logging that you want to get information about.

", + "location":"uri", + "locationName":"Id" + } + } + }, + "GetQueryLoggingConfigResponse":{ + "type":"structure", + "required":["QueryLoggingConfig"], + "members":{ + "QueryLoggingConfig":{ + "shape":"QueryLoggingConfig", + "documentation":"

A complex type that contains information about the query logging configuration that you specified in a GetQueryLoggingConfig request.

" + } + } + }, + "GetReusableDelegationSetLimitRequest":{ + "type":"structure", + "required":[ + "Type", + "DelegationSetId" + ], + "members":{ + "Type":{ + "shape":"ReusableDelegationSetLimitType", + "documentation":"

Specify MAX_ZONES_BY_REUSABLE_DELEGATION_SET to get the maximum number of hosted zones that you can associate with the specified reusable delegation set.

", + "location":"uri", + "locationName":"Type" + }, + "DelegationSetId":{ + "shape":"ResourceId", + "documentation":"

The ID of the delegation set that you want to get the limit for.

", + "location":"uri", + "locationName":"Id" + } + }, + "documentation":"

A complex type that contains information about the request to get a limit for the specified reusable delegation set.

" + }, + "GetReusableDelegationSetLimitResponse":{ + "type":"structure", + "required":[ + "Limit", + "Count" + ], + "members":{ + "Limit":{ + "shape":"ReusableDelegationSetLimit", + "documentation":"

The current setting for the limit on hosted zones that you can associate with the specified reusable delegation set.

" + }, + "Count":{ + "shape":"UsageCount", + "documentation":"

The current number of hosted zones that are associated with the specified reusable delegation set.

" + } + }, + "documentation":"

A complex type that contains the requested limit.

" + }, "GetReusableDelegationSetRequest":{ "type":"structure", "required":["Id"], @@ -2216,6 +2540,10 @@ "shape":"HealthCheckNonce", "documentation":"

A unique string that you specified when you created the health check.

" }, + "LinkedService":{ + "shape":"LinkedService", + "documentation":"

If the health check was created by another service, the service that created the health check. When a health check is created by another service, you can't edit or delete it using Amazon Route 53.

" + }, "HealthCheckConfig":{ "shape":"HealthCheckConfig", "documentation":"

A complex type that contains detailed information about one health check.

" @@ -2328,6 +2656,7 @@ } }, "documentation":"

This error code is not in use.

", + "deprecated":true, "error":{"httpStatusCode":400}, "exception":true }, @@ -2449,6 +2778,10 @@ "ResourceRecordSetCount":{ "shape":"HostedZoneRRSetCount", "documentation":"

The number of resource record sets in the hosted zone.

" + }, + "LinkedService":{ + "shape":"LinkedService", + "documentation":"

If the hosted zone was created by another service, the service that created the hosted zone. When a hosted zone is created by another service, you can't edit or delete it using Amazon Route 53.

" } }, "documentation":"

A complex type that contains general information about the hosted zone.

" @@ -2480,6 +2813,31 @@ "documentation":"

A complex type that contains an optional comment about your hosted zone. If you don't want to specify a comment, omit both the HostedZoneConfig and Comment elements.

" }, "HostedZoneCount":{"type":"long"}, + "HostedZoneLimit":{ + "type":"structure", + "required":[ + "Type", + "Value" + ], + "members":{ + "Type":{ + "shape":"HostedZoneLimitType", + "documentation":"

The limit that you requested. Valid values include the following:

" + }, + "Value":{ + "shape":"LimitValue", + "documentation":"

The current value for the limit that is specified by Type.

" + } + }, + "documentation":"

A complex type that contains the type of limit that you specified in the request and the current value for that limit.

" + }, + "HostedZoneLimitType":{ + "type":"string", + "enum":[ + "MAX_RRSETS_BY_ZONE", + "MAX_VPCS_ASSOCIATED_BY_ZONE" + ] + }, "HostedZoneNotEmpty":{ "type":"structure", "members":{ @@ -2503,6 +2861,17 @@ "documentation":"

The specified HostedZone can't be found.

", "exception":true }, + "HostedZoneNotPrivate":{ + "type":"structure", + "members":{ + "message":{ + "shape":"ErrorMessage", + "documentation":"

Descriptive message for the error response.

" + } + }, + "documentation":"

The specified hosted zone is a public hosted zone, not a private hosted zone.

", + "exception":true + }, "HostedZoneRRSetCount":{"type":"long"}, "HostedZones":{ "type":"list", @@ -2526,6 +2895,15 @@ "error":{"httpStatusCode":400}, "exception":true }, + "InsufficientCloudWatchLogsResourcePolicy":{ + "type":"structure", + "members":{ + "message":{"shape":"ErrorMessage"} + }, + "documentation":"

Amazon Route 53 doesn't have the permissions required to create log streams and send query logs to log streams. Possible causes include the following:

", + "error":{"httpStatusCode":400}, + "exception":true + }, "InsufficientDataHealthStatus":{ "type":"string", "enum":[ @@ -2551,7 +2929,8 @@ "messages":{ "shape":"ErrorMessages", "documentation":"

Descriptive message for the error response.

" - } + }, + "message":{"shape":"ErrorMessage"} }, "documentation":"

This exception contains a list of messages that might contain one or more error messages. Each error message indicates one error in the change batch.

", "exception":true @@ -2585,6 +2964,7 @@ "members":{ "message":{"shape":"ErrorMessage"} }, + "documentation":"

The value that you specified to get the second or subsequent page of results is invalid.

", "error":{"httpStatusCode":400}, "exception":true }, @@ -2626,6 +3006,10 @@ "error":{"httpStatusCode":400}, "exception":true }, + "LimitValue":{ + "type":"long", + "min":1 + }, "LimitsExceeded":{ "type":"structure", "members":{ @@ -2634,9 +3018,23 @@ "documentation":"

Descriptive message for the error response.

" } }, - "documentation":"

The limits specified for a resource have been exceeded.

", + "documentation":"

This operation can't be completed either because the current account has reached the limit on reusable delegation sets that it can create or because you've reached the limit on the number of Amazon VPCs that you can associate with a private hosted zone. To get the current limit on the number of reusable delegation sets, see GetAccountLimit. To get the current limit on the number of Amazon VPCs that you can associate with a private hosted zone, see GetHostedZoneLimit. To request a higher limit, create a case with the AWS Support Center.

", "exception":true }, + "LinkedService":{ + "type":"structure", + "members":{ + "ServicePrincipal":{ + "shape":"ServicePrincipal", + "documentation":"

If the health check or hosted zone was created by another service, the service that created the resource. When a resource is created by another service, you can't edit or delete it using Amazon Route 53.

" + }, + "Description":{ + "shape":"ResourceDescription", + "documentation":"

If the health check or hosted zone was created by another service, an optional description that can be provided by the other service. When a resource is created by another service, you can't edit or delete it using Amazon Route 53.

" + } + }, + "documentation":"

If a health check or hosted zone was created by another service, LinkedService is a complex type that describes the service that created the resource. When a resource is created by another service, you can't edit or delete it using Amazon Route 53.

" + }, "ListGeoLocationsRequest":{ "type":"structure", "members":{ @@ -2870,6 +3268,43 @@ } } }, + "ListQueryLoggingConfigsRequest":{ + "type":"structure", + "members":{ + "HostedZoneId":{ + "shape":"ResourceId", + "documentation":"

(Optional) If you want to list the query logging configuration that is associated with a hosted zone, specify the ID in HostedZoneId.

If you don't specify a hosted zone ID, ListQueryLoggingConfigs returns all of the configurations that are associated with the current AWS account.

", + "location":"querystring", + "locationName":"hostedzoneid" + }, + "NextToken":{ + "shape":"PaginationToken", + "documentation":"

(Optional) If the current AWS account has more than MaxResults query logging configurations, use NextToken to get the second and subsequent pages of results.

For the first ListQueryLoggingConfigs request, omit this value.

For the second and subsequent requests, get the value of NextToken from the previous response and specify that value for NextToken in the request.

", + "location":"querystring", + "locationName":"nexttoken" + }, + "MaxResults":{ + "shape":"MaxResults", + "documentation":"

(Optional) The maximum number of query logging configurations that you want Amazon Route 53 to return in response to the current request. If the current AWS account has more than MaxResults configurations, use the value of ListQueryLoggingConfigsResponse$NextToken in the response to get the next page of results.

If you don't specify a value for MaxResults, Amazon Route 53 returns up to 100 configurations.

", + "location":"querystring", + "locationName":"maxresults" + } + } + }, + "ListQueryLoggingConfigsResponse":{ + "type":"structure", + "required":["QueryLoggingConfigs"], + "members":{ + "QueryLoggingConfigs":{ + "shape":"QueryLoggingConfigs", + "documentation":"

An array that contains one QueryLoggingConfig element for each configuration for DNS query logging that is associated with the current AWS account.

" + }, + "NextToken":{ + "shape":"PaginationToken", + "documentation":"

If a response includes the last of the query logging configurations that are associated with the current AWS account, NextToken doesn't appear in the response.

If a response doesn't include the last of the configurations, you can get more configurations by submitting another ListQueryLoggingConfigs request. Get the value of NextToken that Amazon Route 53 returned in the previous response and include it in NextToken in the next request.
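
A hedged sketch of the NextToken loop described above, under the same client assumptions as the earlier sketch; MaxResults is modeled as a string, and the placeholder page size is illustrative.

  String nextToken = null;
  do {
      ListQueryLoggingConfigsResponse page = route53.listQueryLoggingConfigs(
          ListQueryLoggingConfigsRequest.builder()
              .maxResults("20")
              .nextToken(nextToken)   // null on the first request
              .build());
      page.queryLoggingConfigs().forEach(cfg ->
          System.out.println(cfg.hostedZoneId() + " -> " + cfg.cloudWatchLogsLogGroupArn()));
      nextToken = page.nextToken();
  } while (nextToken != null);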

" + } + } + }, "ListResourceRecordSetsRequest":{ "type":"structure", "required":["HostedZoneId"], @@ -2888,7 +3323,7 @@ }, "StartRecordType":{ "shape":"RRType", - "documentation":"

The type of resource record set to begin the record listing from.

Valid values for basic resource record sets: A | AAAA | CNAME | MX | NAPTR | NS | PTR | SOA | SPF | SRV | TXT

Values for weighted, latency, geo, and failover resource record sets: A | AAAA | CNAME | MX | NAPTR | PTR | SPF | SRV | TXT

Values for alias resource record sets:

Constraint: Specifying type without specifying name returns an InvalidInput error.

", + "documentation":"

The type of resource record set to begin the record listing from.

Valid values for basic resource record sets: A | AAAA | CAA | CNAME | MX | NAPTR | NS | PTR | SOA | SPF | SRV | TXT

Values for weighted, latency, geo, and failover resource record sets: A | AAAA | CAA | CNAME | MX | NAPTR | PTR | SPF | SRV | TXT

Values for alias resource record sets:

Constraint: Specifying type without specifying name returns an InvalidInput error.
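
For illustration, a snippet that starts the listing at a specific name and type, using the same assumed client and placeholder IDs as the earlier sketches; per the constraint above, a name is specified whenever a type is.

  ListResourceRecordSetsResponse records = route53.listResourceRecordSets(
      ListResourceRecordSetsRequest.builder()
          .hostedZoneId("Z1D633PJN98FT9")          // placeholder hosted zone ID
          .startRecordName("www.example.com")
          .startRecordType(RRType.CNAME)
          .maxItems("50")
          .build());
  records.resourceRecordSets().forEach(rrs -> System.out.println(rrs.name() + " " + rrs.type()));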

", "location":"querystring", "locationName":"type" }, @@ -3440,6 +3875,15 @@ "error":{"httpStatusCode":404}, "exception":true }, + "NoSuchCloudWatchLogsLogGroup":{ + "type":"structure", + "members":{ + "message":{"shape":"ErrorMessage"} + }, + "documentation":"

There is no CloudWatch Logs log group with the specified ARN.

", + "error":{"httpStatusCode":404}, + "exception":true + }, "NoSuchDelegationSet":{ "type":"structure", "members":{ @@ -3487,6 +3931,15 @@ "error":{"httpStatusCode":404}, "exception":true }, + "NoSuchQueryLoggingConfig":{ + "type":"structure", + "members":{ + "message":{"shape":"ErrorMessage"} + }, + "documentation":"

There is no DNS query logging configuration with the specified ID.

", + "error":{"httpStatusCode":404}, + "exception":true + }, "NoSuchTrafficPolicy":{ "type":"structure", "members":{ @@ -3568,6 +4021,50 @@ "error":{"httpStatusCode":400}, "exception":true }, + "QueryLoggingConfig":{ + "type":"structure", + "required":[ + "Id", + "HostedZoneId", + "CloudWatchLogsLogGroupArn" + ], + "members":{ + "Id":{ + "shape":"QueryLoggingConfigId", + "documentation":"

The ID for a configuration for DNS query logging.

" + }, + "HostedZoneId":{ + "shape":"ResourceId", + "documentation":"

The ID of the hosted zone that CloudWatch Logs is logging queries for.

" + }, + "CloudWatchLogsLogGroupArn":{ + "shape":"CloudWatchLogsLogGroupArn", + "documentation":"

The Amazon Resource Name (ARN) of the CloudWatch Logs log group that Amazon Route 53 is publishing logs to.

" + } + }, + "documentation":"

A complex type that contains information about a configuration for DNS query logging.

" + }, + "QueryLoggingConfigAlreadyExists":{ + "type":"structure", + "members":{ + "message":{"shape":"ErrorMessage"} + }, + "documentation":"

You can create only one query logging configuration for a hosted zone, and a query logging configuration already exists for this hosted zone.

", + "error":{"httpStatusCode":409}, + "exception":true + }, + "QueryLoggingConfigId":{ + "type":"string", + "max":36, + "min":1 + }, + "QueryLoggingConfigs":{ + "type":"list", + "member":{ + "shape":"QueryLoggingConfig", + "locationName":"QueryLoggingConfig" + } + }, "RData":{ "type":"string", "max":4000 @@ -3585,7 +4082,8 @@ "PTR", "SRV", "SPF", - "AAAA" + "AAAA", + "CAA" ] }, "RecordData":{ @@ -3606,6 +4104,25 @@ "max":30, "min":10 }, + "ResettableElementName":{ + "type":"string", + "enum":[ + "FullyQualifiedDomainName", + "Regions", + "ResourcePath", + "ChildHealthChecks" + ], + "max":64, + "min":1 + }, + "ResettableElementNameList":{ + "type":"list", + "member":{ + "shape":"ResettableElementName", + "locationName":"ResettableElementName" + }, + "max":64 + }, "ResourceDescription":{ "type":"string", "max":256 @@ -3642,7 +4159,7 @@ }, "Type":{ "shape":"RRType", - "documentation":"

The DNS record type. For information about different record types and how data is encoded for them, see Supported DNS Resource Record Types in the Amazon Route 53 Developer Guide.

Valid values for basic resource record sets: A | AAAA | CNAME | MX | NAPTR | NS | PTR | SOA | SPF | SRV | TXT

Values for weighted, latency, geolocation, and failover resource record sets: A | AAAA | CNAME | MX | NAPTR | PTR | SPF | SRV | TXT. When creating a group of weighted, latency, geolocation, or failover resource record sets, specify the same value for all of the resource record sets in the group.

Valid values for multivalue answer resource record sets: A | AAAA | MX | NAPTR | PTR | SPF | SRV | TXT

SPF records were formerly used to verify the identity of the sender of email messages. However, we no longer recommend that you create resource record sets for which the value of Type is SPF. RFC 7208, Sender Policy Framework (SPF) for Authorizing Use of Domains in Email, Version 1, has been updated to say, \"...[I]ts existence and mechanism defined in [RFC4408] have led to some interoperability issues. Accordingly, its use is no longer appropriate for SPF version 1; implementations are not to use it.\" In RFC 7208, see section 14.1, The SPF DNS Record Type.

Values for alias resource record sets:

" + "documentation":"

The DNS record type. For information about different record types and how data is encoded for them, see Supported DNS Resource Record Types in the Amazon Route 53 Developer Guide.

Valid values for basic resource record sets: A | AAAA | CAA | CNAME | MX | NAPTR | NS | PTR | SOA | SPF | SRV | TXT

Values for weighted, latency, geolocation, and failover resource record sets: A | AAAA | CAA | CNAME | MX | NAPTR | PTR | SPF | SRV | TXT. When creating a group of weighted, latency, geolocation, or failover resource record sets, specify the same value for all of the resource record sets in the group.

Valid values for multivalue answer resource record sets: A | AAAA | MX | NAPTR | PTR | SPF | SRV | TXT

SPF records were formerly used to verify the identity of the sender of email messages. However, we no longer recommend that you create resource record sets for which the value of Type is SPF. RFC 7208, Sender Policy Framework (SPF) for Authorizing Use of Domains in Email, Version 1, has been updated to say, \"...[I]ts existence and mechanism defined in [RFC4408] have led to some interoperability issues. Accordingly, its use is no longer appropriate for SPF version 1; implementations are not to use it.\" In RFC 7208, see section 14.1, The SPF DNS Record Type.

Values for alias resource record sets:
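
Because CAA is new in this model, here is a brief, hedged sketch of a basic (non-alias) CAA record set built with the same generated classes; the record value is an example only.

  ResourceRecordSet caaRecord = ResourceRecordSet.builder()
      .name("example.com")
      .type(RRType.CAA)
      .ttl(300L)
      .resourceRecords(ResourceRecord.builder()
          .value("0 issue \"ca.example.net\"")   // example CAA value naming a permitted certificate authority
          .build())
      .build();
  // Submit caaRecord in a ChangeBatch with ChangeAction.CREATE or UPSERT, as in the alias sketch earlier.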

" }, "SetIdentifier":{ "shape":"ResourceRecordSetIdentifier", @@ -3775,10 +4292,36 @@ "type":"string", "max":1024 }, + "ReusableDelegationSetLimit":{ + "type":"structure", + "required":[ + "Type", + "Value" + ], + "members":{ + "Type":{ + "shape":"ReusableDelegationSetLimitType", + "documentation":"

The limit that you requested: MAX_ZONES_BY_REUSABLE_DELEGATION_SET, the maximum number of hosted zones that you can associate with the specified reusable delegation set.

" + }, + "Value":{ + "shape":"LimitValue", + "documentation":"

The current value for the MAX_ZONES_BY_REUSABLE_DELEGATION_SET limit.

" + } + }, + "documentation":"

A complex type that contains the type of limit that you specified in the request and the current value for that limit.

" + }, + "ReusableDelegationSetLimitType":{ + "type":"string", + "enum":["MAX_ZONES_BY_REUSABLE_DELEGATION_SET"] + }, "SearchString":{ "type":"string", "max":255 }, + "ServicePrincipal":{ + "type":"string", + "max":128 + }, "Statistic":{ "type":"string", "enum":[ @@ -3975,7 +4518,7 @@ "members":{ "message":{"shape":"ErrorMessage"} }, - "documentation":"

You have reached the maximum number of active health checks for an AWS account. The default limit is 100. To request a higher limit, create a case with the AWS Support Center.

", + "documentation":"

This health check can't be created because the current account has reached the limit on the number of active health checks.

For information about default limits, see Limits in the Amazon Route 53 Developer Guide.

For information about how to get the current limit for an account, see GetAccountLimit. To request a higher limit, create a case with the AWS Support Center.

You have reached the maximum number of active health checks for an AWS account. To request a higher limit, create a case with the AWS Support Center.

", "exception":true }, "TooManyHostedZones":{ @@ -3986,7 +4529,7 @@ "documentation":"

Descriptive message for the error response.

" } }, - "documentation":"

This hosted zone can't be created because the hosted zone limit is exceeded. To request a limit increase, go to the Amazon Route 53 Contact Us page.

", + "documentation":"

This operation can't be completed either because the current account has reached the limit on the number of hosted zones or because you've reached the limit on the number of hosted zones that can be associated with a reusable delegation set.

For information about default limits, see Limits in the Amazon Route 53 Developer Guide.

To get the current limit on hosted zones that can be created by an account, see GetAccountLimit.

To get the current limit on hosted zones that can be associated with a reusable delegation set, see GetReusableDelegationSetLimit.

To request a higher limit, create a case with the AWS Support Center.

", "error":{"httpStatusCode":400}, "exception":true }, @@ -3998,7 +4541,7 @@ "documentation":"

Descriptive message for the error response.

" } }, - "documentation":"

You've created the maximum number of traffic policies that can be created for the current AWS account. You can request an increase to the limit on the Contact Us page.

", + "documentation":"

This traffic policy can't be created because the current account has reached the limit on the number of traffic policies.

For information about default limits, see Limits in the Amazon Route 53 Developer Guide.

To get the current limit for an account, see GetAccountLimit.

To request a higher limit, create a case with the AWS Support Center.

", "error":{"httpStatusCode":400}, "exception":true }, @@ -4010,7 +4553,7 @@ "documentation":"

Descriptive message for the error response.

" } }, - "documentation":"

You've created the maximum number of traffic policy instances that can be created for the current AWS account. You can request an increase to the limit on the Contact Us page.

", + "documentation":"

This traffic policy instance can't be created because the current account has reached the limit on the number of traffic policy instances.

For information about default limits, see Limits in the Amazon Route 53 Developer Guide.

For information about how to get the current limit for an account, see GetAccountLimit.

To request a higher limit, create a case with the AWS Support Center.

", "error":{"httpStatusCode":400}, "exception":true }, @@ -4168,7 +4711,7 @@ "documentation":"

Descriptive message for the error response.

" } }, - "documentation":"

Traffic policy instance with given Id already exists.

", + "documentation":"

There is already a traffic policy instance with the specified ID.

", "error":{"httpStatusCode":409}, "exception":true }, @@ -4302,6 +4845,10 @@ "InsufficientDataHealthStatus":{ "shape":"InsufficientDataHealthStatus", "documentation":"

When CloudWatch has insufficient data about the metric to determine the alarm state, the status that you want Amazon Route 53 to assign to the health check:

" + }, + "ResetElements":{ + "shape":"ResettableElementNameList", + "documentation":"

A complex type that contains one ResettableElementName element for each element that you want to reset to the default value. Valid values for ResettableElementName include the following:

" } }, "documentation":"

A complex type that contains information about a request to update a health check.

" @@ -4417,6 +4964,10 @@ }, "documentation":"

A complex type that contains information about the resource record sets that Amazon Route 53 created based on a specified traffic policy.

" }, + "UsageCount":{ + "type":"long", + "min":0 + }, "VPC":{ "type":"structure", "members":{ diff --git a/services/route53/src/main/resources/codegen-resources/route53domains/service-2.json b/services/route53/src/main/resources/codegen-resources/route53domains/service-2.json index fcd96e759e19..259f4e7216c8 100644 --- a/services/route53/src/main/resources/codegen-resources/route53domains/service-2.json +++ b/services/route53/src/main/resources/codegen-resources/route53domains/service-2.json @@ -25,6 +25,20 @@ ], "documentation":"

This operation checks the availability of one domain name. Note that if the availability status of a domain is pending, you must submit another request to determine the availability of the domain name.

" }, + "CheckDomainTransferability":{ + "name":"CheckDomainTransferability", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"CheckDomainTransferabilityRequest"}, + "output":{"shape":"CheckDomainTransferabilityResponse"}, + "errors":[ + {"shape":"InvalidInput"}, + {"shape":"UnsupportedTLD"} + ], + "documentation":"

Checks whether a domain name can be transferred to Amazon Route 53.

" + }, "DeleteTagsForDomain":{ "name":"DeleteTagsForDomain", "http":{ @@ -38,7 +52,7 @@ {"shape":"OperationLimitExceeded"}, {"shape":"UnsupportedTLD"} ], - "documentation":"

This operation deletes the specified tags for a domain.

All tag operations are eventually consistent; subsequent operations may not immediately represent all issued operations.

" + "documentation":"

This operation deletes the specified tags for a domain.

All tag operations are eventually consistent; subsequent operations might not immediately represent all issued operations.

" }, "DisableDomainAutoRenew":{ "name":"DisableDomainAutoRenew", @@ -198,7 +212,7 @@ {"shape":"OperationLimitExceeded"}, {"shape":"UnsupportedTLD"} ], - "documentation":"

This operation returns all of the tags that are associated with the specified domain.

All tag operations are eventually consistent; subsequent operations may not immediately represent all issued operations.

" + "documentation":"

This operation returns all of the tags that are associated with the specified domain.

All tag operations are eventually consistent; subsequent operations might not immediately represent all issued operations.

" }, "RegisterDomain":{ "name":"RegisterDomain", @@ -346,7 +360,7 @@ {"shape":"OperationLimitExceeded"}, {"shape":"UnsupportedTLD"} ], - "documentation":"

This operation adds or updates tags for a specified domain.

All tag operations are eventually consistent; subsequent operations may not immediately represent all issued operations.

" + "documentation":"

This operation adds or updates tags for a specified domain.

All tag operations are eventually consistent; subsequent operations might not immediately represent all issued operations.

" }, "ViewBilling":{ "name":"ViewBilling", @@ -419,11 +433,37 @@ "members":{ "Availability":{ "shape":"DomainAvailability", - "documentation":"

Whether the domain name is available for registering.

You can only register domains designated as AVAILABLE.

Valid values:

AVAILABLE

The domain name is available.

AVAILABLE_RESERVED

The domain name is reserved under specific conditions.

AVAILABLE_PREORDER

The domain name is available and can be preordered.

DONT_KNOW

The TLD registry didn't reply with a definitive answer about whether the domain name is available. Amazon Route 53 can return this response for a variety of reasons, for example, the registry is performing maintenance. Try again later.

PENDING

The TLD registry didn't return a response in the expected amount of time. When the response is delayed, it usually takes just a few extra seconds. You can resubmit the request immediately.

RESERVED

The domain name has been reserved for another person or organization.

UNAVAILABLE

The domain name is not available.

UNAVAILABLE_PREMIUM

The domain name is not available.

UNAVAILABLE_RESTRICTED

The domain name is forbidden.

" + "documentation":"

Whether the domain name is available for registering.

You can register only domains designated as AVAILABLE.

Valid values:

AVAILABLE

The domain name is available.

AVAILABLE_RESERVED

The domain name is reserved under specific conditions.

AVAILABLE_PREORDER

The domain name is available and can be preordered.

DONT_KNOW

The TLD registry didn't reply with a definitive answer about whether the domain name is available. Amazon Route 53 can return this response for a variety of reasons, for example, the registry is performing maintenance. Try again later.

PENDING

The TLD registry didn't return a response in the expected amount of time. When the response is delayed, it usually takes just a few extra seconds. You can resubmit the request immediately.

RESERVED

The domain name has been reserved for another person or organization.

UNAVAILABLE

The domain name is not available.

UNAVAILABLE_PREMIUM

The domain name is not available.

UNAVAILABLE_RESTRICTED

The domain name is forbidden.

" } }, "documentation":"

The CheckDomainAvailability response includes the following elements.

" }, + "CheckDomainTransferabilityRequest":{ + "type":"structure", + "required":["DomainName"], + "members":{ + "DomainName":{ + "shape":"DomainName", + "documentation":"

The name of the domain that you want to transfer to Amazon Route 53.

Constraints: The domain name can contain only the letters a through z, the numbers 0 through 9, and hyphen (-). Internationalized Domain Names are not supported.

" + }, + "AuthCode":{ + "shape":"DomainAuthCode", + "documentation":"

If the registrar for the top-level domain (TLD) requires an authorization code to transfer the domain, the code that you got from the current registrar for the domain.

" + } + }, + "documentation":"

The CheckDomainTransferability request contains the following elements.

" + }, + "CheckDomainTransferabilityResponse":{ + "type":"structure", + "required":["Transferability"], + "members":{ + "Transferability":{ + "shape":"DomainTransferability", + "documentation":"

A complex type that contains information about whether the specified domain can be transferred to Amazon Route 53.

" + } + }, + "documentation":"

The CheckDomainTransferability response includes the following elements.

" + }, "City":{ "type":"string", "max":255 @@ -836,8 +876,7 @@ }, "DomainName":{ "type":"string", - "max":255, - "pattern":"[a-zA-Z0-9_\\-.]*" + "max":255 }, "DomainStatus":{"type":"string"}, "DomainStatusList":{ @@ -889,6 +928,12 @@ "type":"list", "member":{"shape":"DomainSummary"} }, + "DomainTransferability":{ + "type":"structure", + "members":{ + "Transferable":{"shape":"Transferable"} + } + }, "DuplicateRequest":{ "type":"structure", "members":{ @@ -988,11 +1033,16 @@ "ES_LEGAL_FORM", "FI_BUSINESS_NUMBER", "FI_ID_NUMBER", + "FI_NATIONALITY", + "FI_ORGANIZATION_TYPE", "IT_PIN", + "IT_REGISTRANT_ENTITY_TYPE", "RU_PASSPORT_DATA", "SE_ID_NUMBER", "SG_ID_NUMBER", - "VAT_NUMBER" + "VAT_NUMBER", + "UK_CONTACT_TYPE", + "UK_COMPANY_NUMBER" ] }, "ExtraParamValue":{ @@ -1221,10 +1271,10 @@ "members":{ "message":{ "shape":"ErrorMessage", - "documentation":"

The requested item is not acceptable. For example, for an OperationId it may refer to the ID of an operation that is already completed. For a domain name, it may not be a valid domain name or belong to the requester account.

" + "documentation":"

The requested item is not acceptable. For example, for an OperationId it might refer to the ID of an operation that is already completed. For a domain name, it might not be a valid domain name or belong to the requester account.

" } }, - "documentation":"

The requested item is not acceptable. For example, for an OperationId it may refer to the ID of an operation that is already completed. For a domain name, it may not be a valid domain name or belong to the requester account.

", + "documentation":"

The requested item is not acceptable. For example, for an OperationId it might refer to the ID of an operation that is already completed. For a domain name, it might not be a valid domain name or belong to the requester account.

", "exception":true }, "InvoiceId":{"type":"string"}, @@ -1397,7 +1447,16 @@ "UPDATE_DOMAIN_CONTACT", "UPDATE_NAMESERVER", "CHANGE_PRIVACY_PROTECTION", - "DOMAIN_LOCK" + "DOMAIN_LOCK", + "ENABLE_AUTORENEW", + "DISABLE_AUTORENEW", + "ADD_DNSSEC", + "REMOVE_DNSSEC", + "EXPIRE_DOMAIN", + "TRANSFER_OUT_DOMAIN", + "CHANGE_DOMAIN_OWNER", + "RENEW_DOMAIN", + "PUSH_DOMAIN" ] }, "PageMarker":{ @@ -1679,15 +1738,24 @@ }, "documentation":"

The TransferDomain response includes the following element.

" }, + "Transferable":{ + "type":"string", + "documentation":"

Whether the domain name can be transferred to Amazon Route 53.

You can transfer only domains that have a value of TRANSFERABLE for Transferable.

Valid values:

TRANSFERABLE

The domain name can be transferred to Amazon Route 53.

UNTRANSFERABLE

The domain name can't be transferred to Amazon Route 53.

DONT_KNOW

Reserved for future use.

", + "enum":[ + "TRANSFERABLE", + "UNTRANSFERABLE", + "DONT_KNOW" + ] + }, "UnsupportedTLD":{ "type":"structure", "members":{ "message":{ "shape":"ErrorMessage", - "documentation":"

Amazon Route 53 does not support this top-level domain.

" + "documentation":"

Amazon Route 53 does not support this top-level domain (TLD).

" } }, - "documentation":"

Amazon Route 53 does not support this top-level domain.

", + "documentation":"

Amazon Route 53 does not support this top-level domain (TLD).

", "exception":true }, "UpdateDomainContactPrivacyRequest":{ @@ -1771,7 +1839,8 @@ }, "FIAuthKey":{ "shape":"FIAuthKey", - "documentation":"

The authorization key for .fi domains

" + "documentation":"

The authorization key for .fi domains.

", + "deprecated":true }, "Nameservers":{ "shape":"NameserverList", diff --git a/services/servicecatalog/src/main/resources/codegen-resources/service-2.json b/services/servicecatalog/src/main/resources/codegen-resources/service-2.json index 63b883381c31..5fc0c1f3bada 100644 --- a/services/servicecatalog/src/main/resources/codegen-resources/service-2.json +++ b/services/servicecatalog/src/main/resources/codegen-resources/service-2.json @@ -56,6 +56,38 @@ ], "documentation":"

Associates a product with a portfolio.

" }, + "AssociateTagOptionWithResource":{ + "name":"AssociateTagOptionWithResource", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"AssociateTagOptionWithResourceInput"}, + "output":{"shape":"AssociateTagOptionWithResourceOutput"}, + "errors":[ + {"shape":"TagOptionNotMigratedException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"InvalidParametersException"}, + {"shape":"LimitExceededException"}, + {"shape":"DuplicateResourceException"}, + {"shape":"InvalidStateException"} + ], + "documentation":"

Associate a TagOption identifier with a resource identifier.

" + }, + "CopyProduct":{ + "name":"CopyProduct", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"CopyProductInput"}, + "output":{"shape":"CopyProductOutput"}, + "errors":[ + {"shape":"ResourceNotFoundException"}, + {"shape":"InvalidParametersException"} + ], + "documentation":"

Copies the specified source product to the specified target product or a new product.

You can copy the product to the same account or another account. You can copy the product to the same region or another region.

This operation is performed asynchronously. To track the progress of the operation, use DescribeCopyProductStatus.

" + }, "CreateConstraint":{ "name":"CreateConstraint", "http":{ @@ -82,7 +114,8 @@ "output":{"shape":"CreatePortfolioOutput"}, "errors":[ {"shape":"InvalidParametersException"}, - {"shape":"LimitExceededException"} + {"shape":"LimitExceededException"}, + {"shape":"TagOptionNotMigratedException"} ], "documentation":"

Creates a new portfolio.

" }, @@ -111,7 +144,8 @@ "output":{"shape":"CreateProductOutput"}, "errors":[ {"shape":"InvalidParametersException"}, - {"shape":"LimitExceededException"} + {"shape":"LimitExceededException"}, + {"shape":"TagOptionNotMigratedException"} ], "documentation":"

Creates a new product.

" }, @@ -128,7 +162,22 @@ {"shape":"InvalidParametersException"}, {"shape":"LimitExceededException"} ], - "documentation":"

Create a new provisioning artifact for the specified product. This operation does not work with a product that has been shared with you.

See the bottom of this topic for an example JSON request.

" + "documentation":"

Create a new provisioning artifact for the specified product. This operation does not work with a product that has been shared with you.

" + }, + "CreateTagOption":{ + "name":"CreateTagOption", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"CreateTagOptionInput"}, + "output":{"shape":"CreateTagOptionOutput"}, + "errors":[ + {"shape":"TagOptionNotMigratedException"}, + {"shape":"DuplicateResourceException"}, + {"shape":"LimitExceededException"} + ], + "documentation":"

Create a new TagOption.

" }, "DeleteConstraint":{ "name":"DeleteConstraint", @@ -155,7 +204,8 @@ "errors":[ {"shape":"ResourceNotFoundException"}, {"shape":"InvalidParametersException"}, - {"shape":"ResourceInUseException"} + {"shape":"ResourceInUseException"}, + {"shape":"TagOptionNotMigratedException"} ], "documentation":"

Deletes the specified portfolio. This operation does not work with a portfolio that has been shared with you or if it has products, users, constraints, or shared accounts associated with it.

" }, @@ -183,7 +233,8 @@ "errors":[ {"shape":"ResourceNotFoundException"}, {"shape":"ResourceInUseException"}, - {"shape":"InvalidParametersException"} + {"shape":"InvalidParametersException"}, + {"shape":"TagOptionNotMigratedException"} ], "documentation":"

Deletes the specified product. This operation does not work with a product that has been shared with you or is associated with a portfolio.

" }, @@ -215,6 +266,19 @@ ], "documentation":"

Retrieves detailed information for a specified constraint.

" }, + "DescribeCopyProductStatus":{ + "name":"DescribeCopyProductStatus", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"DescribeCopyProductStatusInput"}, + "output":{"shape":"DescribeCopyProductStatusOutput"}, + "errors":[ + {"shape":"ResourceNotFoundException"} + ], + "documentation":"

Describes the status of the specified copy product operation.

" + }, "DescribePortfolio":{ "name":"DescribePortfolio", "http":{ @@ -307,7 +371,7 @@ {"shape":"InvalidParametersException"}, {"shape":"ResourceNotFoundException"} ], - "documentation":"

Provides information about parameters required to provision a specified product in a specified manner. Use this operation to obtain the list of ProvisioningArtifactParameters parameters available to call the ProvisionProduct operation for the specified product.

" + "documentation":"

Provides information about parameters required to provision a specified product in a specified manner. Use this operation to obtain the list of ProvisioningArtifactParameters parameters available to call the ProvisionProduct operation for the specified product.

If the output contains a TagOption key with an empty list of values, there is a TagOption conflict for that key. The end user cannot take action to fix the conflict, and launch is not blocked. In subsequent calls to the ProvisionProduct operation, do not include conflicted TagOption keys as tags. Calls to ProvisionProduct with empty TagOption values cause the error \"Parameter validation failed: Missing required parameter in Tags[N]:Value\". Calls to ProvisionProduct with conflicted TagOption keys automatically tag the provisioned product with the conflicted keys and the value \"sc-tagoption-conflict-portfolioId-productId\".

" }, "DescribeRecord":{ "name":"DescribeRecord", @@ -322,6 +386,20 @@ ], "documentation":"

Retrieves a paginated list of the full details of a specific request. Use this operation after calling a request operation (ProvisionProduct, TerminateProvisionedProduct, or UpdateProvisionedProduct).

" }, + "DescribeTagOption":{ + "name":"DescribeTagOption", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"DescribeTagOptionInput"}, + "output":{"shape":"DescribeTagOptionOutput"}, + "errors":[ + {"shape":"TagOptionNotMigratedException"}, + {"shape":"ResourceNotFoundException"} + ], + "documentation":"

Describes a TagOption.

" + }, "DisassociatePrincipalFromPortfolio":{ "name":"DisassociatePrincipalFromPortfolio", "http":{ @@ -346,10 +424,25 @@ "output":{"shape":"DisassociateProductFromPortfolioOutput"}, "errors":[ {"shape":"ResourceNotFoundException"}, + {"shape":"ResourceInUseException"}, {"shape":"InvalidParametersException"} ], "documentation":"

Disassociates the specified product from the specified portfolio.

" }, + "DisassociateTagOptionFromResource":{ + "name":"DisassociateTagOptionFromResource", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"DisassociateTagOptionFromResourceInput"}, + "output":{"shape":"DisassociateTagOptionFromResourceOutput"}, + "errors":[ + {"shape":"TagOptionNotMigratedException"}, + {"shape":"ResourceNotFoundException"} + ], + "documentation":"

Disassociates a TagOption from a resource.

" + }, "ListAcceptedPortfolioShares":{ "name":"ListAcceptedPortfolioShares", "http":{ @@ -472,6 +565,35 @@ ], "documentation":"

Returns a paginated list of all performed requests, in the form of RecordDetails objects that are filtered as specified.

" }, + "ListResourcesForTagOption":{ + "name":"ListResourcesForTagOption", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"ListResourcesForTagOptionInput"}, + "output":{"shape":"ListResourcesForTagOptionOutput"}, + "errors":[ + {"shape":"TagOptionNotMigratedException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"InvalidParametersException"} + ], + "documentation":"

Lists resources associated with a TagOption.

" + }, + "ListTagOptions":{ + "name":"ListTagOptions", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"ListTagOptionsInput"}, + "output":{"shape":"ListTagOptionsOutput"}, + "errors":[ + {"shape":"TagOptionNotMigratedException"}, + {"shape":"InvalidParametersException"} + ], + "documentation":"

Lists detailed TagOptions information.

" + }, "ProvisionProduct":{ "name":"ProvisionProduct", "http":{ @@ -485,7 +607,7 @@ {"shape":"ResourceNotFoundException"}, {"shape":"DuplicateResourceException"} ], - "documentation":"

Requests a provision of a specified product. A provisioned product is a resourced instance for a product. For example, provisioning a CloudFormation-template-backed product results in launching a CloudFormation stack and all the underlying resources that come with it.

You can check the status of this request using the DescribeRecord operation.

" + "documentation":"

Requests a provision of a specified product. A provisioned product is a resourced instance for a product. For example, provisioning a CloudFormation-template-backed product results in launching a CloudFormation stack and all the underlying resources that come with it.

You can check the status of this request using the DescribeRecord operation. The error \"Parameter validation failed: Missing required parameter in Tags[N]:Value\" indicates that your request contains a tag that has a tag key but no corresponding tag value (value is empty or null). Your call might have included values returned from a DescribeProvisioningParameters call that resulted in a TagOption key with an empty list. This happens when TagOption keys are in conflict. For more information, see DescribeProvisioningParameters.

" }, "RejectPortfolioShare":{ "name":"RejectPortfolioShare", @@ -578,7 +700,8 @@ "errors":[ {"shape":"InvalidParametersException"}, {"shape":"ResourceNotFoundException"}, - {"shape":"LimitExceededException"} + {"shape":"LimitExceededException"}, + {"shape":"TagOptionNotMigratedException"} ], "documentation":"

Updates the specified portfolio's details. This operation does not work with a product that has been shared with you.

" }, @@ -592,7 +715,8 @@ "output":{"shape":"UpdateProductOutput"}, "errors":[ {"shape":"ResourceNotFoundException"}, - {"shape":"InvalidParametersException"} + {"shape":"InvalidParametersException"}, + {"shape":"TagOptionNotMigratedException"} ], "documentation":"

Updates an existing product.

" }, @@ -623,6 +747,22 @@ {"shape":"InvalidParametersException"} ], "documentation":"

Updates an existing provisioning artifact's information. This operation does not work on a provisioning artifact associated with a product that has been shared with you.

" + }, + "UpdateTagOption":{ + "name":"UpdateTagOption", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"UpdateTagOptionInput"}, + "output":{"shape":"UpdateTagOptionOutput"}, + "errors":[ + {"shape":"TagOptionNotMigratedException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"DuplicateResourceException"}, + {"shape":"InvalidParametersException"} + ], + "documentation":"

Updates an existing TagOption.

" } }, "shapes":{ @@ -633,7 +773,7 @@ "members":{ "AcceptLanguage":{ "shape":"AcceptLanguage", - "documentation":"

The language code to use for this operation. Supported language codes are as follows:

\"en\" (English)

\"jp\" (Japanese)

\"zh\" (Chinese)

If no code is specified, \"en\" is used as the default.

" + "documentation":"

The language code.

" }, "PortfolioId":{ "shape":"Id", @@ -698,7 +838,7 @@ "members":{ "AcceptLanguage":{ "shape":"AcceptLanguage", - "documentation":"

The language code to use for this operation. Supported language codes are as follows:

\"en\" (English)

\"jp\" (Japanese)

\"zh\" (Chinese)

If no code is specified, \"en\" is used as the default.

" + "documentation":"

The language code.

" }, "PortfolioId":{ "shape":"Id", @@ -728,7 +868,7 @@ "members":{ "AcceptLanguage":{ "shape":"AcceptLanguage", - "documentation":"

The language code to use for this operation. Supported language codes are as follows:

\"en\" (English)

\"jp\" (Japanese)

\"zh\" (Chinese)

If no code is specified, \"en\" is used as the default.

" + "documentation":"

The language code.

" }, "ProductId":{ "shape":"Id", @@ -749,6 +889,28 @@ "members":{ } }, + "AssociateTagOptionWithResourceInput":{ + "type":"structure", + "required":[ + "ResourceId", + "TagOptionId" + ], + "members":{ + "ResourceId":{ + "shape":"ResourceId", + "documentation":"

The resource identifier.

" + }, + "TagOptionId":{ + "shape":"TagOptionId", + "documentation":"

The TagOption identifier.

" + } + } + }, + "AssociateTagOptionWithResourceOutput":{ + "type":"structure", + "members":{ + } + }, "AttributeValue":{"type":"string"}, "ConstraintDescription":{ "type":"string", @@ -804,6 +966,69 @@ "max":1024, "min":1 }, + "CopyOption":{ + "type":"string", + "enum":["CopyTags"] + }, + "CopyOptions":{ + "type":"list", + "member":{"shape":"CopyOption"} + }, + "CopyProductInput":{ + "type":"structure", + "required":[ + "SourceProductArn", + "IdempotencyToken" + ], + "members":{ + "AcceptLanguage":{ + "shape":"AcceptLanguage", + "documentation":"

The language code.

" + }, + "SourceProductArn":{ + "shape":"ProductArn", + "documentation":"

The Amazon Resource Name (ARN) of the source product.

" + }, + "TargetProductId":{ + "shape":"Id", + "documentation":"

The ID of the target product. By default, a new product is created.

" + }, + "TargetProductName":{ + "shape":"ProductViewName", + "documentation":"

A name for the target product. The default is the name of the source product.

" + }, + "SourceProvisioningArtifactIdentifiers":{ + "shape":"SourceProvisioningArtifactProperties", + "documentation":"

The IDs of the product versions to copy. By default, all provisioning artifacts are copied.

" + }, + "CopyOptions":{ + "shape":"CopyOptions", + "documentation":"

The copy options. If the value is CopyTags, the tags from the source product are copied to the target product.

" + }, + "IdempotencyToken":{ + "shape":"IdempotencyToken", + "documentation":"

A token to disambiguate duplicate requests. You can use the same input in multiple requests, provided that you also specify a different idempotency token for each request.

", + "idempotencyToken":true + } + } + }, + "CopyProductOutput":{ + "type":"structure", + "members":{ + "CopyProductToken":{ + "shape":"Id", + "documentation":"

A unique token to pass to DescribeCopyProductStatus to track the progress of the operation.

" + } + } + }, + "CopyProductStatus":{ + "type":"string", + "enum":[ + "SUCCEEDED", + "IN_PROGRESS", + "FAILED" + ] + }, "CreateConstraintInput":{ "type":"structure", "required":[ @@ -816,7 +1041,7 @@ "members":{ "AcceptLanguage":{ "shape":"AcceptLanguage", - "documentation":"

The language code to use for this operation. Supported language codes are as follows:

\"en\" (English)

\"jp\" (Japanese)

\"zh\" (Chinese)

If no code is specified, \"en\" is used as the default.

" + "documentation":"

The language code.

" }, "PortfolioId":{ "shape":"Id", @@ -828,7 +1053,7 @@ }, "Parameters":{ "shape":"ConstraintParameters", - "documentation":"

The constraint parameters. Expected values vary depending on which Type is specified. For examples, see the bottom of this topic.

For Type LAUNCH, the RoleArn property is required.

For Type NOTIFICATION, the NotificationArns property is required.

For Type TEMPLATE, the Rules property is required.

" + "documentation":"

The constraint parameters. Expected values vary depending on which Type is specified. For more information, see the Examples section.

For Type LAUNCH, the RoleArn property is required.

For Type NOTIFICATION, the NotificationArns property is required.

For Type TEMPLATE, the Rules property is required.

" }, "Type":{ "shape":"ConstraintType", @@ -840,7 +1065,7 @@ }, "IdempotencyToken":{ "shape":"IdempotencyToken", - "documentation":"

A token to disambiguate duplicate requests. You can create multiple resources using the same input in multiple requests, provided that you also specify a different idempotency token for each request.

", + "documentation":"

A token to disambiguate duplicate requests. You can use the same input in multiple requests, provided that you also specify a different idempotency token for each request.

", "idempotencyToken":true } } @@ -872,7 +1097,7 @@ "members":{ "AcceptLanguage":{ "shape":"AcceptLanguage", - "documentation":"

The language code to use for this operation. Supported language codes are as follows:

\"en\" (English)

\"jp\" (Japanese)

\"zh\" (Chinese)

If no code is specified, \"en\" is used as the default.

" + "documentation":"

The language code.

" }, "DisplayName":{ "shape":"PortfolioDisplayName", @@ -892,7 +1117,7 @@ }, "IdempotencyToken":{ "shape":"IdempotencyToken", - "documentation":"

A token to disambiguate duplicate requests. You can create multiple resources using the same input in multiple requests, provided that you also specify a different idempotency token for each request.

", + "documentation":"

A token to disambiguate duplicate requests. You can use the same input in multiple requests, provided that you also specify a different idempotency token for each request.

", "idempotencyToken":true } } @@ -919,7 +1144,7 @@ "members":{ "AcceptLanguage":{ "shape":"AcceptLanguage", - "documentation":"

The language code to use for this operation. Supported language codes are as follows:

\"en\" (English)

\"jp\" (Japanese)

\"zh\" (Chinese)

If no code is specified, \"en\" is used as the default.

" + "documentation":"

The language code.

" }, "PortfolioId":{ "shape":"Id", @@ -948,7 +1173,7 @@ "members":{ "AcceptLanguage":{ "shape":"AcceptLanguage", - "documentation":"

The language code to use for this operation. Supported language codes are as follows:

\"en\" (English)

\"jp\" (Japanese)

\"zh\" (Chinese)

If no code is specified, \"en\" is used as the default.

" + "documentation":"

The language code.

" }, "Name":{ "shape":"ProductViewName", @@ -992,7 +1217,7 @@ }, "IdempotencyToken":{ "shape":"IdempotencyToken", - "documentation":"

A token to disambiguate duplicate requests. You can create multiple resources using the same input in multiple requests, provided that you also specify a different idempotency token for each request.

", + "documentation":"

A token to disambiguate duplicate requests. You can use the same input in multiple requests, provided that you also specify a different idempotency token for each request.

", "idempotencyToken":true } } @@ -1024,7 +1249,7 @@ "members":{ "AcceptLanguage":{ "shape":"AcceptLanguage", - "documentation":"

The language code to use for this operation. Supported language codes are as follows:

\"en\" (English)

\"jp\" (Japanese)

\"zh\" (Chinese)

If no code is specified, \"en\" is used as the default.

" + "documentation":"

The language code.

" }, "ProductId":{ "shape":"Id", @@ -1036,7 +1261,7 @@ }, "IdempotencyToken":{ "shape":"IdempotencyToken", - "documentation":"

A token to disambiguate duplicate requests. You can create multiple resources using the same input in multiple requests, provided that you also specify a different idempotency token for each request.

", + "documentation":"

A token to disambiguate duplicate requests. You can use the same input in multiple requests, provided that you also specify a different idempotency token for each request.

", "idempotencyToken":true } } @@ -1058,6 +1283,32 @@ } } }, + "CreateTagOptionInput":{ + "type":"structure", + "required":[ + "Key", + "Value" + ], + "members":{ + "Key":{ + "shape":"TagOptionKey", + "documentation":"

The TagOption key.

" + }, + "Value":{ + "shape":"TagOptionValue", + "documentation":"

The TagOption value.

" + } + } + }, + "CreateTagOptionOutput":{ + "type":"structure", + "members":{ + "TagOptionDetail":{ + "shape":"TagOptionDetail", + "documentation":"

The resulting detailed TagOption information.

" + } + } + }, "CreatedTime":{"type":"timestamp"}, "CreationTime":{"type":"timestamp"}, "DefaultValue":{"type":"string"}, @@ -1067,7 +1318,7 @@ "members":{ "AcceptLanguage":{ "shape":"AcceptLanguage", - "documentation":"

The language code to use for this operation. Supported language codes are as follows:

\"en\" (English)

\"jp\" (Japanese)

\"zh\" (Chinese)

If no code is specified, \"en\" is used as the default.

" + "documentation":"

The language code.

" }, "Id":{ "shape":"Id", @@ -1086,7 +1337,7 @@ "members":{ "AcceptLanguage":{ "shape":"AcceptLanguage", - "documentation":"

The language code to use for this operation. Supported language codes are as follows:

\"en\" (English)

\"jp\" (Japanese)

\"zh\" (Chinese)

If no code is specified, \"en\" is used as the default.

" + "documentation":"

The language code.

" }, "Id":{ "shape":"Id", @@ -1108,7 +1359,7 @@ "members":{ "AcceptLanguage":{ "shape":"AcceptLanguage", - "documentation":"

The language code to use for this operation. Supported language codes are as follows:

\"en\" (English)

\"jp\" (Japanese)

\"zh\" (Chinese)

If no code is specified, \"en\" is used as the default.

" + "documentation":"

The language code.

" }, "PortfolioId":{ "shape":"Id", @@ -1131,7 +1382,7 @@ "members":{ "AcceptLanguage":{ "shape":"AcceptLanguage", - "documentation":"

The language code to use for this operation. Supported language codes are as follows:

\"en\" (English)

\"jp\" (Japanese)

\"zh\" (Chinese)

If no code is specified, \"en\" is used as the default.

" + "documentation":"

The language code.

" }, "Id":{ "shape":"Id", @@ -1153,7 +1404,7 @@ "members":{ "AcceptLanguage":{ "shape":"AcceptLanguage", - "documentation":"

The language code to use for this operation. Supported language codes are as follows:

\"en\" (English)

\"jp\" (Japanese)

\"zh\" (Chinese)

If no code is specified, \"en\" is used as the default.

" + "documentation":"

The language code.

" }, "ProductId":{ "shape":"Id", @@ -1176,7 +1427,7 @@ "members":{ "AcceptLanguage":{ "shape":"AcceptLanguage", - "documentation":"

The language code to use for this operation. Supported language codes are as follows:

\"en\" (English)

\"jp\" (Japanese)

\"zh\" (Chinese)

If no code is specified, \"en\" is used as the default.

" + "documentation":"

The language code.

" }, "Id":{ "shape":"Id", @@ -1201,13 +1452,44 @@ } } }, + "DescribeCopyProductStatusInput":{ + "type":"structure", + "required":["CopyProductToken"], + "members":{ + "AcceptLanguage":{ + "shape":"AcceptLanguage", + "documentation":"

The language code.

" + }, + "CopyProductToken":{ + "shape":"Id", + "documentation":"

The token returned from the call to CopyProduct that initiated the operation.

" + } + } + }, + "DescribeCopyProductStatusOutput":{ + "type":"structure", + "members":{ + "CopyProductStatus":{ + "shape":"CopyProductStatus", + "documentation":"

The status of the copy product operation.

" + }, + "TargetProductId":{ + "shape":"Id", + "documentation":"

The ID of the copied product.

" + }, + "StatusDetail":{ + "shape":"StatusDetail", + "documentation":"

The status message.

" + } + } + }, "DescribePortfolioInput":{ "type":"structure", "required":["Id"], "members":{ "AcceptLanguage":{ "shape":"AcceptLanguage", - "documentation":"

The language code to use for this operation. Supported language codes are as follows:

\"en\" (English)

\"jp\" (Japanese)

\"zh\" (Chinese)

If no code is specified, \"en\" is used as the default.

" + "documentation":"

The language code.

" }, "Id":{ "shape":"Id", @@ -1225,6 +1507,10 @@ "Tags":{ "shape":"Tags", "documentation":"

Tags associated with the portfolio.

" + }, + "TagOptions":{ + "shape":"TagOptionDetails", + "documentation":"

TagOptions associated with the portfolio.

" } } }, @@ -1234,7 +1520,7 @@ "members":{ "AcceptLanguage":{ "shape":"AcceptLanguage", - "documentation":"

The language code to use for this operation. Supported language codes are as follows:

\"en\" (English)

\"jp\" (Japanese)

\"zh\" (Chinese)

If no code is specified, \"en\" is used as the default.

" + "documentation":"

The language code.

" }, "Id":{ "shape":"Id", @@ -1256,6 +1542,10 @@ "Tags":{ "shape":"Tags", "documentation":"

Tags associated with the product.

" + }, + "TagOptions":{ + "shape":"TagOptionDetails", + "documentation":"

List of TagOptions associated with the product.

" } } }, @@ -1265,7 +1555,7 @@ "members":{ "AcceptLanguage":{ "shape":"AcceptLanguage", - "documentation":"

The language code to use for this operation. Supported language codes are as follows:

\"en\" (English)

\"jp\" (Japanese)

\"zh\" (Chinese)

If no code is specified, \"en\" is used as the default.

" + "documentation":"

The language code.

" }, "Id":{ "shape":"Id", @@ -1292,7 +1582,7 @@ "members":{ "AcceptLanguage":{ "shape":"AcceptLanguage", - "documentation":"

The language code to use for this operation. Supported language codes are as follows:

\"en\" (English)

\"jp\" (Japanese)

\"zh\" (Chinese)

If no code is specified, \"en\" is used as the default.

" + "documentation":"

The language code.

" }, "Id":{ "shape":"Id", @@ -1319,7 +1609,7 @@ "members":{ "AcceptLanguage":{ "shape":"AcceptLanguage", - "documentation":"

The language code to use for this operation. Supported language codes are as follows:

\"en\" (English)

\"jp\" (Japanese)

\"zh\" (Chinese)

If no code is specified, \"en\" is used as the default.

" + "documentation":"

The language code.

" }, "Id":{ "shape":"Id", @@ -1345,7 +1635,7 @@ "members":{ "AcceptLanguage":{ "shape":"AcceptLanguage", - "documentation":"

The language code to use for this operation. Supported language codes are as follows:

\"en\" (English)

\"jp\" (Japanese)

\"zh\" (Chinese)

If no code is specified, \"en\" is used as the default.

" + "documentation":"

The language code.

" }, "ProvisioningArtifactId":{ "shape":"Id", @@ -1357,7 +1647,7 @@ }, "Verbose":{ "shape":"Verbose", - "documentation":"

Selects verbose results. If set to true, the CloudFormation template is returned.

" + "documentation":"

Enables a verbose level of detail for the provisioning artifact.

" } } }, @@ -1387,7 +1677,7 @@ "members":{ "AcceptLanguage":{ "shape":"AcceptLanguage", - "documentation":"

The language code to use for this operation. Supported language codes are as follows:

\"en\" (English)

\"jp\" (Japanese)

\"zh\" (Chinese)

If no code is specified, \"en\" is used as the default.

" + "documentation":"

The language code.

" }, "ProductId":{ "shape":"Id", @@ -1417,6 +1707,10 @@ "UsageInstructions":{ "shape":"UsageInstructions", "documentation":"

Any additional metadata specifically related to the provisioning of the product. For example, see the Version field of the CloudFormation template.

" + }, + "TagOptions":{ + "shape":"TagOptionSummaries", + "documentation":"

The list of TagOptions associated with the provisioning parameters.

" } } }, @@ -1426,7 +1720,7 @@ "members":{ "AcceptLanguage":{ "shape":"AcceptLanguage", - "documentation":"

The language code to use for this operation. Supported language codes are as follows:

\"en\" (English)

\"jp\" (Japanese)

\"zh\" (Chinese)

If no code is specified, \"en\" is used as the default.

" + "documentation":"

The language code.

" }, "Id":{ "shape":"Id", @@ -1459,6 +1753,25 @@ } } }, + "DescribeTagOptionInput":{ + "type":"structure", + "required":["Id"], + "members":{ + "Id":{ + "shape":"TagOptionId", + "documentation":"

The identifier of the TagOption.

" + } + } + }, + "DescribeTagOptionOutput":{ + "type":"structure", + "members":{ + "TagOptionDetail":{ + "shape":"TagOptionDetail", + "documentation":"

The resulting detailed TagOption information.

" + } + } + }, "Description":{"type":"string"}, "DisassociatePrincipalFromPortfolioInput":{ "type":"structure", @@ -1469,7 +1782,7 @@ "members":{ "AcceptLanguage":{ "shape":"AcceptLanguage", - "documentation":"

The language code to use for this operation. Supported language codes are as follows:

\"en\" (English)

\"jp\" (Japanese)

\"zh\" (Chinese)

If no code is specified, \"en\" is used as the default.

" + "documentation":"

The language code.

" }, "PortfolioId":{ "shape":"Id", @@ -1495,7 +1808,7 @@ "members":{ "AcceptLanguage":{ "shape":"AcceptLanguage", - "documentation":"

The language code to use for this operation. Supported language codes are as follows:

\"en\" (English)

\"jp\" (Japanese)

\"zh\" (Chinese)

If no code is specified, \"en\" is used as the default.

" + "documentation":"

The language code.

" }, "ProductId":{ "shape":"Id", @@ -1512,6 +1825,28 @@ "members":{ } }, + "DisassociateTagOptionFromResourceInput":{ + "type":"structure", + "required":[ + "ResourceId", + "TagOptionId" + ], + "members":{ + "ResourceId":{ + "shape":"ResourceId", + "documentation":"

Identifier of the resource from which to disassociate the TagOption.

" + }, + "TagOptionId":{ + "shape":"TagOptionId", + "documentation":"

Identifier of the TagOption to disassociate from the resource.

" + } + } + }, + "DisassociateTagOptionFromResourceOutput":{ + "type":"structure", + "members":{ + } + }, "DuplicateResourceException":{ "type":"structure", "members":{ @@ -1543,6 +1878,13 @@ "documentation":"

One or more parameters provided to the operation are invalid.

", "exception":true }, + "InvalidStateException":{ + "type":"structure", + "members":{ + }, + "documentation":"

An attempt was made to modify a resource that is in an invalid state. Inspect the resource you are using for this operation to ensure that all resource states are valid before retrying the operation.

", + "exception":true + }, "LastRequestId":{"type":"string"}, "LaunchPathSummaries":{ "type":"list", @@ -1582,7 +1924,7 @@ "members":{ "AcceptLanguage":{ "shape":"AcceptLanguage", - "documentation":"

The language code to use for this operation. Supported language codes are as follows:

\"en\" (English)

\"jp\" (Japanese)

\"zh\" (Chinese)

If no code is specified, \"en\" is used as the default.

" + "documentation":"

The language code.

" }, "PageToken":{ "shape":"PageToken", @@ -1613,7 +1955,7 @@ "members":{ "AcceptLanguage":{ "shape":"AcceptLanguage", - "documentation":"

The language code to use for this operation. Supported language codes are as follows:

\"en\" (English)

\"jp\" (Japanese)

\"zh\" (Chinese)

If no code is specified, \"en\" is used as the default.

" + "documentation":"

The language code.

" }, "PortfolioId":{ "shape":"Id", @@ -1652,11 +1994,11 @@ "members":{ "AcceptLanguage":{ "shape":"AcceptLanguage", - "documentation":"

The language code to use for this operation. Supported language codes are as follows:

\"en\" (English)

\"jp\" (Japanese)

\"zh\" (Chinese)

If no code is specified, \"en\" is used as the default.

" + "documentation":"

The language code.

" }, "ProductId":{ "shape":"Id", - "documentation":"

The product identifier.. Identifies the product for which to retrieve LaunchPathSummaries information.

" + "documentation":"

The product identifier. Identifies the product for which to retrieve LaunchPathSummaries information.

" }, "PageSize":{ "shape":"PageSize", @@ -1687,7 +2029,7 @@ "members":{ "AcceptLanguage":{ "shape":"AcceptLanguage", - "documentation":"

The language code to use for this operation. Supported language codes are as follows:

\"en\" (English)

\"jp\" (Japanese)

\"zh\" (Chinese)

If no code is specified, \"en\" is used as the default.

" + "documentation":"

The language code.

" }, "PortfolioId":{ "shape":"Id", @@ -1714,7 +2056,7 @@ "members":{ "AcceptLanguage":{ "shape":"AcceptLanguage", - "documentation":"

The language code to use for this operation. Supported language codes are as follows:

\"en\" (English)

\"jp\" (Japanese)

\"zh\" (Chinese)

If no code is specified, \"en\" is used as the default.

" + "documentation":"

The language code.

" }, "ProductId":{ "shape":"Id", @@ -1748,7 +2090,7 @@ "members":{ "AcceptLanguage":{ "shape":"AcceptLanguage", - "documentation":"

The language code to use for this operation. Supported language codes are as follows:

\"en\" (English)

\"jp\" (Japanese)

\"zh\" (Chinese)

If no code is specified, \"en\" is used as the default.

" + "documentation":"

The language code.

" }, "PageToken":{ "shape":"PageToken", @@ -1779,7 +2121,7 @@ "members":{ "AcceptLanguage":{ "shape":"AcceptLanguage", - "documentation":"

The language code to use for this operation. Supported language codes are as follows:

\"en\" (English)

\"jp\" (Japanese)

\"zh\" (Chinese)

If no code is specified, \"en\" is used as the default.

" + "documentation":"

The language code.

" }, "PortfolioId":{ "shape":"Id", @@ -1814,7 +2156,7 @@ "members":{ "AcceptLanguage":{ "shape":"AcceptLanguage", - "documentation":"

The language code to use for this operation. Supported language codes are as follows:

\"en\" (English)

\"jp\" (Japanese)

\"zh\" (Chinese)

If no code is specified, \"en\" is used as the default.

" + "documentation":"

The language code.

" }, "ProductId":{ "shape":"Id", @@ -1840,7 +2182,7 @@ "members":{ "AcceptLanguage":{ "shape":"AcceptLanguage", - "documentation":"

The language code to use for this operation. Supported language codes are as follows:

\"en\" (English)

\"jp\" (Japanese)

\"zh\" (Chinese)

If no code is specified, \"en\" is used as the default.

" + "documentation":"

The language code.

" }, "AccessLevelFilter":{ "shape":"AccessLevelFilter", @@ -1887,6 +2229,89 @@ }, "documentation":"

The search filter to limit results when listing request history records.

" }, + "ListResourcesForTagOptionInput":{ + "type":"structure", + "required":["TagOptionId"], + "members":{ + "TagOptionId":{ + "shape":"TagOptionId", + "documentation":"

Identifier of the TagOption.

" + }, + "ResourceType":{ + "shape":"ResourceType", + "documentation":"

Resource type.

" + }, + "PageSize":{ + "shape":"PageSize", + "documentation":"

The maximum number of items to return in the results. If more results exist than fit in the specified PageSize, the value of NextPageToken in the response is non-null.

" + }, + "PageToken":{ + "shape":"PageToken", + "documentation":"

The page token of the first page retrieved. If null, this retrieves the first page of size PageSize.

" + } + } + }, + "ListResourcesForTagOptionOutput":{ + "type":"structure", + "members":{ + "ResourceDetails":{ + "shape":"ResourceDetails", + "documentation":"

The resulting detailed resource information.

" + }, + "PageToken":{ + "shape":"PageToken", + "documentation":"

The page token of the first page retrieved. If null, this retrieves the first page of size PageSize.

" + } + } + }, + "ListTagOptionsFilters":{ + "type":"structure", + "members":{ + "Key":{ + "shape":"TagOptionKey", + "documentation":"

The ListTagOptionsFilters key.

" + }, + "Value":{ + "shape":"TagOptionValue", + "documentation":"

The ListTagOptionsFilters value.

" + }, + "Active":{ + "shape":"TagOptionActive", + "documentation":"

The ListTagOptionsFilters active state.

" + } + }, + "documentation":"

The ListTagOptions filters.

" + }, + "ListTagOptionsInput":{ + "type":"structure", + "members":{ + "Filters":{ + "shape":"ListTagOptionsFilters", + "documentation":"

The list of filters with which to limit search results. If no search filters are specified, the output is all TagOptions.

" + }, + "PageSize":{ + "shape":"PageSize", + "documentation":"

The maximum number of items to return in the results. If more results exist than fit in the specified PageSize, the value of NextPageToken in the response is non-null.

" + }, + "PageToken":{ + "shape":"PageToken", + "documentation":"

The page token of the first page retrieved. If null, this retrieves the first page of size PageSize.

" + } + } + }, + "ListTagOptionsOutput":{ + "type":"structure", + "members":{ + "TagOptionDetails":{ + "shape":"TagOptionDetails", + "documentation":"

The resulting detailed TagOption information.

" + }, + "PageToken":{ + "shape":"PageToken", + "documentation":"

The page token of the first page retrieved. If null, this retrieves the first page of size PageSize.

" + } + } + }, "NoEcho":{"type":"boolean"}, "NotificationArn":{ "type":"string", @@ -2001,6 +2426,12 @@ "type":"list", "member":{"shape":"Principal"} }, + "ProductArn":{ + "type":"string", + "max":1224, + "min":1, + "pattern":"arn:[a-z0-9-\\.]{1,63}:[a-z0-9-\\.]{0,63}:[a-z0-9-\\.]{0,63}:[a-z0-9-\\.]{0,63}:[^/].{0,1023}" + }, "ProductSource":{ "type":"string", "enum":["ACCOUNT"] @@ -2163,7 +2594,7 @@ "members":{ "AcceptLanguage":{ "shape":"AcceptLanguage", - "documentation":"

The language code to use for this operation. Supported language codes are as follows:

\"en\" (English)

\"jp\" (Japanese)

\"zh\" (Chinese)

If no code is specified, \"en\" is used as the default.

" + "documentation":"

The language code.

" }, "ProductId":{ "shape":"Id", @@ -2242,7 +2673,7 @@ }, "IdempotencyToken":{ "shape":"IdempotencyToken", - "documentation":"

A token to disambiguate duplicate requests. You can create multiple resources using the same input in multiple requests, provided that you also specify a different idempotency token for each request.

" + "documentation":"

A token to disambiguate duplicate requests. You can use the same input in multiple requests, provided that you also specify a different idempotency token for each request.

" }, "LastRecordId":{ "shape":"LastRequestId", @@ -2400,6 +2831,11 @@ }, "documentation":"

Provisioning artifact properties. For example request JSON, see CreateProvisioningArtifact.

" }, + "ProvisioningArtifactPropertyName":{ + "type":"string", + "enum":["Id"] + }, + "ProvisioningArtifactPropertyValue":{"type":"string"}, "ProvisioningArtifactSummaries":{ "type":"list", "member":{"shape":"ProvisioningArtifactSummary"} @@ -2409,15 +2845,15 @@ "members":{ "Id":{ "shape":"Id", - "documentation":"

The provisioning artifact identifier.

" + "documentation":"

The identifier of the provisioning artifact.

" }, "Name":{ "shape":"ProvisioningArtifactName", - "documentation":"

The provisioning artifact name.

" + "documentation":"

The name of the provisioning artifact.

" }, "Description":{ "shape":"ProvisioningArtifactDescription", - "documentation":"

The provisioning artifact description.

" + "documentation":"

The description of the provisioning artifact.

" }, "CreatedTime":{ "shape":"ProvisioningArtifactCreatedTime", @@ -2428,7 +2864,7 @@ "documentation":"

The provisioning artifact metadata. This data is used with products created by AWS Marketplace.

" } }, - "documentation":"

Summary information about a provisioning artifact.

" + "documentation":"

Stores summary information about a provisioning artifact.

" }, "ProvisioningArtifactType":{ "type":"string", @@ -2473,7 +2909,7 @@ }, "Status":{ "shape":"RecordStatus", - "documentation":"

The status of the ProvisionedProduct object.

CREATED - Request created but the operation has not yet started.

IN_PROGRESS - The requested operation is in-progress.

IN_PROGRESS_IN_ERROR - The provisioned product is under change but the requested operation failed and some remediation is occurring. For example, a roll-back.

SUCCEEDED - The requested operation has successfully completed.

FAILED - The requested operation has completed but has failed. Investigate using the error messages returned.

" + "documentation":"

The status of the ProvisionedProduct object.

CREATED - Request created but the operation has not yet started.

IN_PROGRESS - The requested operation is in-progress.

IN_PROGRESS_IN_ERROR - The provisioned product is under change but the requested operation failed and some remediation is occurring. For example, a rollback.

SUCCEEDED - The requested operation has successfully completed.

FAILED - The requested operation has completed but has failed. Investigate using the error messages returned.

" }, "CreatedTime":{ "shape":"CreatedTime", @@ -2610,7 +3046,7 @@ "members":{ "AcceptLanguage":{ "shape":"AcceptLanguage", - "documentation":"

The language code to use for this operation. Supported language codes are as follows:

\"en\" (English)

\"jp\" (Japanese)

\"zh\" (Chinese)

If no code is specified, \"en\" is used as the default.

" + "documentation":"

The language code.

" }, "PortfolioId":{ "shape":"Id", @@ -2628,6 +3064,42 @@ "max":150, "min":1 }, + "ResourceDetail":{ + "type":"structure", + "members":{ + "Id":{ + "shape":"ResourceDetailId", + "documentation":"

Identifier of the resource.

" + }, + "ARN":{ + "shape":"ResourceDetailARN", + "documentation":"

ARN of the resource.

" + }, + "Name":{ + "shape":"ResourceDetailName", + "documentation":"

Name of the resource.

" + }, + "Description":{ + "shape":"ResourceDetailDescription", + "documentation":"

Description of the resource.

" + }, + "CreatedTime":{ + "shape":"ResourceDetailCreatedTime", + "documentation":"

Creation time of the resource.

" + } + }, + "documentation":"

Detailed resource information.

" + }, + "ResourceDetailARN":{"type":"string"}, + "ResourceDetailCreatedTime":{"type":"timestamp"}, + "ResourceDetailDescription":{"type":"string"}, + "ResourceDetailId":{"type":"string"}, + "ResourceDetailName":{"type":"string"}, + "ResourceDetails":{ + "type":"list", + "member":{"shape":"ResourceDetail"} + }, + "ResourceId":{"type":"string"}, "ResourceInUseException":{ "type":"structure", "members":{ @@ -2642,12 +3114,13 @@ "documentation":"

The specified resource was not found.

", "exception":true }, + "ResourceType":{"type":"string"}, "ScanProvisionedProductsInput":{ "type":"structure", "members":{ "AcceptLanguage":{ "shape":"AcceptLanguage", - "documentation":"

The language code to use for this operation. Supported language codes are as follows:

\"en\" (English)

\"jp\" (Japanese)

\"zh\" (Chinese)

If no code is specified, \"en\" is used as the default.

" + "documentation":"

The language code.

" }, "AccessLevelFilter":{ "shape":"AccessLevelFilter", @@ -2683,7 +3156,7 @@ "members":{ "AcceptLanguage":{ "shape":"AcceptLanguage", - "documentation":"

The language code to use for this operation. Supported language codes are as follows:

\"en\" (English)

\"jp\" (Japanese)

\"zh\" (Chinese)

If no code is specified, \"en\" is used as the default.

" + "documentation":"

The language code.

" }, "PortfolioId":{ "shape":"Id", @@ -2733,7 +3206,7 @@ "members":{ "AcceptLanguage":{ "shape":"AcceptLanguage", - "documentation":"

The language code to use for this operation. Supported language codes are as follows:

\"en\" (English)

\"jp\" (Japanese)

\"zh\" (Chinese)

If no code is specified, \"en\" is used as the default.

" + "documentation":"

The language code.

" }, "Filters":{ "shape":"ProductViewFilters", @@ -2781,6 +3254,15 @@ "DESCENDING" ] }, + "SourceProvisioningArtifactProperties":{ + "type":"list", + "member":{"shape":"SourceProvisioningArtifactPropertiesMap"} + }, + "SourceProvisioningArtifactPropertiesMap":{ + "type":"map", + "key":{"shape":"ProvisioningArtifactPropertyName"}, + "value":{"shape":"ProvisioningArtifactPropertyValue"} + }, "Status":{ "type":"string", "enum":[ @@ -2789,6 +3271,7 @@ "FAILED" ] }, + "StatusDetail":{"type":"string"}, "SupportDescription":{"type":"string"}, "SupportEmail":{"type":"string"}, "SupportUrl":{"type":"string"}, @@ -2808,7 +3291,7 @@ "documentation":"

The desired value for this key.

" } }, - "documentation":"

Key/value pairs to associate with this provisioning. These tags are entirely discretionary and are propagated to the resources created in the provisioning.

" + "documentation":"

Key-value pairs to associate with this provisioning. These tags are entirely discretionary and are propagated to the resources created in the provisioning.

" }, "TagKey":{ "type":"string", @@ -2820,6 +3303,79 @@ "type":"list", "member":{"shape":"TagKey"} }, + "TagOptionActive":{"type":"boolean"}, + "TagOptionDetail":{ + "type":"structure", + "members":{ + "Key":{ + "shape":"TagOptionKey", + "documentation":"

The TagOptionDetail key.

" + }, + "Value":{ + "shape":"TagOptionValue", + "documentation":"

The TagOptionDetail value.

" + }, + "Active":{ + "shape":"TagOptionActive", + "documentation":"

The TagOptionDetail active state.

" + }, + "Id":{ + "shape":"TagOptionId", + "documentation":"

The TagOptionDetail identifier.

" + } + }, + "documentation":"

The TagOption details.

" + }, + "TagOptionDetails":{ + "type":"list", + "member":{"shape":"TagOptionDetail"} + }, + "TagOptionId":{ + "type":"string", + "max":100, + "min":1 + }, + "TagOptionKey":{ + "type":"string", + "max":128, + "min":1, + "pattern":"^([\\p{L}\\p{Z}\\p{N}_.:/=+\\-@]*)$" + }, + "TagOptionNotMigratedException":{ + "type":"structure", + "members":{ + }, + "documentation":"

An operation requiring TagOptions failed because the TagOptions migration process has not been performed for this account. Please use the AWS console to perform the migration process before retrying the operation.

", + "exception":true + }, + "TagOptionSummaries":{ + "type":"list", + "member":{"shape":"TagOptionSummary"} + }, + "TagOptionSummary":{ + "type":"structure", + "members":{ + "Key":{ + "shape":"TagOptionKey", + "documentation":"

The TagOptionSummary key.

" + }, + "Values":{ + "shape":"TagOptionValues", + "documentation":"

The TagOptionSummary values.

" + } + }, + "documentation":"

The TagOption summary key-value pair.

" + }, + "TagOptionValue":{ + "type":"string", + "max":256, + "min":1, + "pattern":"^([\\p{L}\\p{Z}\\p{N}_.:/=+\\-@]*)$" + }, + "TagOptionValues":{ + "type":"list", + "member":{"shape":"TagOptionValue"} + }, "TagValue":{ "type":"string", "max":256, @@ -2854,7 +3410,7 @@ }, "AcceptLanguage":{ "shape":"AcceptLanguage", - "documentation":"

The language code to use for this operation. Supported language codes are as follows:

\"en\" (English)

\"jp\" (Japanese)

\"zh\" (Chinese)

If no code is specified, \"en\" is used as the default.

" + "documentation":"

The language code.

" } } }, @@ -2873,7 +3429,7 @@ "members":{ "AcceptLanguage":{ "shape":"AcceptLanguage", - "documentation":"

The language code to use for this operation. Supported language codes are as follows:

\"en\" (English)

\"jp\" (Japanese)

\"zh\" (Chinese)

If no code is specified, \"en\" is used as the default.

" + "documentation":"

The language code.

" }, "Id":{ "shape":"Id", @@ -2908,7 +3464,7 @@ "members":{ "AcceptLanguage":{ "shape":"AcceptLanguage", - "documentation":"

The language code to use for this operation. Supported language codes are as follows:

\"en\" (English)

\"jp\" (Japanese)

\"zh\" (Chinese)

If no code is specified, \"en\" is used as the default.

" + "documentation":"

The language code.

" }, "Id":{ "shape":"Id", @@ -2955,7 +3511,7 @@ "members":{ "AcceptLanguage":{ "shape":"AcceptLanguage", - "documentation":"

The language code to use for this operation. Supported language codes are as follows:

\"en\" (English)

\"jp\" (Japanese)

\"zh\" (Chinese)

If no code is specified, \"en\" is used as the default.

" + "documentation":"

The language code.

" }, "Id":{ "shape":"Id", @@ -3018,7 +3574,7 @@ "members":{ "AcceptLanguage":{ "shape":"AcceptLanguage", - "documentation":"

The language code to use for this operation. Supported language codes are as follows:

\"en\" (English)

\"jp\" (Japanese)

\"zh\" (Chinese)

If no code is specified, \"en\" is used as the default.

" + "documentation":"

The language code.

" }, "ProvisionedProductName":{ "shape":"ProvisionedProductNameOrArn", @@ -3069,7 +3625,7 @@ "members":{ "AcceptLanguage":{ "shape":"AcceptLanguage", - "documentation":"

The language code to use for this operation. Supported language codes are as follows:

\"en\" (English)

\"jp\" (Japanese)

\"zh\" (Chinese)

If no code is specified, \"en\" is used as the default.

" + "documentation":"

The language code.

" }, "ProductId":{ "shape":"Id", @@ -3128,6 +3684,33 @@ "type":"list", "member":{"shape":"UpdateProvisioningParameter"} }, + "UpdateTagOptionInput":{ + "type":"structure", + "required":["Id"], + "members":{ + "Id":{ + "shape":"TagOptionId", + "documentation":"

The identifier of the TagOption to update.

" + }, + "Value":{ + "shape":"TagOptionValue", + "documentation":"

The updated value.

" + }, + "Active":{ + "shape":"TagOptionActive", + "documentation":"

The updated active state.

" + } + } + }, + "UpdateTagOptionOutput":{ + "type":"structure", + "members":{ + "TagOptionDetail":{ + "shape":"TagOptionDetail", + "documentation":"

The resulting detailed TagOption information.

" + } + } + }, "UpdatedTime":{"type":"timestamp"}, "UsageInstruction":{ "type":"structure", diff --git a/services/ses/src/main/resources/codegen-resources/customization.config b/services/ses/src/main/resources/codegen-resources/customization.config index 20c870193cf0..bf811eb1f2ec 100644 --- a/services/ses/src/main/resources/codegen-resources/customization.config +++ b/services/ses/src/main/resources/codegen-resources/customization.config @@ -57,5 +57,6 @@ ] } }, - "verifiedSimpleMethods" : ["setActiveReceiptRuleSet"] + "verifiedSimpleMethods" : ["setActiveReceiptRuleSet"], + "blacklistedSimpleMethods" : ["updateAccountSendingEnabled"] } \ No newline at end of file diff --git a/services/ses/src/main/resources/codegen-resources/examples-1.json b/services/ses/src/main/resources/codegen-resources/examples-1.json index 88555294c715..e56903308d11 100644 --- a/services/ses/src/main/resources/codegen-resources/examples-1.json +++ b/services/ses/src/main/resources/codegen-resources/examples-1.json @@ -293,6 +293,22 @@ "title": "DescribeReceiptRuleSet" } ], + "GetAccountSendingEnabled": [ + { + "output": { + "Enabled": true + }, + "comments": { + "input": { + }, + "output": { + } + }, + "description": "The following example returns if sending status for an account is enabled. (true / false):", + "id": "getaccountsendingenabled-1469047741333", + "title": "GetAccountSendingEnabled" + } + ], "GetIdentityDkimAttributes": [ { "input": { @@ -367,6 +383,8 @@ "NotificationAttributes": { "example.com": { "BounceTopic": "arn:aws:sns:us-east-1:EXAMPLE65304:ExampleTopic", + "ComplaintTopic": "arn:aws:sns:us-east-1:EXAMPLE65304:ExampleTopic", + "DeliveryTopic": "arn:aws:sns:us-east-1:EXAMPLE65304:ExampleTopic", "ForwardingEnabled": true, "HeadersInBounceNotificationsEnabled": false, "HeadersInComplaintNotificationsEnabled": false, @@ -845,6 +863,56 @@ "title": "SetReceiptRulePosition" } ], + "UpdateAccountSendingEnabled": [ + { + "input": { + "Enabled": true + }, + "comments": { + "input": { + }, + "output": { + } + }, + "description": "The following example updated the sending status for this account.", + "id": "updateaccountsendingenabled-1469047741333", + "title": "UpdateAccountSendingEnabled" + } + ], + "UpdateConfigurationSetReputationMetricsEnabled": [ + { + "input": { + "ConfigurationSetName": "foo", + "Enabled": true + }, + "comments": { + "input": { + }, + "output": { + } + }, + "description": "Set the reputationMetricsEnabled flag for a specific configuration set.", + "id": "updateconfigurationsetreputationmetricsenabled-2362747741333", + "title": "UpdateConfigurationSetReputationMetricsEnabled" + } + ], + "UpdateConfigurationSetSendingEnabled": [ + { + "input": { + "ConfigurationSetName": "foo", + "Enabled": true + }, + "comments": { + "input": { + }, + "output": { + } + }, + "description": "Set the sending enabled flag for a specific configuration set.", + "id": "updateconfigurationsetsendingenabled-2362747741333", + "title": "UpdateConfigurationSetReputationMetricsEnabled" + } + ], "UpdateReceiptRule": [ { "input": { diff --git a/services/ses/src/main/resources/codegen-resources/service-2.json b/services/ses/src/main/resources/codegen-resources/service-2.json index a9ec298fc733..405782672bca 100644 --- a/services/ses/src/main/resources/codegen-resources/service-2.json +++ b/services/ses/src/main/resources/codegen-resources/service-2.json @@ -1,14 +1,15 @@ { "version":"2.0", "metadata":{ - "uid":"email-2010-12-01", "apiVersion":"2010-12-01", "endpointPrefix":"email", "protocol":"query", 
"serviceAbbreviation":"Amazon SES", "serviceFullName":"Amazon Simple Email Service", + "serviceId":"SES", "signatureVersion":"v4", "signingName":"ses", + "uid":"email-2010-12-01", "xmlNamespace":"http://ses.amazonaws.com/doc/2010-12-01/" }, "operations":{ @@ -28,7 +29,7 @@ {"shape":"AlreadyExistsException"}, {"shape":"LimitExceededException"} ], - "documentation":"

Creates a receipt rule set by cloning an existing one. All receipt rules and configurations are copied to the new receipt rule set and are completely independent of the source rule set.

For information about setting up rule sets, see the Amazon SES Developer Guide.

This action is throttled at one request per second.

" + "documentation":"

Creates a receipt rule set by cloning an existing one. All receipt rules and configurations are copied to the new receipt rule set and are completely independent of the source rule set.

For information about setting up rule sets, see the Amazon SES Developer Guide.

You can execute this operation no more than once per second.

" }, "CreateConfigurationSet":{ "name":"CreateConfigurationSet", @@ -46,7 +47,7 @@ {"shape":"InvalidConfigurationSetException"}, {"shape":"LimitExceededException"} ], - "documentation":"

Creates a configuration set.

Configuration sets enable you to publish email sending events. For information about using configuration sets, see the Amazon SES Developer Guide.

This action is throttled at one request per second.

" + "documentation":"

Creates a configuration set.

Configuration sets enable you to publish email sending events. For information about using configuration sets, see the Amazon SES Developer Guide.

You can execute this operation no more than once per second.

" }, "CreateConfigurationSetEventDestination":{ "name":"CreateConfigurationSetEventDestination", @@ -64,9 +65,28 @@ {"shape":"EventDestinationAlreadyExistsException"}, {"shape":"InvalidCloudWatchDestinationException"}, {"shape":"InvalidFirehoseDestinationException"}, + {"shape":"InvalidSNSDestinationException"}, {"shape":"LimitExceededException"} ], - "documentation":"

Creates a configuration set event destination.

When you create or update an event destination, you must provide one, and only one, destination. The destination can be either Amazon CloudWatch or Amazon Kinesis Firehose.

An event destination is the AWS service to which Amazon SES publishes the email sending events associated with a configuration set. For information about using configuration sets, see the Amazon SES Developer Guide.

This action is throttled at one request per second.

" + "documentation":"

Creates a configuration set event destination.

When you create or update an event destination, you must provide one, and only one, destination. The destination can be Amazon CloudWatch, Amazon Kinesis Firehose, or Amazon Simple Notification Service (Amazon SNS).

An event destination is the AWS service to which Amazon SES publishes the email sending events associated with a configuration set. For information about using configuration sets, see the Amazon SES Developer Guide.

You can execute this operation no more than once per second.

" + }, + "CreateConfigurationSetTrackingOptions":{ + "name":"CreateConfigurationSetTrackingOptions", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"CreateConfigurationSetTrackingOptionsRequest"}, + "output":{ + "shape":"CreateConfigurationSetTrackingOptionsResponse", + "resultWrapper":"CreateConfigurationSetTrackingOptionsResult" + }, + "errors":[ + {"shape":"ConfigurationSetDoesNotExistException"}, + {"shape":"TrackingOptionsAlreadyExistsException"}, + {"shape":"InvalidTrackingOptionsException"} + ], + "documentation":"

Creates an association between a configuration set and a custom domain for open and click event tracking.

By default, images and links used for tracking open and click events are hosted on domains operated by Amazon SES. You can configure a subdomain of your own to handle these events. For information about using configuration sets, see Configuring Custom Domains to Handle Open and Click Tracking in the Amazon SES Developer Guide.
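
As a rough sketch of how this new operation surfaces once these model changes are code-generated into the AWS SDK for Java 2.x client; the configuration set name my-config-set and the subdomain click.example.com are placeholder assumptions:

import software.amazon.awssdk.services.ses.SesClient;
import software.amazon.awssdk.services.ses.model.CreateConfigurationSetTrackingOptionsRequest;
import software.amazon.awssdk.services.ses.model.TrackingOptions;

public class TrackingOptionsExample {
    public static void main(String[] args) {
        try (SesClient ses = SesClient.create()) {
            // Associate an existing configuration set with a custom subdomain that
            // will serve the open/click tracking images and redirect links.
            ses.createConfigurationSetTrackingOptions(
                CreateConfigurationSetTrackingOptionsRequest.builder()
                    .configurationSetName("my-config-set")          // placeholder
                    .trackingOptions(TrackingOptions.builder()
                        .customRedirectDomain("click.example.com")  // placeholder subdomain
                        .build())
                    .build());
        }
    }
}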

" }, "CreateReceiptFilter":{ "name":"CreateReceiptFilter", @@ -83,7 +103,7 @@ {"shape":"LimitExceededException"}, {"shape":"AlreadyExistsException"} ], - "documentation":"

Creates a new IP address filter.

For information about setting up IP address filters, see the Amazon SES Developer Guide.

This action is throttled at one request per second.

" + "documentation":"

Creates a new IP address filter.

For information about setting up IP address filters, see the Amazon SES Developer Guide.

You can execute this operation no more than once per second.

" }, "CreateReceiptRule":{ "name":"CreateReceiptRule", @@ -105,7 +125,7 @@ {"shape":"RuleSetDoesNotExistException"}, {"shape":"LimitExceededException"} ], - "documentation":"

Creates a receipt rule.

For information about setting up receipt rules, see the Amazon SES Developer Guide.

This action is throttled at one request per second.

" + "documentation":"

Creates a receipt rule.

For information about setting up receipt rules, see the Amazon SES Developer Guide.

You can execute this operation no more than once per second.

" }, "CreateReceiptRuleSet":{ "name":"CreateReceiptRuleSet", @@ -122,7 +142,25 @@ {"shape":"AlreadyExistsException"}, {"shape":"LimitExceededException"} ], - "documentation":"

Creates an empty receipt rule set.

For information about setting up receipt rule sets, see the Amazon SES Developer Guide.

This action is throttled at one request per second.

" + "documentation":"

Creates an empty receipt rule set.

For information about setting up receipt rule sets, see the Amazon SES Developer Guide.

You can execute this operation no more than once per second.

" + }, + "CreateTemplate":{ + "name":"CreateTemplate", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"CreateTemplateRequest"}, + "output":{ + "shape":"CreateTemplateResponse", + "resultWrapper":"CreateTemplateResult" + }, + "errors":[ + {"shape":"AlreadyExistsException"}, + {"shape":"InvalidTemplateException"}, + {"shape":"LimitExceededException"} + ], + "documentation":"

Creates an email template. Email templates enable you to send personalized email to one or more destinations in a single API operation. For more information, see the Amazon SES Developer Guide.

You can execute this operation no more than once per second.
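
A minimal sketch of creating a template with the generated AWS SDK for Java 2.x client; the template name MyTemplate and the {{name}} replacement tag are illustrative values, not part of this model:

import software.amazon.awssdk.services.ses.SesClient;
import software.amazon.awssdk.services.ses.model.CreateTemplateRequest;
import software.amazon.awssdk.services.ses.model.Template;

public class CreateTemplateExample {
    public static void main(String[] args) {
        try (SesClient ses = SesClient.create()) {
            // Register a template whose subject, HTML, and text parts share a {{name}} tag.
            ses.createTemplate(CreateTemplateRequest.builder()
                .template(Template.builder()
                    .templateName("MyTemplate")
                    .subjectPart("Greetings, {{name}}!")
                    .htmlPart("<h1>Hello {{name}}</h1><p>Thanks for signing up.</p>")
                    .textPart("Hello {{name}}, thanks for signing up.")
                    .build())
                .build());
        }
    }
}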

" }, "DeleteConfigurationSet":{ "name":"DeleteConfigurationSet", @@ -138,7 +176,7 @@ "errors":[ {"shape":"ConfigurationSetDoesNotExistException"} ], - "documentation":"

Deletes a configuration set.

Configuration sets enable you to publish email sending events. For information about using configuration sets, see the Amazon SES Developer Guide.

This action is throttled at one request per second.

" + "documentation":"

Deletes a configuration set. Configuration sets enable you to publish email sending events. For information about using configuration sets, see the Amazon SES Developer Guide.

You can execute this operation no more than once per second.

" }, "DeleteConfigurationSetEventDestination":{ "name":"DeleteConfigurationSetEventDestination", @@ -155,7 +193,24 @@ {"shape":"ConfigurationSetDoesNotExistException"}, {"shape":"EventDestinationDoesNotExistException"} ], - "documentation":"

Deletes a configuration set event destination.

Configuration set event destinations are associated with configuration sets, which enable you to publish email sending events. For information about using configuration sets, see the Amazon SES Developer Guide.

This action is throttled at one request per second.

" + "documentation":"

Deletes a configuration set event destination. Configuration set event destinations are associated with configuration sets, which enable you to publish email sending events. For information about using configuration sets, see the Amazon SES Developer Guide.

You can execute this operation no more than once per second.

" + }, + "DeleteConfigurationSetTrackingOptions":{ + "name":"DeleteConfigurationSetTrackingOptions", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"DeleteConfigurationSetTrackingOptionsRequest"}, + "output":{ + "shape":"DeleteConfigurationSetTrackingOptionsResponse", + "resultWrapper":"DeleteConfigurationSetTrackingOptionsResult" + }, + "errors":[ + {"shape":"ConfigurationSetDoesNotExistException"}, + {"shape":"TrackingOptionsDoesNotExistException"} + ], + "documentation":"

Deletes an association between a configuration set and a custom domain for open and click event tracking.

By default, images and links used for tracking open and click events are hosted on domains operated by Amazon SES. You can configure a subdomain of your own to handle these events. For information about using configuration sets, see Configuring Custom Domains to Handle Open and Click Tracking in the Amazon SES Developer Guide.

If you delete this kind of association, emails sent using the specified configuration set will capture open and click events using the standard, Amazon SES-operated domains.

" }, "DeleteIdentity":{ "name":"DeleteIdentity", @@ -168,7 +223,7 @@ "shape":"DeleteIdentityResponse", "resultWrapper":"DeleteIdentityResult" }, - "documentation":"

Deletes the specified identity (an email address or a domain) from the list of verified identities.

This action is throttled at one request per second.

" + "documentation":"

Deletes the specified identity (an email address or a domain) from the list of verified identities.

You can execute this operation no more than once per second.

" }, "DeleteIdentityPolicy":{ "name":"DeleteIdentityPolicy", @@ -181,7 +236,7 @@ "shape":"DeleteIdentityPolicyResponse", "resultWrapper":"DeleteIdentityPolicyResult" }, - "documentation":"

Deletes the specified sending authorization policy for the given identity (an email address or a domain). This API returns successfully even if a policy with the specified name does not exist.

This API is for the identity owner only. If you have not verified the identity, this API will return an error.

Sending authorization is a feature that enables an identity owner to authorize other senders to use its identities. For information about using sending authorization, see the Amazon SES Developer Guide.

This action is throttled at one request per second.

" + "documentation":"

Deletes the specified sending authorization policy for the given identity (an email address or a domain). This API returns successfully even if a policy with the specified name does not exist.

This API is for the identity owner only. If you have not verified the identity, this API will return an error.

Sending authorization is a feature that enables an identity owner to authorize other senders to use its identities. For information about using sending authorization, see the Amazon SES Developer Guide.

You can execute this operation no more than once per second.

" }, "DeleteReceiptFilter":{ "name":"DeleteReceiptFilter", @@ -194,7 +249,7 @@ "shape":"DeleteReceiptFilterResponse", "resultWrapper":"DeleteReceiptFilterResult" }, - "documentation":"

Deletes the specified IP address filter.

For information about managing IP address filters, see the Amazon SES Developer Guide.

This action is throttled at one request per second.

" + "documentation":"

Deletes the specified IP address filter.

For information about managing IP address filters, see the Amazon SES Developer Guide.

You can execute this operation no more than once per second.

" }, "DeleteReceiptRule":{ "name":"DeleteReceiptRule", @@ -210,7 +265,7 @@ "errors":[ {"shape":"RuleSetDoesNotExistException"} ], - "documentation":"

Deletes the specified receipt rule.

For information about managing receipt rules, see the Amazon SES Developer Guide.

This action is throttled at one request per second.

" + "documentation":"

Deletes the specified receipt rule.

For information about managing receipt rules, see the Amazon SES Developer Guide.

You can execute this operation no more than once per second.

" }, "DeleteReceiptRuleSet":{ "name":"DeleteReceiptRuleSet", @@ -226,7 +281,20 @@ "errors":[ {"shape":"CannotDeleteException"} ], - "documentation":"

Deletes the specified receipt rule set and all of the receipt rules it contains.

The currently active rule set cannot be deleted.

For information about managing receipt rule sets, see the Amazon SES Developer Guide.

This action is throttled at one request per second.

" + "documentation":"

Deletes the specified receipt rule set and all of the receipt rules it contains.

The currently active rule set cannot be deleted.

For information about managing receipt rule sets, see the Amazon SES Developer Guide.

You can execute this operation no more than once per second.

" + }, + "DeleteTemplate":{ + "name":"DeleteTemplate", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"DeleteTemplateRequest"}, + "output":{ + "shape":"DeleteTemplateResponse", + "resultWrapper":"DeleteTemplateResult" + }, + "documentation":"

Deletes an email template.

You can execute this operation no more than once per second.

" }, "DeleteVerifiedEmailAddress":{ "name":"DeleteVerifiedEmailAddress", @@ -235,7 +303,7 @@ "requestUri":"/" }, "input":{"shape":"DeleteVerifiedEmailAddressRequest"}, - "documentation":"

Deletes the specified email address from the list of verified addresses.

The DeleteVerifiedEmailAddress action is deprecated as of the May 15, 2012 release of Domain Verification. The DeleteIdentity action is now preferred.

This action is throttled at one request per second.

" + "documentation":"

Deprecated. Use the DeleteIdentity operation to delete email addresses and domains.

" }, "DescribeActiveReceiptRuleSet":{ "name":"DescribeActiveReceiptRuleSet", @@ -248,7 +316,7 @@ "shape":"DescribeActiveReceiptRuleSetResponse", "resultWrapper":"DescribeActiveReceiptRuleSetResult" }, - "documentation":"

Returns the metadata and receipt rules for the receipt rule set that is currently active.

For information about setting up receipt rule sets, see the Amazon SES Developer Guide.

This action is throttled at one request per second.

" + "documentation":"

Returns the metadata and receipt rules for the receipt rule set that is currently active.

For information about setting up receipt rule sets, see the Amazon SES Developer Guide.

You can execute this operation no more than once per second.

" }, "DescribeConfigurationSet":{ "name":"DescribeConfigurationSet", @@ -264,7 +332,7 @@ "errors":[ {"shape":"ConfigurationSetDoesNotExistException"} ], - "documentation":"

Returns the details of the specified configuration set.

Configuration sets enable you to publish email sending events. For information about using configuration sets, see the Amazon SES Developer Guide.

This action is throttled at one request per second.

" + "documentation":"

Returns the details of the specified configuration set. For information about using configuration sets, see the Amazon SES Developer Guide.

You can execute this operation no more than once per second.

" }, "DescribeReceiptRule":{ "name":"DescribeReceiptRule", @@ -281,7 +349,7 @@ {"shape":"RuleDoesNotExistException"}, {"shape":"RuleSetDoesNotExistException"} ], - "documentation":"

Returns the details of the specified receipt rule.

For information about setting up receipt rules, see the Amazon SES Developer Guide.

This action is throttled at one request per second.

" + "documentation":"

Returns the details of the specified receipt rule.

For information about setting up receipt rules, see the Amazon SES Developer Guide.

You can execute this operation no more than once per second.

" }, "DescribeReceiptRuleSet":{ "name":"DescribeReceiptRuleSet", @@ -297,7 +365,19 @@ "errors":[ {"shape":"RuleSetDoesNotExistException"} ], - "documentation":"

Returns the details of the specified receipt rule set.

For information about managing receipt rule sets, see the Amazon SES Developer Guide.

This action is throttled at one request per second.

" + "documentation":"

Returns the details of the specified receipt rule set.

For information about managing receipt rule sets, see the Amazon SES Developer Guide.

You can execute this operation no more than once per second.

" + }, + "GetAccountSendingEnabled":{ + "name":"GetAccountSendingEnabled", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "output":{ + "shape":"GetAccountSendingEnabledResponse", + "resultWrapper":"GetAccountSendingEnabledResult" + }, + "documentation":"

Returns the email sending status of the Amazon SES account.

You can execute this operation no more than once per second.
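
A minimal sketch of reading this flag through the generated AWS SDK for Java 2.x client, assuming the codegen produces the usual GetAccountSendingEnabledRequest class for this input-less operation:

import software.amazon.awssdk.services.ses.SesClient;
import software.amazon.awssdk.services.ses.model.GetAccountSendingEnabledRequest;

public class GetAccountSendingEnabledExample {
    public static void main(String[] args) {
        try (SesClient ses = SesClient.create()) {
            // Returns true when account-level sending is enabled, false when it is paused.
            boolean enabled = ses.getAccountSendingEnabled(
                    GetAccountSendingEnabledRequest.builder().build())
                .enabled();
            System.out.println("Account-level sending enabled: " + enabled);
        }
    }
}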

" }, "GetIdentityDkimAttributes":{ "name":"GetIdentityDkimAttributes", @@ -310,7 +390,7 @@ "shape":"GetIdentityDkimAttributesResponse", "resultWrapper":"GetIdentityDkimAttributesResult" }, - "documentation":"

Returns the current status of Easy DKIM signing for an entity. For domain name identities, this action also returns the DKIM tokens that are required for Easy DKIM signing, and whether Amazon SES has successfully verified that these tokens have been published.

This action takes a list of identities as input and returns the following information for each:

This action is throttled at one request per second and can only get DKIM attributes for up to 100 identities at a time.

For more information about creating DNS records using DKIM tokens, go to the Amazon SES Developer Guide.

" + "documentation":"

Returns the current status of Easy DKIM signing for an entity. For domain name identities, this operation also returns the DKIM tokens that are required for Easy DKIM signing, and whether Amazon SES has successfully verified that these tokens have been published.

This operation takes a list of identities as input and returns the following information for each:

This operation is throttled at one request per second and can only get DKIM attributes for up to 100 identities at a time.

For more information about creating DNS records using DKIM tokens, go to the Amazon SES Developer Guide.

" }, "GetIdentityMailFromDomainAttributes":{ "name":"GetIdentityMailFromDomainAttributes", @@ -323,7 +403,7 @@ "shape":"GetIdentityMailFromDomainAttributesResponse", "resultWrapper":"GetIdentityMailFromDomainAttributesResult" }, - "documentation":"

Returns the custom MAIL FROM attributes for a list of identities (email addresses and/or domains).

This action is throttled at one request per second and can only get custom MAIL FROM attributes for up to 100 identities at a time.

" + "documentation":"

Returns the custom MAIL FROM attributes for a list of identities (email addresses and/or domains).

This operation is throttled at one request per second and can only get custom MAIL FROM attributes for up to 100 identities at a time.

" }, "GetIdentityNotificationAttributes":{ "name":"GetIdentityNotificationAttributes", @@ -336,7 +416,7 @@ "shape":"GetIdentityNotificationAttributesResponse", "resultWrapper":"GetIdentityNotificationAttributesResult" }, - "documentation":"

Given a list of verified identities (email addresses and/or domains), returns a structure describing identity notification attributes.

This action is throttled at one request per second and can only get notification attributes for up to 100 identities at a time.

For more information about using notifications with Amazon SES, see the Amazon SES Developer Guide.

" + "documentation":"

Given a list of verified identities (email addresses and/or domains), returns a structure describing identity notification attributes.

This operation is throttled at one request per second and can only get notification attributes for up to 100 identities at a time.

For more information about using notifications with Amazon SES, see the Amazon SES Developer Guide.

" }, "GetIdentityPolicies":{ "name":"GetIdentityPolicies", @@ -349,7 +429,7 @@ "shape":"GetIdentityPoliciesResponse", "resultWrapper":"GetIdentityPoliciesResult" }, - "documentation":"

Returns the requested sending authorization policies for the given identity (an email address or a domain). The policies are returned as a map of policy names to policy contents. You can retrieve a maximum of 20 policies at a time.

This API is for the identity owner only. If you have not verified the identity, this API will return an error.

Sending authorization is a feature that enables an identity owner to authorize other senders to use its identities. For information about using sending authorization, see the Amazon SES Developer Guide.

This action is throttled at one request per second.

" + "documentation":"

Returns the requested sending authorization policies for the given identity (an email address or a domain). The policies are returned as a map of policy names to policy contents. You can retrieve a maximum of 20 policies at a time.

This API is for the identity owner only. If you have not verified the identity, this API will return an error.

Sending authorization is a feature that enables an identity owner to authorize other senders to use its identities. For information about using sending authorization, see the Amazon SES Developer Guide.

You can execute this operation no more than once per second.

" }, "GetIdentityVerificationAttributes":{ "name":"GetIdentityVerificationAttributes", @@ -362,7 +442,7 @@ "shape":"GetIdentityVerificationAttributesResponse", "resultWrapper":"GetIdentityVerificationAttributesResult" }, - "documentation":"

Given a list of identities (email addresses and/or domains), returns the verification status and (for domain identities) the verification token for each identity.

This action is throttled at one request per second and can only get verification attributes for up to 100 identities at a time.

" + "documentation":"

Given a list of identities (email addresses and/or domains), returns the verification status and (for domain identities) the verification token for each identity.

The verification status of an email address is \"Pending\" until the email address owner clicks the link within the verification email that Amazon SES sent to that address. If the email address owner clicks the link within 24 hours, the verification status of the email address changes to \"Success\". If the link is not clicked within 24 hours, the verification status changes to \"Failed.\" In that case, if you still want to verify the email address, you must restart the verification process from the beginning.

For domain identities, the domain's verification status is \"Pending\" as Amazon SES searches for the required TXT record in the DNS settings of the domain. When Amazon SES detects the record, the domain's verification status changes to \"Success\". If Amazon SES is unable to detect the record within 72 hours, the domain's verification status changes to \"Failed.\" In that case, if you still want to verify the domain, you must restart the verification process from the beginning.

This operation is throttled at one request per second and can only get verification attributes for up to 100 identities at a time.
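
A brief sketch of checking these verification attributes with the generated AWS SDK for Java 2.x client; example.com and sender@example.com are placeholder identities:

import java.util.Map;
import software.amazon.awssdk.services.ses.SesClient;
import software.amazon.awssdk.services.ses.model.GetIdentityVerificationAttributesRequest;
import software.amazon.awssdk.services.ses.model.IdentityVerificationAttributes;

public class VerificationStatusExample {
    public static void main(String[] args) {
        try (SesClient ses = SesClient.create()) {
            Map<String, IdentityVerificationAttributes> attrs =
                ses.getIdentityVerificationAttributes(
                        GetIdentityVerificationAttributesRequest.builder()
                            .identities("example.com", "sender@example.com") // placeholders
                            .build())
                    .verificationAttributes();
            // Prints Pending, Success, or Failed for each identity.
            attrs.forEach((identity, a) ->
                System.out.println(identity + " -> " + a.verificationStatus()));
        }
    }
}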

" }, "GetSendQuota":{ "name":"GetSendQuota", @@ -374,7 +454,7 @@ "shape":"GetSendQuotaResponse", "resultWrapper":"GetSendQuotaResult" }, - "documentation":"

Returns the user's current sending limits.

This action is throttled at one request per second.

" + "documentation":"

Provides the sending limits for the Amazon SES account.

You can execute this operation no more than once per second.

" }, "GetSendStatistics":{ "name":"GetSendStatistics", @@ -386,7 +466,23 @@ "shape":"GetSendStatisticsResponse", "resultWrapper":"GetSendStatisticsResult" }, - "documentation":"

Returns the user's sending statistics. The result is a list of data points, representing the last two weeks of sending activity.

Each data point in the list contains statistics for a 15-minute interval.

This action is throttled at one request per second.

" + "documentation":"

Provides sending statistics for the Amazon SES account. The result is a list of data points, representing the last two weeks of sending activity. Each data point in the list contains statistics for a 15-minute period of time.

You can execute this operation no more than once per second.

" + }, + "GetTemplate":{ + "name":"GetTemplate", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"GetTemplateRequest"}, + "output":{ + "shape":"GetTemplateResponse", + "resultWrapper":"GetTemplateResult" + }, + "errors":[ + {"shape":"TemplateDoesNotExistException"} + ], + "documentation":"

Displays the template object (which includes the Subject line, HTML part and text part) for the template you specify.

You can execute this operation no more than once per second.

" }, "ListConfigurationSets":{ "name":"ListConfigurationSets", @@ -399,7 +495,7 @@ "shape":"ListConfigurationSetsResponse", "resultWrapper":"ListConfigurationSetsResult" }, - "documentation":"

Lists the configuration sets associated with your AWS account.

Configuration sets enable you to publish email sending events. For information about using configuration sets, see the Amazon SES Developer Guide.

This action is throttled at one request per second and can return up to 50 configuration sets at a time.

" + "documentation":"

Provides a list of the configuration sets associated with your Amazon SES account. For information about using configuration sets, see Monitoring Your Amazon SES Sending Activity in the Amazon SES Developer Guide.

You can execute this operation no more than once per second. This operation will return up to 1,000 configuration sets each time it is run. If your Amazon SES account has more than 1,000 configuration sets, this operation will also return a NextToken element. You can then execute the ListConfigurationSets operation again, passing the NextToken parameter and the value of the NextToken element to retrieve additional results.
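
A sketch of the NextToken pagination loop described above, using the generated AWS SDK for Java 2.x client (the MaxItems value of 50 is an arbitrary page size):

import software.amazon.awssdk.services.ses.SesClient;
import software.amazon.awssdk.services.ses.model.ListConfigurationSetsRequest;
import software.amazon.awssdk.services.ses.model.ListConfigurationSetsResponse;

public class ListConfigurationSetsExample {
    public static void main(String[] args) {
        try (SesClient ses = SesClient.create()) {
            String nextToken = null;
            do {
                // Request one page of configuration sets, passing the token from the previous page.
                ListConfigurationSetsResponse page = ses.listConfigurationSets(
                    ListConfigurationSetsRequest.builder()
                        .nextToken(nextToken)
                        .maxItems(50)
                        .build());
                page.configurationSets().forEach(cs -> System.out.println(cs.name()));
                nextToken = page.nextToken();
            } while (nextToken != null); // a null token means there are no further pages
        }
    }
}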

" }, "ListIdentities":{ "name":"ListIdentities", @@ -412,7 +508,7 @@ "shape":"ListIdentitiesResponse", "resultWrapper":"ListIdentitiesResult" }, - "documentation":"

Returns a list containing all of the identities (email addresses and domains) for your AWS account, regardless of verification status.

This action is throttled at one request per second.

" + "documentation":"

Returns a list containing all of the identities (email addresses and domains) for your AWS account, regardless of verification status.

You can execute this operation no more than once per second.

" }, "ListIdentityPolicies":{ "name":"ListIdentityPolicies", @@ -425,7 +521,7 @@ "shape":"ListIdentityPoliciesResponse", "resultWrapper":"ListIdentityPoliciesResult" }, - "documentation":"

Returns a list of sending authorization policies that are attached to the given identity (an email address or a domain). This API returns only a list. If you want the actual policy content, you can use GetIdentityPolicies.

This API is for the identity owner only. If you have not verified the identity, this API will return an error.

Sending authorization is a feature that enables an identity owner to authorize other senders to use its identities. For information about using sending authorization, see the Amazon SES Developer Guide.

This action is throttled at one request per second.

" + "documentation":"

Returns a list of sending authorization policies that are attached to the given identity (an email address or a domain). This API returns only a list. If you want the actual policy content, you can use GetIdentityPolicies.

This API is for the identity owner only. If you have not verified the identity, this API will return an error.

Sending authorization is a feature that enables an identity owner to authorize other senders to use its identities. For information about using sending authorization, see the Amazon SES Developer Guide.

You can execute this operation no more than once per second.

" }, "ListReceiptFilters":{ "name":"ListReceiptFilters", @@ -438,7 +534,7 @@ "shape":"ListReceiptFiltersResponse", "resultWrapper":"ListReceiptFiltersResult" }, - "documentation":"

Lists the IP address filters associated with your AWS account.

For information about managing IP address filters, see the Amazon SES Developer Guide.

This action is throttled at one request per second.

" + "documentation":"

Lists the IP address filters associated with your AWS account.

For information about managing IP address filters, see the Amazon SES Developer Guide.

You can execute this operation no more than once per second.

" }, "ListReceiptRuleSets":{ "name":"ListReceiptRuleSets", @@ -451,7 +547,20 @@ "shape":"ListReceiptRuleSetsResponse", "resultWrapper":"ListReceiptRuleSetsResult" }, - "documentation":"

Lists the receipt rule sets that exist under your AWS account. If there are additional receipt rule sets to be retrieved, you will receive a NextToken that you can provide to the next call to ListReceiptRuleSets to retrieve the additional entries.

For information about managing receipt rule sets, see the Amazon SES Developer Guide.

This action is throttled at one request per second.

" + "documentation":"

Lists the receipt rule sets that exist under your AWS account. If there are additional receipt rule sets to be retrieved, you will receive a NextToken that you can provide to the next call to ListReceiptRuleSets to retrieve the additional entries.

For information about managing receipt rule sets, see the Amazon SES Developer Guide.

You can execute this operation no more than once per second.

" + }, + "ListTemplates":{ + "name":"ListTemplates", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"ListTemplatesRequest"}, + "output":{ + "shape":"ListTemplatesResponse", + "resultWrapper":"ListTemplatesResult" + }, + "documentation":"

Lists the email templates present in your Amazon SES account.

You can execute this operation no more than once per second.

" }, "ListVerifiedEmailAddresses":{ "name":"ListVerifiedEmailAddresses", @@ -463,7 +572,7 @@ "shape":"ListVerifiedEmailAddressesResponse", "resultWrapper":"ListVerifiedEmailAddressesResult" }, - "documentation":"

Returns a list containing all of the email addresses that have been verified.

The ListVerifiedEmailAddresses action is deprecated as of the May 15, 2012 release of Domain Verification. The ListIdentities action is now preferred.

This action is throttled at one request per second.

" + "documentation":"

Deprecated. Use the ListIdentities operation to list the email addresses and domains associated with your account.

" }, "PutIdentityPolicy":{ "name":"PutIdentityPolicy", @@ -479,7 +588,7 @@ "errors":[ {"shape":"InvalidPolicyException"} ], - "documentation":"

Adds or updates a sending authorization policy for the specified identity (an email address or a domain).

This API is for the identity owner only. If you have not verified the identity, this API will return an error.

Sending authorization is a feature that enables an identity owner to authorize other senders to use its identities. For information about using sending authorization, see the Amazon SES Developer Guide.

This action is throttled at one request per second.

" + "documentation":"

Adds or updates a sending authorization policy for the specified identity (an email address or a domain).

This API is for the identity owner only. If you have not verified the identity, this API will return an error.

Sending authorization is a feature that enables an identity owner to authorize other senders to use its identities. For information about using sending authorization, see the Amazon SES Developer Guide.

You can execute this operation no more than once per second.

" }, "ReorderReceiptRuleSet":{ "name":"ReorderReceiptRuleSet", @@ -496,7 +605,7 @@ {"shape":"RuleSetDoesNotExistException"}, {"shape":"RuleDoesNotExistException"} ], - "documentation":"

Reorders the receipt rules within a receipt rule set.

All of the rules in the rule set must be represented in this request. That is, this API will return an error if the reorder request doesn't explicitly position all of the rules.

For information about managing receipt rule sets, see the Amazon SES Developer Guide.

This action is throttled at one request per second.

" + "documentation":"

Reorders the receipt rules within a receipt rule set.

All of the rules in the rule set must be represented in this request. That is, this API will return an error if the reorder request doesn't explicitly position all of the rules.

For information about managing receipt rule sets, see the Amazon SES Developer Guide.

You can execute this operation no more than once per second.

" }, "SendBounce":{ "name":"SendBounce", @@ -512,7 +621,28 @@ "errors":[ {"shape":"MessageRejected"} ], - "documentation":"

Generates and sends a bounce message to the sender of an email you received through Amazon SES. You can only use this API on an email up to 24 hours after you receive it.

You cannot use this API to send generic bounces for mail that was not received by Amazon SES.

For information about receiving email through Amazon SES, see the Amazon SES Developer Guide.

This action is throttled at one request per second.

" + "documentation":"

Generates and sends a bounce message to the sender of an email you received through Amazon SES. You can only use this API on an email up to 24 hours after you receive it.

You cannot use this API to send generic bounces for mail that was not received by Amazon SES.

For information about receiving email through Amazon SES, see the Amazon SES Developer Guide.

You can execute this operation no more than once per second.

" + }, + "SendBulkTemplatedEmail":{ + "name":"SendBulkTemplatedEmail", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"SendBulkTemplatedEmailRequest"}, + "output":{ + "shape":"SendBulkTemplatedEmailResponse", + "resultWrapper":"SendBulkTemplatedEmailResult" + }, + "errors":[ + {"shape":"MessageRejected"}, + {"shape":"MailFromDomainNotVerifiedException"}, + {"shape":"ConfigurationSetDoesNotExistException"}, + {"shape":"TemplateDoesNotExistException"}, + {"shape":"ConfigurationSetSendingPausedException"}, + {"shape":"AccountSendingPausedException"} + ], + "documentation":"

Composes an email message to multiple destinations. The message body is created using an email template.

In order to send email using the SendBulkTemplatedEmail operation, your call to the API must meet the following requirements:
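
As one illustration of such a call, assuming a verified sender identity and an existing template named MyTemplate (the addresses and template data below are placeholders), a sketch with the generated AWS SDK for Java 2.x client:

import software.amazon.awssdk.services.ses.SesClient;
import software.amazon.awssdk.services.ses.model.BulkEmailDestination;
import software.amazon.awssdk.services.ses.model.Destination;
import software.amazon.awssdk.services.ses.model.SendBulkTemplatedEmailRequest;
import software.amazon.awssdk.services.ses.model.SendBulkTemplatedEmailResponse;

public class SendBulkTemplatedEmailExample {
    public static void main(String[] args) {
        try (SesClient ses = SesClient.create()) {
            SendBulkTemplatedEmailResponse response = ses.sendBulkTemplatedEmail(
                SendBulkTemplatedEmailRequest.builder()
                    .source("sender@example.com")                 // must be a verified identity
                    .template("MyTemplate")                       // must already exist
                    .defaultTemplateData("{\"name\":\"friend\"}") // fallback replacement values
                    .destinations(
                        BulkEmailDestination.builder()
                            .destination(Destination.builder().toAddresses("alice@example.com").build())
                            .replacementTemplateData("{\"name\":\"Alice\"}")
                            .build(),
                        BulkEmailDestination.builder()
                            .destination(Destination.builder().toAddresses("bob@example.com").build())
                            .replacementTemplateData("{\"name\":\"Bob\"}")
                            .build())
                    .build());
            // One status entry is returned per destination.
            response.status().forEach(s -> System.out.println(s.status() + " " + s.messageId()));
        }
    }
}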

" }, "SendEmail":{ "name":"SendEmail", @@ -528,9 +658,11 @@ "errors":[ {"shape":"MessageRejected"}, {"shape":"MailFromDomainNotVerifiedException"}, - {"shape":"ConfigurationSetDoesNotExistException"} + {"shape":"ConfigurationSetDoesNotExistException"}, + {"shape":"ConfigurationSetSendingPausedException"}, + {"shape":"AccountSendingPausedException"} ], - "documentation":"

Composes an email message based on input data, and then immediately queues the message for sending.

There are several important points to know about SendEmail:

" + "documentation":"

Composes an email message and immediately queues it for sending. In order to send email using the SendEmail operation, your message must meet the following requirements:

For every message that you send, the total number of recipients (including each recipient in the To:, CC: and BCC: fields) is counted against the maximum number of emails you can send in a 24-hour period (your sending quota). For more information about sending quotas in Amazon SES, see Managing Your Amazon SES Sending Limits in the Amazon SES Developer Guide.
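
A minimal sketch of a SendEmail call with the generated AWS SDK for Java 2.x client; both addresses are placeholders, and the source must be a verified identity:

import software.amazon.awssdk.services.ses.SesClient;
import software.amazon.awssdk.services.ses.model.Body;
import software.amazon.awssdk.services.ses.model.Content;
import software.amazon.awssdk.services.ses.model.Destination;
import software.amazon.awssdk.services.ses.model.Message;
import software.amazon.awssdk.services.ses.model.SendEmailRequest;

public class SendEmailExample {
    public static void main(String[] args) {
        try (SesClient ses = SesClient.create()) {
            // Compose a simple plain-text message and queue it for sending.
            String messageId = ses.sendEmail(SendEmailRequest.builder()
                .source("sender@example.com")
                .destination(Destination.builder().toAddresses("recipient@example.com").build())
                .message(Message.builder()
                    .subject(Content.builder().data("Hello from Amazon SES").build())
                    .body(Body.builder()
                        .text(Content.builder().data("This message was sent with SendEmail.").build())
                        .build())
                    .build())
                .build())
                .messageId();
            System.out.println("Queued message " + messageId);
        }
    }
}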

" }, "SendRawEmail":{ "name":"SendRawEmail", @@ -546,9 +678,32 @@ "errors":[ {"shape":"MessageRejected"}, {"shape":"MailFromDomainNotVerifiedException"}, - {"shape":"ConfigurationSetDoesNotExistException"} + {"shape":"ConfigurationSetDoesNotExistException"}, + {"shape":"ConfigurationSetSendingPausedException"}, + {"shape":"AccountSendingPausedException"} + ], + "documentation":"

Composes an email message and immediately queues it for sending. When calling this operation, you may specify the message headers as well as the content. The SendRawEmail operation is particularly useful for sending multipart MIME emails (such as those that contain both a plain-text and an HTML version).

In order to send email using the SendRawEmail operation, your message must meet the following requirements:

For every message that you send, the total number of recipients (including each recipient in the To:, CC: and BCC: fields) is counted against the maximum number of emails you can send in a 24-hour period (your sending quota). For more information about sending quotas in Amazon SES, see Managing Your Amazon SES Sending Limits in the Amazon SES Developer Guide.

Additionally, keep the following considerations in mind when using the SendRawEmail operation:

" + }, + "SendTemplatedEmail":{ + "name":"SendTemplatedEmail", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"SendTemplatedEmailRequest"}, + "output":{ + "shape":"SendTemplatedEmailResponse", + "resultWrapper":"SendTemplatedEmailResult" + }, + "errors":[ + {"shape":"MessageRejected"}, + {"shape":"MailFromDomainNotVerifiedException"}, + {"shape":"ConfigurationSetDoesNotExistException"}, + {"shape":"TemplateDoesNotExistException"}, + {"shape":"ConfigurationSetSendingPausedException"}, + {"shape":"AccountSendingPausedException"} ], - "documentation":"

Sends an email message, with header and content specified by the client. The SendRawEmail action is useful for sending multipart MIME emails. The raw text of the message must comply with Internet email standards; otherwise, the message cannot be sent.

There are several important points to know about SendRawEmail:

" + "documentation":"

Composes an email message using an email template and immediately queues it for sending.

In order to send email using the SendTemplatedEmail operation, your call to the API must meet the following requirements:
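
As a sketch of such a call with the generated AWS SDK for Java 2.x client, assuming a verified sender and an existing template named MyTemplate (placeholder values throughout):

import software.amazon.awssdk.services.ses.SesClient;
import software.amazon.awssdk.services.ses.model.Destination;
import software.amazon.awssdk.services.ses.model.SendTemplatedEmailRequest;

public class SendTemplatedEmailExample {
    public static void main(String[] args) {
        try (SesClient ses = SesClient.create()) {
            // TemplateData supplies the replacement values for the template's {{...}} tags.
            String messageId = ses.sendTemplatedEmail(SendTemplatedEmailRequest.builder()
                .source("sender@example.com")
                .destination(Destination.builder().toAddresses("alice@example.com").build())
                .template("MyTemplate")
                .templateData("{\"name\":\"Alice\"}")
                .build())
                .messageId();
            System.out.println("Queued templated message " + messageId);
        }
    }
}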

" }, "SetActiveReceiptRuleSet":{ "name":"SetActiveReceiptRuleSet", @@ -564,7 +719,7 @@ "errors":[ {"shape":"RuleSetDoesNotExistException"} ], - "documentation":"

Sets the specified receipt rule set as the active receipt rule set.

To disable your email-receiving through Amazon SES completely, you can call this API with RuleSetName set to null.

For information about managing receipt rule sets, see the Amazon SES Developer Guide.

This action is throttled at one request per second.

" + "documentation":"

Sets the specified receipt rule set as the active receipt rule set.

To disable your email-receiving through Amazon SES completely, you can call this API with RuleSetName set to null.

For information about managing receipt rule sets, see the Amazon SES Developer Guide.

You can execute this operation no more than once per second.

" }, "SetIdentityDkimEnabled":{ "name":"SetIdentityDkimEnabled", @@ -577,7 +732,7 @@ "shape":"SetIdentityDkimEnabledResponse", "resultWrapper":"SetIdentityDkimEnabledResult" }, - "documentation":"

Enables or disables Easy DKIM signing of email sent from an identity:

For email addresses (e.g., user@example.com), you can only enable Easy DKIM signing if the corresponding domain (e.g., example.com) has been set up for Easy DKIM using the AWS Console or the VerifyDomainDkim action.

This action is throttled at one request per second.

For more information about Easy DKIM signing, go to the Amazon SES Developer Guide.

" + "documentation":"

Enables or disables Easy DKIM signing of email sent from an identity:

For email addresses (for example, user@example.com), you can only enable Easy DKIM signing if the corresponding domain (in this case, example.com) has been set up for Easy DKIM using the AWS Console or the VerifyDomainDkim operation.

You can execute this operation no more than once per second.

For more information about Easy DKIM signing, go to the Amazon SES Developer Guide.

" }, "SetIdentityFeedbackForwardingEnabled":{ "name":"SetIdentityFeedbackForwardingEnabled", @@ -590,7 +745,7 @@ "shape":"SetIdentityFeedbackForwardingEnabledResponse", "resultWrapper":"SetIdentityFeedbackForwardingEnabledResult" }, - "documentation":"

Given an identity (an email address or a domain), enables or disables whether Amazon SES forwards bounce and complaint notifications as email. Feedback forwarding can only be disabled when Amazon Simple Notification Service (Amazon SNS) topics are specified for both bounces and complaints.

Feedback forwarding does not apply to delivery notifications. Delivery notifications are only available through Amazon SNS.

This action is throttled at one request per second.

For more information about using notifications with Amazon SES, see the Amazon SES Developer Guide.

" + "documentation":"

Given an identity (an email address or a domain), enables or disables whether Amazon SES forwards bounce and complaint notifications as email. Feedback forwarding can only be disabled when Amazon Simple Notification Service (Amazon SNS) topics are specified for both bounces and complaints.

Feedback forwarding does not apply to delivery notifications. Delivery notifications are only available through Amazon SNS.

You can execute this operation no more than once per second.

For more information about using notifications with Amazon SES, see the Amazon SES Developer Guide.

" }, "SetIdentityHeadersInNotificationsEnabled":{ "name":"SetIdentityHeadersInNotificationsEnabled", @@ -603,7 +758,7 @@ "shape":"SetIdentityHeadersInNotificationsEnabledResponse", "resultWrapper":"SetIdentityHeadersInNotificationsEnabledResult" }, - "documentation":"

Given an identity (an email address or a domain), sets whether Amazon SES includes the original email headers in the Amazon Simple Notification Service (Amazon SNS) notifications of a specified type.

This action is throttled at one request per second.

For more information about using notifications with Amazon SES, see the Amazon SES Developer Guide.

" + "documentation":"

Given an identity (an email address or a domain), sets whether Amazon SES includes the original email headers in the Amazon Simple Notification Service (Amazon SNS) notifications of a specified type.

You can execute this operation no more than once per second.

For more information about using notifications with Amazon SES, see the Amazon SES Developer Guide.

" }, "SetIdentityMailFromDomain":{ "name":"SetIdentityMailFromDomain", @@ -616,7 +771,7 @@ "shape":"SetIdentityMailFromDomainResponse", "resultWrapper":"SetIdentityMailFromDomainResult" }, - "documentation":"

Enables or disables the custom MAIL FROM domain setup for a verified identity (an email address or a domain).

To send emails using the specified MAIL FROM domain, you must add an MX record to your MAIL FROM domain's DNS settings. If you want your emails to pass Sender Policy Framework (SPF) checks, you must also add or update an SPF record. For more information, see the Amazon SES Developer Guide.

This action is throttled at one request per second.

" + "documentation":"

Enables or disables the custom MAIL FROM domain setup for a verified identity (an email address or a domain).

To send emails using the specified MAIL FROM domain, you must add an MX record to your MAIL FROM domain's DNS settings. If you want your emails to pass Sender Policy Framework (SPF) checks, you must also add or update an SPF record. For more information, see the Amazon SES Developer Guide.

You can execute this operation no more than once per second.

" }, "SetIdentityNotificationTopic":{ "name":"SetIdentityNotificationTopic", @@ -629,7 +784,7 @@ "shape":"SetIdentityNotificationTopicResponse", "resultWrapper":"SetIdentityNotificationTopicResult" }, - "documentation":"

Given an identity (an email address or a domain), sets the Amazon Simple Notification Service (Amazon SNS) topic to which Amazon SES will publish bounce, complaint, and/or delivery notifications for emails sent with that identity as the Source.

Unless feedback forwarding is enabled, you must specify Amazon SNS topics for bounce and complaint notifications. For more information, see SetIdentityFeedbackForwardingEnabled.

This action is throttled at one request per second.

For more information about feedback notification, see the Amazon SES Developer Guide.

" + "documentation":"

Given an identity (an email address or a domain), sets the Amazon Simple Notification Service (Amazon SNS) topic to which Amazon SES will publish bounce, complaint, and/or delivery notifications for emails sent with that identity as the Source.

Unless feedback forwarding is enabled, you must specify Amazon SNS topics for bounce and complaint notifications. For more information, see SetIdentityFeedbackForwardingEnabled.

You can execute this operation no more than once per second.

For more information about feedback notification, see the Amazon SES Developer Guide.

" }, "SetReceiptRulePosition":{ "name":"SetReceiptRulePosition", @@ -646,7 +801,34 @@ {"shape":"RuleSetDoesNotExistException"}, {"shape":"RuleDoesNotExistException"} ], - "documentation":"

Sets the position of the specified receipt rule in the receipt rule set.

For information about managing receipt rules, see the Amazon SES Developer Guide.

This action is throttled at one request per second.

" + "documentation":"

Sets the position of the specified receipt rule in the receipt rule set.

For information about managing receipt rules, see the Amazon SES Developer Guide.

You can execute this operation no more than once per second.

" + }, + "TestRenderTemplate":{ + "name":"TestRenderTemplate", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"TestRenderTemplateRequest"}, + "output":{ + "shape":"TestRenderTemplateResponse", + "resultWrapper":"TestRenderTemplateResult" + }, + "errors":[ + {"shape":"TemplateDoesNotExistException"}, + {"shape":"InvalidRenderingParameterException"}, + {"shape":"MissingRenderingAttributeException"} + ], + "documentation":"

Creates a preview of the MIME content of an email when provided with a template and a set of replacement data.

You can execute this operation no more than once per second.
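
A short sketch of previewing a rendered template with the generated AWS SDK for Java 2.x client; MyTemplate and the template data are placeholders:

import software.amazon.awssdk.services.ses.SesClient;
import software.amazon.awssdk.services.ses.model.TestRenderTemplateRequest;

public class TestRenderTemplateExample {
    public static void main(String[] args) {
        try (SesClient ses = SesClient.create()) {
            // Render the template locally on the service side without sending any email.
            String rendered = ses.testRenderTemplate(TestRenderTemplateRequest.builder()
                .templateName("MyTemplate")            // must already exist
                .templateData("{\"name\":\"Alice\"}")  // replacement values
                .build())
                .renderedTemplate();
            System.out.println(rendered);
        }
    }
}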

" + }, + "UpdateAccountSendingEnabled":{ + "name":"UpdateAccountSendingEnabled", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"UpdateAccountSendingEnabledRequest"}, + "documentation":"

Enables or disables email sending across your entire Amazon SES account. You can use this operation in conjunction with Amazon CloudWatch alarms to temporarily pause email sending across your Amazon SES account when reputation metrics (such as your bounce and complaint rates) reach certain thresholds.

You can execute this operation no more than once per second.
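
A minimal sketch with the generated AWS SDK for Java 2.x client. The call is shown with an explicit request object because this change also blacklists the no-argument simple method for updateAccountSendingEnabled in the SES customization.config:

import software.amazon.awssdk.services.ses.SesClient;
import software.amazon.awssdk.services.ses.model.UpdateAccountSendingEnabledRequest;

public class PauseAccountSendingExample {
    public static void main(String[] args) {
        try (SesClient ses = SesClient.create()) {
            // Pause all email sending for the account, for example from a CloudWatch alarm handler.
            ses.updateAccountSendingEnabled(UpdateAccountSendingEnabledRequest.builder()
                .enabled(false)
                .build());
        }
    }
}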

" }, "UpdateConfigurationSetEventDestination":{ "name":"UpdateConfigurationSetEventDestination", @@ -663,9 +845,52 @@ {"shape":"ConfigurationSetDoesNotExistException"}, {"shape":"EventDestinationDoesNotExistException"}, {"shape":"InvalidCloudWatchDestinationException"}, - {"shape":"InvalidFirehoseDestinationException"} + {"shape":"InvalidFirehoseDestinationException"}, + {"shape":"InvalidSNSDestinationException"} + ], + "documentation":"

Updates the event destination of a configuration set. Event destinations are associated with configuration sets, which enable you to publish email sending events to Amazon CloudWatch, Amazon Kinesis Firehose, or Amazon Simple Notification Service (Amazon SNS). For information about using configuration sets, see Monitoring Your Amazon SES Sending Activity in the Amazon SES Developer Guide.

When you create or update an event destination, you must provide one, and only one, destination. The destination can be Amazon CloudWatch, Amazon Kinesis Firehose, or Amazon Simple Notification Service (Amazon SNS).

You can execute this operation no more than once per second.

" + }, + "UpdateConfigurationSetReputationMetricsEnabled":{ + "name":"UpdateConfigurationSetReputationMetricsEnabled", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"UpdateConfigurationSetReputationMetricsEnabledRequest"}, + "errors":[ + {"shape":"ConfigurationSetDoesNotExistException"} + ], + "documentation":"

Enables or disables the publishing of reputation metrics for emails sent using a specific configuration set. Reputation metrics include bounce and complaint rates. These metrics are published to Amazon CloudWatch. By using Amazon CloudWatch, you can create alarms when bounce or complaint rates exceed a certain threshold.

You can execute this operation no more than once per second.
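
A minimal sketch with the generated AWS SDK for Java 2.x client; my-config-set is a placeholder configuration set name:

import software.amazon.awssdk.services.ses.SesClient;
import software.amazon.awssdk.services.ses.model.UpdateConfigurationSetReputationMetricsEnabledRequest;

public class ReputationMetricsExample {
    public static void main(String[] args) {
        try (SesClient ses = SesClient.create()) {
            // Start publishing bounce and complaint metrics for this configuration set to CloudWatch.
            ses.updateConfigurationSetReputationMetricsEnabled(
                UpdateConfigurationSetReputationMetricsEnabledRequest.builder()
                    .configurationSetName("my-config-set") // placeholder
                    .enabled(true)
                    .build());
        }
    }
}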

" + }, + "UpdateConfigurationSetSendingEnabled":{ + "name":"UpdateConfigurationSetSendingEnabled", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"UpdateConfigurationSetSendingEnabledRequest"}, + "errors":[ + {"shape":"ConfigurationSetDoesNotExistException"} + ], + "documentation":"

Enables or disables email sending for messages sent using a specific configuration set. You can use this operation in conjunction with Amazon CloudWatch alarms to temporarily pause email sending for a configuration set when the reputation metrics for that configuration set (such as your bounce and complaint rates) reach certain thresholds.

You can execute this operation no more than once per second.
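
And the per-configuration-set counterpart, again a sketch with the generated AWS SDK for Java 2.x client and a placeholder configuration set name:

import software.amazon.awssdk.services.ses.SesClient;
import software.amazon.awssdk.services.ses.model.UpdateConfigurationSetSendingEnabledRequest;

public class PauseConfigurationSetSendingExample {
    public static void main(String[] args) {
        try (SesClient ses = SesClient.create()) {
            // Pause sending only for messages that use this configuration set.
            ses.updateConfigurationSetSendingEnabled(
                UpdateConfigurationSetSendingEnabledRequest.builder()
                    .configurationSetName("my-config-set") // placeholder
                    .enabled(false)
                    .build());
        }
    }
}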

" + }, + "UpdateConfigurationSetTrackingOptions":{ + "name":"UpdateConfigurationSetTrackingOptions", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"UpdateConfigurationSetTrackingOptionsRequest"}, + "output":{ + "shape":"UpdateConfigurationSetTrackingOptionsResponse", + "resultWrapper":"UpdateConfigurationSetTrackingOptionsResult" + }, + "errors":[ + {"shape":"ConfigurationSetDoesNotExistException"}, + {"shape":"TrackingOptionsDoesNotExistException"}, + {"shape":"InvalidTrackingOptionsException"} ], - "documentation":"

Updates the event destination of a configuration set.

When you create or update an event destination, you must provide one, and only one, destination. The destination can be either Amazon CloudWatch or Amazon Kinesis Firehose.

Event destinations are associated with configuration sets, which enable you to publish email sending events to Amazon CloudWatch or Amazon Kinesis Firehose. For information about using configuration sets, see the Amazon SES Developer Guide.

This action is throttled at one request per second.

" + "documentation":"

Modifies an association between a configuration set and a custom domain for open and click event tracking.

By default, images and links used for tracking open and click events are hosted on domains operated by Amazon SES. You can configure a subdomain of your own to handle these events. For information about using configuration sets, see Configuring Custom Domains to Handle Open and Click Tracking in the Amazon SES Developer Guide.

" }, "UpdateReceiptRule":{ "name":"UpdateReceiptRule", @@ -686,7 +911,24 @@ {"shape":"RuleDoesNotExistException"}, {"shape":"LimitExceededException"} ], - "documentation":"

Updates a receipt rule.

For information about managing receipt rules, see the Amazon SES Developer Guide.

This action is throttled at one request per second.

" + "documentation":"

Updates a receipt rule.

For information about managing receipt rules, see the Amazon SES Developer Guide.

You can execute this operation no more than once per second.

" + }, + "UpdateTemplate":{ + "name":"UpdateTemplate", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"UpdateTemplateRequest"}, + "output":{ + "shape":"UpdateTemplateResponse", + "resultWrapper":"UpdateTemplateResult" + }, + "errors":[ + {"shape":"TemplateDoesNotExistException"}, + {"shape":"InvalidTemplateException"} + ], + "documentation":"

Updates an email template. Email templates enable you to send personalized email to one or more destinations in a single API operation. For more information, see the Amazon SES Developer Guide.

You can execute this operation no more than once per second.

" }, "VerifyDomainDkim":{ "name":"VerifyDomainDkim", @@ -699,7 +941,7 @@ "shape":"VerifyDomainDkimResponse", "resultWrapper":"VerifyDomainDkimResult" }, - "documentation":"

Returns a set of DKIM tokens for a domain. DKIM tokens are character strings that represent your domain's identity. Using these tokens, you will need to create DNS CNAME records that point to DKIM public keys hosted by Amazon SES. Amazon Web Services will eventually detect that you have updated your DNS records; this detection process may take up to 72 hours. Upon successful detection, Amazon SES will be able to DKIM-sign email originating from that domain.

This action is throttled at one request per second.

To enable or disable Easy DKIM signing for a domain, use the SetIdentityDkimEnabled action.

For more information about creating DNS records using DKIM tokens, go to the Amazon SES Developer Guide.

" + "documentation":"

Returns a set of DKIM tokens for a domain. DKIM tokens are character strings that represent your domain's identity. Using these tokens, you will need to create DNS CNAME records that point to DKIM public keys hosted by Amazon SES. Amazon Web Services will eventually detect that you have updated your DNS records; this detection process may take up to 72 hours. Upon successful detection, Amazon SES will be able to DKIM-sign email originating from that domain.

You can execute this operation no more than once per second.

To enable or disable Easy DKIM signing for a domain, use the SetIdentityDkimEnabled operation.

For more information about creating DNS records using DKIM tokens, go to the Amazon SES Developer Guide.
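
A sketch of retrieving the tokens and printing the corresponding CNAME records with the generated AWS SDK for Java 2.x client; example.com is a placeholder domain:

import software.amazon.awssdk.services.ses.SesClient;
import software.amazon.awssdk.services.ses.model.VerifyDomainDkimRequest;

public class VerifyDomainDkimExample {
    public static void main(String[] args) {
        try (SesClient ses = SesClient.create()) {
            // Each token maps to a CNAME record of the form
            // <token>._domainkey.<domain> -> <token>.dkim.amazonses.com
            ses.verifyDomainDkim(VerifyDomainDkimRequest.builder()
                    .domain("example.com") // placeholder domain
                    .build())
                .dkimTokens()
                .forEach(token -> System.out.println(
                    token + "._domainkey.example.com CNAME " + token + ".dkim.amazonses.com"));
        }
    }
}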

" }, "VerifyDomainIdentity":{ "name":"VerifyDomainIdentity", @@ -712,7 +954,7 @@ "shape":"VerifyDomainIdentityResponse", "resultWrapper":"VerifyDomainIdentityResult" }, - "documentation":"

Verifies a domain.

This action is throttled at one request per second.

" + "documentation":"

Adds a domain to the list of identities for your Amazon SES account and attempts to verify it. For more information about verifying domains, see Verifying Email Addresses and Domains in the Amazon SES Developer Guide.

You can execute this operation no more than once per second.

" }, "VerifyEmailAddress":{ "name":"VerifyEmailAddress", @@ -721,7 +963,7 @@ "requestUri":"/" }, "input":{"shape":"VerifyEmailAddressRequest"}, - "documentation":"

Verifies an email address. This action causes a confirmation email message to be sent to the specified address.

The VerifyEmailAddress action is deprecated as of the May 15, 2012 release of Domain Verification. The VerifyEmailIdentity action is now preferred.

This action is throttled at one request per second.

" + "documentation":"

Deprecated. Use the VerifyEmailIdentity operation to verify a new email address.

" }, "VerifyEmailIdentity":{ "name":"VerifyEmailIdentity", @@ -734,10 +976,22 @@ "shape":"VerifyEmailIdentityResponse", "resultWrapper":"VerifyEmailIdentityResult" }, - "documentation":"

Verifies an email address. This action causes a confirmation email message to be sent to the specified address.

This action is throttled at one request per second.

" + "documentation":"

Adds an email address to the list of identities for your Amazon SES account and attempts to verify it. This operation causes a confirmation email message to be sent to the specified address.

You can execute this operation no more than once per second.

" } }, "shapes":{ + "AccountSendingPausedException":{ + "type":"structure", + "members":{ + }, + "documentation":"

Indicates that email sending is disabled for your entire Amazon SES account.

You can enable or disable email sending for your Amazon SES account using UpdateAccountSendingEnabled.

", + "error":{ + "code":"AccountSendingPausedException", + "httpStatusCode":400, + "senderFault":true + }, + "exception":true + }, "AddHeaderAction":{ "type":"structure", "required":[ @@ -764,7 +1018,10 @@ "AlreadyExistsException":{ "type":"structure", "members":{ - "Name":{"shape":"RuleOrRuleSetName"} + "Name":{ + "shape":"RuleOrRuleSetName", + "documentation":"

Indicates that a resource could not be created because the resource name already exists.

" + } }, "documentation":"

Indicates that a resource could not be created because of a naming conflict.

", "error":{ @@ -869,10 +1126,74 @@ "type":"list", "member":{"shape":"BouncedRecipientInfo"} }, + "BulkEmailDestination":{ + "type":"structure", + "required":["Destination"], + "members":{ + "Destination":{"shape":"Destination"}, + "ReplacementTags":{ + "shape":"MessageTagList", + "documentation":"

A list of tags, in the form of name/value pairs, to apply to an email that you send using SendBulkTemplatedEmail. Tags correspond to characteristics of the email that you define, so that you can publish email sending events.

" + }, + "ReplacementTemplateData":{ + "shape":"TemplateData", + "documentation":"

A list of replacement values to apply to the template. This parameter is a JSON object, typically consisting of key-value pairs in which the keys correspond to replacement tags in the email template.

" + } + }, + "documentation":"

An array that contains one or more Destinations, as well as the tags and replacement data associated with each of those Destinations.
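A minimal sketch of building one element of that array, assuming the generated v2 model classes (`BulkEmailDestination`, `Destination`, `MessageTag`); the address, tag, and replacement JSON are placeholders.

```java
import software.amazon.awssdk.services.ses.model.BulkEmailDestination;
import software.amazon.awssdk.services.ses.model.Destination;
import software.amazon.awssdk.services.ses.model.MessageTag;

public class BulkEmailDestinationExample {
    public static void main(String[] args) {
        // One entry of the Destinations array: who receives this copy, plus its
        // per-recipient tags and replacement data (JSON keyed by template tag names).
        BulkEmailDestination entry = BulkEmailDestination.builder()
                .destination(Destination.builder().toAddresses("alice@example.com").build())
                .replacementTags(MessageTag.builder().name("campaign").value("spring-sale").build())
                .replacementTemplateData("{\"name\":\"Alice\",\"favoriteAnimal\":\"otter\"}")
                .build();
        System.out.println(entry);
    }
}
```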

" + }, + "BulkEmailDestinationList":{ + "type":"list", + "member":{"shape":"BulkEmailDestination"} + }, + "BulkEmailDestinationStatus":{ + "type":"structure", + "members":{ + "Status":{ + "shape":"BulkEmailStatus", + "documentation":"

The status of a message sent using the SendBulkTemplatedEmail operation.

Possible values for this parameter include:

" + }, + "Error":{ + "shape":"Error", + "documentation":"

A description of an error that prevented a message being sent using the SendBulkTemplatedEmail operation.

" + }, + "MessageId":{ + "shape":"MessageId", + "documentation":"

The unique message identifier returned from the SendBulkTemplatedEmail operation.

" + } + }, + "documentation":"

An object that contains the response from the SendBulkTemplatedEmail operation.

" + }, + "BulkEmailDestinationStatusList":{ + "type":"list", + "member":{"shape":"BulkEmailDestinationStatus"} + }, + "BulkEmailStatus":{ + "type":"string", + "enum":[ + "Success", + "MessageRejected", + "MailFromDomainNotVerified", + "ConfigurationSetDoesNotExist", + "TemplateDoesNotExist", + "AccountSuspended", + "AccountThrottled", + "AccountDailyQuotaExceeded", + "InvalidSendingPoolName", + "AccountSendingPaused", + "ConfigurationSetSendingPaused", + "InvalidParameterValue", + "TransientFailure", + "Failed" + ] + }, "CannotDeleteException":{ "type":"structure", "members":{ - "Name":{"shape":"RuleOrRuleSetName"} + "Name":{ + "shape":"RuleOrRuleSetName", + "documentation":"

Indicates that a resource could not be deleted because no resource with the specified name exists.

" + } }, "documentation":"

Indicates that the delete operation could not be completed.

", "error":{ @@ -952,15 +1273,18 @@ "members":{ "Name":{ "shape":"ConfigurationSetName", - "documentation":"

The name of the configuration set. The name must:

" + "documentation":"

The name of the configuration set. The name must meet the following requirements:

" } }, - "documentation":"

The name of the configuration set.

Configuration sets enable you to publish email sending events. For information about using configuration sets, see the Amazon SES Developer Guide.

" + "documentation":"

The name of the configuration set.

Configuration sets let you create groups of rules that you can apply to the emails you send using Amazon SES. For more information about using configuration sets, see Using Amazon SES Configuration Sets in the Amazon SES Developer Guide.
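A minimal sketch of creating a configuration set with the generated v2 client, assuming the usual builder-style `CreateConfigurationSetRequest`/`ConfigurationSet` classes; the set name is hypothetical and is referenced later via `ConfigurationSetName` on the Send* requests.

```java
import software.amazon.awssdk.services.ses.SesClient;
import software.amazon.awssdk.services.ses.model.ConfigurationSet;
import software.amazon.awssdk.services.ses.model.CreateConfigurationSetRequest;

public class CreateConfigurationSetExample {
    public static void main(String[] args) {
        try (SesClient ses = SesClient.create()) {
            // Creates an empty configuration set; event destinations, tracking
            // options, and reputation settings are attached in separate calls.
            ses.createConfigurationSet(CreateConfigurationSetRequest.builder()
                    .configurationSet(ConfigurationSet.builder()
                            .name("my-first-configuration-set")
                            .build())
                    .build());
        }
    }
}
```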

" }, "ConfigurationSetAlreadyExistsException":{ "type":"structure", "members":{ - "ConfigurationSetName":{"shape":"ConfigurationSetName"} + "ConfigurationSetName":{ + "shape":"ConfigurationSetName", + "documentation":"

The name of the configuration set that already exists.

" + } }, "documentation":"

Indicates that the configuration set could not be created because of a naming conflict.

", "error":{ @@ -972,7 +1296,11 @@ }, "ConfigurationSetAttribute":{ "type":"string", - "enum":["eventDestinations"] + "enum":[ + "eventDestinations", + "trackingOptions", + "reputationOptions" + ] }, "ConfigurationSetAttributeList":{ "type":"list", @@ -981,7 +1309,10 @@ "ConfigurationSetDoesNotExistException":{ "type":"structure", "members":{ - "ConfigurationSetName":{"shape":"ConfigurationSetName"} + "ConfigurationSetName":{ + "shape":"ConfigurationSetName", + "documentation":"

Indicates that the configuration set does not exist.

" + } }, "documentation":"

Indicates that the configuration set does not exist.

", "error":{ @@ -992,6 +1323,22 @@ "exception":true }, "ConfigurationSetName":{"type":"string"}, + "ConfigurationSetSendingPausedException":{ + "type":"structure", + "members":{ + "ConfigurationSetName":{ + "shape":"ConfigurationSetName", + "documentation":"

The name of the configuration set for which email sending is disabled.

" + } + }, + "documentation":"

Indicates that email sending is disabled for the configuration set.

You can enable or disable email sending for a configuration set using UpdateConfigurationSetSendingEnabled.

", + "error":{ + "code":"ConfigurationSetSendingPausedException", + "httpStatusCode":400, + "senderFault":true + }, + "exception":true + }, "ConfigurationSets":{ "type":"list", "member":{"shape":"ConfigurationSet"} @@ -1021,11 +1368,11 @@ "members":{ "ConfigurationSetName":{ "shape":"ConfigurationSetName", - "documentation":"

The name of the configuration set to which to apply the event destination.

" + "documentation":"

The name of the configuration set that the event destination should be associated with.

" }, "EventDestination":{ "shape":"EventDestination", - "documentation":"

An object that describes the AWS service to which Amazon SES will publish the email sending events associated with the specified configuration set.

" + "documentation":"

An object that describes the AWS service that email sending event information will be published to.

" } }, "documentation":"

Represents a request to create a configuration set event destination. A configuration set event destination, which can be either Amazon CloudWatch or Amazon Kinesis Firehose, describes an AWS service in which Amazon SES publishes the email sending events associated with a configuration set. For information about using configuration sets, see the Amazon SES Developer Guide.

" @@ -1053,6 +1400,27 @@ }, "documentation":"

An empty element returned on a successful request.

" }, + "CreateConfigurationSetTrackingOptionsRequest":{ + "type":"structure", + "required":[ + "ConfigurationSetName", + "TrackingOptions" + ], + "members":{ + "ConfigurationSetName":{ + "shape":"ConfigurationSetName", + "documentation":"

The name of the configuration set that the tracking options should be associated with.

" + }, + "TrackingOptions":{"shape":"TrackingOptions"} + }, + "documentation":"

Represents a request to create an open and click tracking option object in a configuration set.

" + }, + "CreateConfigurationSetTrackingOptionsResponse":{ + "type":"structure", + "members":{ + }, + "documentation":"

An empty element returned on a successful request.

" + }, "CreateReceiptFilterRequest":{ "type":"structure", "required":["Filter"], @@ -1079,7 +1447,7 @@ "members":{ "RuleSetName":{ "shape":"ReceiptRuleSetName", - "documentation":"

The name of the rule set to which to add the rule.

" + "documentation":"

The name of the rule set that the receipt rule will be added to.

" }, "After":{ "shape":"ReceiptRuleName", @@ -1115,6 +1483,22 @@ }, "documentation":"

An empty element returned on a successful request.

" }, + "CreateTemplateRequest":{ + "type":"structure", + "required":["Template"], + "members":{ + "Template":{ + "shape":"Template", + "documentation":"

The content of the email, composed of a subject line, an HTML part, and a text-only part.

" + } + }, + "documentation":"

Represents a request to create an email template. For more information, see the Amazon SES Developer Guide.
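A minimal sketch of creating a template with the generated v2 client, assuming the builder-style `CreateTemplateRequest`/`Template` classes; the template name, body text, and `{{name}}`-style placeholders are illustrative.

```java
import software.amazon.awssdk.services.ses.SesClient;
import software.amazon.awssdk.services.ses.model.CreateTemplateRequest;
import software.amazon.awssdk.services.ses.model.Template;

public class CreateTemplateExample {
    public static void main(String[] args) {
        try (SesClient ses = SesClient.create()) {
            // Placeholders such as {{name}} are replaced with TemplateData at send time.
            ses.createTemplate(CreateTemplateRequest.builder()
                    .template(Template.builder()
                            .templateName("MyTemplate")
                            .subjectPart("Greetings, {{name}}!")
                            .htmlPart("<h1>Hello {{name}}</h1><p>Your favorite animal is {{favoriteAnimal}}.</p>")
                            .textPart("Hello {{name}}, your favorite animal is {{favoriteAnimal}}.")
                            .build())
                    .build());
        }
    }
}
```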

" + }, + "CreateTemplateResponse":{ + "type":"structure", + "members":{ + } + }, "CustomMailFromStatus":{ "type":"string", "enum":[ @@ -1124,6 +1508,7 @@ "TemporaryFailure" ] }, + "CustomRedirectDomain":{"type":"string"}, "DefaultDimensionValue":{"type":"string"}, "DeleteConfigurationSetEventDestinationRequest":{ "type":"structure", @@ -1166,6 +1551,23 @@ }, "documentation":"

An empty element returned on a successful request.

" }, + "DeleteConfigurationSetTrackingOptionsRequest":{ + "type":"structure", + "required":["ConfigurationSetName"], + "members":{ + "ConfigurationSetName":{ + "shape":"ConfigurationSetName", + "documentation":"

The name of the configuration set from which you want to delete the tracking options.

" + } + }, + "documentation":"

Represents a request to delete open and click tracking options in a configuration set.

" + }, + "DeleteConfigurationSetTrackingOptionsResponse":{ + "type":"structure", + "members":{ + }, + "documentation":"

An empty element returned on a successful request.

" + }, "DeleteIdentityPolicyRequest":{ "type":"structure", "required":[ @@ -1265,6 +1667,22 @@ }, "documentation":"

An empty element returned on a successful request.

" }, + "DeleteTemplateRequest":{ + "type":"structure", + "required":["TemplateName"], + "members":{ + "TemplateName":{ + "shape":"TemplateName", + "documentation":"

The name of the template to be deleted.

" + } + }, + "documentation":"

Represents a request to delete an email template. For more information, see the Amazon SES Developer Guide.

" + }, + "DeleteTemplateResponse":{ + "type":"structure", + "members":{ + } + }, "DeleteVerifiedEmailAddressRequest":{ "type":"structure", "required":["EmailAddress"], @@ -1321,6 +1739,14 @@ "EventDestinations":{ "shape":"EventDestinations", "documentation":"

A list of event destinations associated with the configuration set.

" + }, + "TrackingOptions":{ + "shape":"TrackingOptions", + "documentation":"

The name of the custom open and click tracking domain associated with the configuration set.

" + }, + "ReputationOptions":{ + "shape":"ReputationOptions", + "documentation":"

An object that represents the reputation settings for the configuration set.

" } }, "documentation":"

Represents the details of a configuration set. Configuration sets enable you to publish email sending events. For information about using configuration sets, see the Amazon SES Developer Guide.

" @@ -1334,7 +1760,7 @@ "members":{ "RuleSetName":{ "shape":"ReceiptRuleSetName", - "documentation":"

The name of the receipt rule set to which the receipt rule belongs.

" + "documentation":"

The name of the receipt rule set that the receipt rule belongs to.

" }, "RuleName":{ "shape":"ReceiptRuleName", @@ -1394,7 +1820,7 @@ "documentation":"

The BCC: field(s) of the message.

" } }, - "documentation":"

Represents the destination of the message, consisting of To:, CC:, and BCC: fields.

By default, the string must be 7-bit ASCII. If the text must contain any other characters, then you must use MIME encoded-word syntax (RFC 2047) instead of a literal string. MIME encoded-word syntax uses the following form: =?charset?encoding?encoded-text?=. For more information, see RFC 2047.

" + "documentation":"

Represents the destination of the message, consisting of To:, CC:, and BCC: fields.

By default, the string must be 7-bit ASCII. If the text must contain any other characters, then you must use MIME encoded-word syntax (RFC 2047) instead of a literal string. MIME encoded-word syntax uses the following form: =?charset?encoding?encoded-text?=. For more information, see RFC 2047.

" }, "DiagnosticCode":{"type":"string"}, "DimensionName":{"type":"string"}, @@ -1402,7 +1828,8 @@ "type":"string", "enum":[ "messageTag", - "emailHeader" + "emailHeader", + "linkTag" ] }, "DkimAttributes":{ @@ -1423,6 +1850,7 @@ }, "DsnStatus":{"type":"string"}, "Enabled":{"type":"boolean"}, + "Error":{"type":"string"}, "EventDestination":{ "type":"structure", "required":[ @@ -1449,15 +1877,25 @@ "CloudWatchDestination":{ "shape":"CloudWatchDestination", "documentation":"

An object that contains the names, default values, and sources of the dimensions associated with an Amazon CloudWatch event destination.

" + }, + "SNSDestination":{ + "shape":"SNSDestination", + "documentation":"

An object that contains the topic ARN associated with an Amazon Simple Notification Service (Amazon SNS) event destination.

" } }, - "documentation":"

Contains information about the event destination to which the specified email sending events are published.

When you create or update an event destination, you must provide one, and only one, destination. The destination can be either Amazon CloudWatch or Amazon Kinesis Firehose.

Event destinations are associated with configuration sets, which enable you to publish email sending events to Amazon CloudWatch or Amazon Kinesis Firehose. For information about using configuration sets, see the Amazon SES Developer Guide.

" + "documentation":"

Contains information about the event destination that the specified email sending events will be published to.

When you create or update an event destination, you must provide one, and only one, destination. The destination can be Amazon CloudWatch, Amazon Kinesis Firehose or Amazon Simple Notification Service (Amazon SNS).

Event destinations are associated with configuration sets, which enable you to publish email sending events to Amazon CloudWatch, Amazon Kinesis Firehose, or Amazon Simple Notification Service (Amazon SNS). For information about using configuration sets, see the Amazon SES Developer Guide.

" }, "EventDestinationAlreadyExistsException":{ "type":"structure", "members":{ - "ConfigurationSetName":{"shape":"ConfigurationSetName"}, - "EventDestinationName":{"shape":"EventDestinationName"} + "ConfigurationSetName":{ + "shape":"ConfigurationSetName", + "documentation":"

The name of the configuration set that already contains the conflicting event destination.

" + }, + "EventDestinationName":{ + "shape":"EventDestinationName", + "documentation":"

The name of the event destination that already exists.

" + } }, "documentation":"

Indicates that the event destination could not be created because of a naming conflict.

", "error":{ @@ -1470,11 +1908,17 @@ "EventDestinationDoesNotExistException":{ "type":"structure", "members":{ - "ConfigurationSetName":{"shape":"ConfigurationSetName"}, - "EventDestinationName":{"shape":"EventDestinationName"} - }, - "documentation":"

Indicates that the event destination does not exist.

", - "error":{ + "ConfigurationSetName":{ + "shape":"ConfigurationSetName", + "documentation":"

Indicates that the configuration set does not exist.

" + }, + "EventDestinationName":{ + "shape":"EventDestinationName", + "documentation":"

Indicates that the event destination does not exist.

" + } + }, + "documentation":"

Indicates that the event destination does not exist.

", + "error":{ "code":"EventDestinationDoesNotExist", "httpStatusCode":400, "senderFault":true @@ -1493,7 +1937,10 @@ "reject", "bounce", "complaint", - "delivery" + "delivery", + "open", + "click", + "renderingFailure" ] }, "EventTypes":{ @@ -1525,6 +1972,16 @@ }, "ExtensionFieldName":{"type":"string"}, "ExtensionFieldValue":{"type":"string"}, + "GetAccountSendingEnabledResponse":{ + "type":"structure", + "members":{ + "Enabled":{ + "shape":"Enabled", + "documentation":"

Describes whether email sending is enabled or disabled for your Amazon SES account.

" + } + }, + "documentation":"

Represents a request to return the email sending status for your Amazon SES account.

" + }, "GetIdentityDkimAttributesRequest":{ "type":"structure", "required":["Identities"], @@ -1670,8 +2127,25 @@ }, "documentation":"

Represents a list of data points. This list contains aggregated data from the previous two weeks of your sending activity with Amazon SES.

" }, + "GetTemplateRequest":{ + "type":"structure", + "required":["TemplateName"], + "members":{ + "TemplateName":{ + "shape":"TemplateName", + "documentation":"

The name of the template you want to retrieve.

" + } + } + }, + "GetTemplateResponse":{ + "type":"structure", + "members":{ + "Template":{"shape":"Template"} + } + }, "HeaderName":{"type":"string"}, "HeaderValue":{"type":"string"}, + "HtmlPart":{"type":"string"}, "Identity":{"type":"string"}, "IdentityDkimAttributes":{ "type":"structure", @@ -1787,8 +2261,14 @@ "InvalidCloudWatchDestinationException":{ "type":"structure", "members":{ - "ConfigurationSetName":{"shape":"ConfigurationSetName"}, - "EventDestinationName":{"shape":"EventDestinationName"} + "ConfigurationSetName":{ + "shape":"ConfigurationSetName", + "documentation":"

The name of the configuration set that the invalid event destination is associated with.

" + }, + "EventDestinationName":{ + "shape":"EventDestinationName", + "documentation":"

The name of the event destination whose Amazon CloudWatch configuration is invalid.

" + } }, "documentation":"

Indicates that the Amazon CloudWatch destination is invalid. See the error message for details.

", "error":{ @@ -1813,8 +2293,14 @@ "InvalidFirehoseDestinationException":{ "type":"structure", "members":{ - "ConfigurationSetName":{"shape":"ConfigurationSetName"}, - "EventDestinationName":{"shape":"EventDestinationName"} + "ConfigurationSetName":{ + "shape":"ConfigurationSetName", + "documentation":"

The name of the configuration set that the invalid event destination is associated with.

" + }, + "EventDestinationName":{ + "shape":"EventDestinationName", + "documentation":"

The name of the event destination whose Amazon Kinesis Firehose configuration is invalid.

" + } }, "documentation":"

Indicates that the Amazon Kinesis Firehose destination is invalid. See the error message for details.

", "error":{ @@ -1827,7 +2313,10 @@ "InvalidLambdaFunctionException":{ "type":"structure", "members":{ - "FunctionArn":{"shape":"AmazonResourceName"} + "FunctionArn":{ + "shape":"AmazonResourceName", + "documentation":"

Indicates that the ARN of the function was not found.

" + } }, "documentation":"

Indicates that the provided AWS Lambda function is invalid, or that Amazon SES could not execute the provided function, possibly due to permissions issues. For information about giving permissions, see the Amazon SES Developer Guide.

", "error":{ @@ -1849,10 +2338,26 @@ }, "exception":true }, + "InvalidRenderingParameterException":{ + "type":"structure", + "members":{ + "TemplateName":{"shape":"TemplateName"} + }, + "documentation":"

Indicates that one or more of the replacement values you provided is invalid. This error may occur when the TemplateData object contains invalid JSON.

", + "error":{ + "code":"InvalidRenderingParameter", + "httpStatusCode":400, + "senderFault":true + }, + "exception":true + }, "InvalidS3ConfigurationException":{ "type":"structure", "members":{ - "Bucket":{"shape":"S3BucketName"} + "Bucket":{ + "shape":"S3BucketName", + "documentation":"

Indicates that the Amazon S3 bucket was not found.

" + } }, "documentation":"

Indicates that the provided Amazon S3 bucket or AWS KMS encryption key is invalid, or that Amazon SES could not publish to the bucket, possibly due to permissions issues. For information about giving permissions, see the Amazon SES Developer Guide.

", "error":{ @@ -1862,10 +2367,33 @@ }, "exception":true }, + "InvalidSNSDestinationException":{ + "type":"structure", + "members":{ + "ConfigurationSetName":{ + "shape":"ConfigurationSetName", + "documentation":"

The name of the configuration set that the invalid event destination is associated with.

" + }, + "EventDestinationName":{ + "shape":"EventDestinationName", + "documentation":"

The name of the event destination whose Amazon SNS configuration is invalid.

" + } + }, + "documentation":"

Indicates that the Amazon Simple Notification Service (Amazon SNS) destination is invalid. See the error message for details.

", + "error":{ + "code":"InvalidSNSDestination", + "httpStatusCode":400, + "senderFault":true + }, + "exception":true + }, "InvalidSnsTopicException":{ "type":"structure", "members":{ - "Topic":{"shape":"AmazonResourceName"} + "Topic":{ + "shape":"AmazonResourceName", + "documentation":"

Indicates that the topic does not exist.

" + } }, "documentation":"

Indicates that the provided Amazon SNS topic is invalid, or that Amazon SES could not publish to the topic, possibly due to permissions issues. For information about giving permissions, see the Amazon SES Developer Guide.

", "error":{ @@ -1875,6 +2403,31 @@ }, "exception":true }, + "InvalidTemplateException":{ + "type":"structure", + "members":{ + "TemplateName":{"shape":"TemplateName"} + }, + "documentation":"

Indicates that a template could not be created because it contained invalid JSON.

", + "error":{ + "code":"InvalidTemplate", + "httpStatusCode":400, + "senderFault":true + }, + "exception":true + }, + "InvalidTrackingOptionsException":{ + "type":"structure", + "members":{ + }, + "documentation":"

Indicates that the custom domain to be used for open and click tracking redirects is invalid. This error appears most often in the following situations:

", + "error":{ + "code":"InvalidTrackingOptions", + "httpStatusCode":400, + "senderFault":true + }, + "exception":true + }, "InvocationType":{ "type":"string", "enum":[ @@ -1895,7 +2448,7 @@ }, "DeliveryStreamARN":{ "shape":"AmazonResourceName", - "documentation":"

The ARN of the Amazon Kinesis Firehose stream to which to publish email sending events.

" + "documentation":"

The ARN of the Amazon Kinesis Firehose stream that email sending events should be published to.

" } }, "documentation":"

Contains the delivery stream ARN and the IAM role ARN associated with an Amazon Kinesis Firehose event destination.

Event destinations, such as Amazon Kinesis Firehose, are associated with configuration sets, which enable you to publish email sending events. For information about using configuration sets, see the Amazon SES Developer Guide.

" @@ -1920,6 +2473,7 @@ "documentation":"

When included in a receipt rule, this action calls an AWS Lambda function and, optionally, publishes a notification to Amazon Simple Notification Service (Amazon SNS).

To enable Amazon SES to call your AWS Lambda function or to publish to an Amazon SNS topic of another account, Amazon SES must have permission to access those resources. For information about giving permissions, see the Amazon SES Developer Guide.

For information about using AWS Lambda actions in receipt rules, see the Amazon SES Developer Guide.

" }, "LastAttemptDate":{"type":"timestamp"}, + "LastFreshStart":{"type":"timestamp"}, "LimitExceededException":{ "type":"structure", "members":{ @@ -2055,6 +2609,32 @@ }, "documentation":"

A list of receipt rule sets that exist under your AWS account.

" }, + "ListTemplatesRequest":{ + "type":"structure", + "members":{ + "NextToken":{ + "shape":"NextToken", + "documentation":"

A token returned from a previous call to ListTemplates that indicates the position in the list of templates at which to begin the next set of results.

" + }, + "MaxItems":{ + "shape":"MaxItems", + "documentation":"

The maximum number of templates to return. This value must be at least 1 and less than or equal to 10. If you do not specify a value, or if you specify a value less than 1 or greater than 10, the operation will return up to 10 results.

" + } + } + }, + "ListTemplatesResponse":{ + "type":"structure", + "members":{ + "TemplatesMetadata":{ + "shape":"TemplateMetadataList", + "documentation":"

An array that contains the name and creation time stamp of each template in your Amazon SES account.

" + }, + "NextToken":{ + "shape":"NextToken", + "documentation":"

A token indicating that there are additional templates available to be listed. Pass this token to a subsequent call to ListTemplates to retrieve the next set of templates.
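A minimal pagination sketch, assuming the generated v2 `SesClient` and the builder/getter names that the v2 codegen produces for these shapes.

```java
import software.amazon.awssdk.services.ses.SesClient;
import software.amazon.awssdk.services.ses.model.ListTemplatesRequest;
import software.amazon.awssdk.services.ses.model.ListTemplatesResponse;
import software.amazon.awssdk.services.ses.model.TemplateMetadata;

public class ListTemplatesExample {
    public static void main(String[] args) {
        try (SesClient ses = SesClient.create()) {
            String token = null;
            do {
                ListTemplatesResponse page = ses.listTemplates(ListTemplatesRequest.builder()
                        .maxItems(10)          // the documented per-call maximum
                        .nextToken(token)      // null on the first call
                        .build());
                for (TemplateMetadata t : page.templatesMetadata()) {
                    System.out.println(t.name() + " created " + t.createdTimestamp());
                }
                token = page.nextToken();      // non-null while more templates remain
            } while (token != null);
        }
    }
}
```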

" + } + } + }, "ListVerifiedEmailAddressesResponse":{ "type":"structure", "members":{ @@ -2161,6 +2741,19 @@ }, "MessageTagName":{"type":"string"}, "MessageTagValue":{"type":"string"}, + "MissingRenderingAttributeException":{ + "type":"structure", + "members":{ + "TemplateName":{"shape":"TemplateName"} + }, + "documentation":"

Indicates that one or more of the replacement values for the specified template was not specified. Ensure that the TemplateData object contains references to all of the replacement tags in the specified template.

", + "error":{ + "code":"MissingRenderingAttribute", + "httpStatusCode":400, + "senderFault":true + }, + "exception":true + }, "NextToken":{"type":"string"}, "NotificationAttributes":{ "type":"map", @@ -2204,7 +2797,7 @@ "members":{ "Identity":{ "shape":"Identity", - "documentation":"

The identity to which the policy will apply. You can specify an identity by using its name or by using its Amazon Resource Name (ARN). Examples: user@example.com, example.com, arn:aws:ses:us-east-1:123456789012:identity/example.com.

To successfully call this API, you must own the identity.

" + "documentation":"

The identity that the policy will apply to. You can specify an identity by using its name or by using its Amazon Resource Name (ARN). Examples: user@example.com, example.com, arn:aws:ses:us-east-1:123456789012:identity/example.com.

To successfully call this API, you must own the identity.

" }, "PolicyName":{ "shape":"PolicyName", @@ -2229,7 +2822,7 @@ "members":{ "Data":{ "shape":"RawMessageData", - "documentation":"

The raw data of the message. The client must ensure that the message format complies with Internet email standards regarding email header fields, MIME types, MIME encoding, and base64 encoding.

The To:, CC:, and BCC: headers in the raw message can contain a group list.

If you are using SendRawEmail with sending authorization, you can include X-headers in the raw message to specify the \"Source,\" \"From,\" and \"Return-Path\" addresses. For more information, see the documentation for SendRawEmail.

Do not include these X-headers in the DKIM signature, because they are removed by Amazon SES before sending the email.

For more information, go to the Amazon SES Developer Guide.

" + "documentation":"

The raw data of the message. This data needs to be base64-encoded if you are accessing Amazon SES directly through the HTTPS interface. If you are accessing Amazon SES using an AWS SDK, the SDK takes care of the base64 encoding for you. In all cases, the client must ensure that the message format complies with Internet email standards regarding email header fields, MIME types, and MIME encoding.

The To:, CC:, and BCC: headers in the raw message can contain a group list.

If you are using SendRawEmail with sending authorization, you can include X-headers in the raw message to specify the \"Source,\" \"From,\" and \"Return-Path\" addresses. For more information, see the documentation for SendRawEmail.

Do not include these X-headers in the DKIM signature, because they are removed by Amazon SES before sending the email.

For more information, go to the Amazon SES Developer Guide.
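A minimal sketch illustrating that the SDK handles the base64 step, assuming the generated v2 classes (`SendRawEmailRequest`, `RawMessage`) and `SdkBytes` for the blob member; the addresses and MIME body are placeholders.

```java
import software.amazon.awssdk.core.SdkBytes;
import software.amazon.awssdk.services.ses.SesClient;
import software.amazon.awssdk.services.ses.model.RawMessage;
import software.amazon.awssdk.services.ses.model.SendRawEmailRequest;

public class SendRawEmailExample {
    public static void main(String[] args) {
        String mime = "From: sender@example.com\r\n"
                + "To: recipient@example.com\r\n"
                + "Subject: Raw message test\r\n"
                + "MIME-Version: 1.0\r\n"
                + "Content-Type: text/plain; charset=UTF-8\r\n"
                + "\r\n"
                + "Hello from a raw MIME message.\r\n";
        try (SesClient ses = SesClient.create()) {
            // We supply raw bytes; the SDK base64-encodes the blob on the wire.
            String messageId = ses.sendRawEmail(SendRawEmailRequest.builder()
                    .rawMessage(RawMessage.builder().data(SdkBytes.fromUtf8String(mime)).build())
                    .build())
                    .messageId();
            System.out.println("Sent message " + messageId);
        }
    }
}
```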

" } }, "documentation":"

Represents the raw data of the message.

" @@ -2339,7 +2932,7 @@ }, "Recipients":{ "shape":"RecipientsList", - "documentation":"

The recipient domains and email addresses to which the receipt rule applies. If this field is not specified, this rule will match all recipients under all verified domains.

" + "documentation":"

The recipient domains and email addresses that the receipt rule applies to. If this field is not specified, this rule will match all recipients under all verified domains.

" }, "Actions":{ "shape":"ReceiptActionsList", @@ -2347,10 +2940,10 @@ }, "ScanEnabled":{ "shape":"Enabled", - "documentation":"

If true, then messages to which this receipt rule applies are scanned for spam and viruses. The default value is false.

" + "documentation":"

If true, then messages that this receipt rule applies to are scanned for spam and viruses. The default value is false.

" } }, - "documentation":"

Receipt rules enable you to specify which actions Amazon SES should take when it receives mail on behalf of one or more email addresses or domains that you own.

Each receipt rule defines a set of email addresses or domains to which it applies. If the email addresses or domains match at least one recipient address of the message, Amazon SES executes all of the receipt rule's actions on the message.

For information about setting up receipt rules, see the Amazon SES Developer Guide.

" + "documentation":"

Receipt rules enable you to specify which actions Amazon SES should take when it receives mail on behalf of one or more email addresses or domains that you own.

Each receipt rule defines a set of email addresses or domains that it applies to. If the email addresses or domains match at least one recipient address of the message, Amazon SES executes all of the receipt rule's actions on the message.

For information about setting up receipt rules, see the Amazon SES Developer Guide.

" }, "ReceiptRuleName":{"type":"string"}, "ReceiptRuleNamesList":{ @@ -2390,7 +2983,7 @@ "members":{ "FinalRecipient":{ "shape":"Address", - "documentation":"

The email address to which the message was ultimately delivered. This corresponds to the Final-Recipient in the DSN. If not specified, FinalRecipient will be set to the Recipient specified in the BouncedRecipientInfo structure. Either FinalRecipient or the recipient in BouncedRecipientInfo must be a recipient of the original bounced message.

Do not prepend the FinalRecipient email address with rfc 822;, as described in RFC 3798.

" + "documentation":"

The email address that the message was ultimately delivered to. This corresponds to the Final-Recipient in the DSN. If not specified, FinalRecipient will be set to the Recipient specified in the BouncedRecipientInfo structure. Either FinalRecipient or the recipient in BouncedRecipientInfo must be a recipient of the original bounced message.

Do not prepend the FinalRecipient email address with rfc 822;, as described in RFC 3798.

" }, "Action":{ "shape":"DsnAction", @@ -2424,6 +3017,7 @@ "member":{"shape":"Recipient"} }, "RemoteMta":{"type":"string"}, + "RenderedTemplate":{"type":"string"}, "ReorderReceiptRuleSetRequest":{ "type":"structure", "required":[ @@ -2449,10 +3043,31 @@ "documentation":"

An empty element returned on a successful request.

" }, "ReportingMta":{"type":"string"}, + "ReputationOptions":{ + "type":"structure", + "members":{ + "SendingEnabled":{ + "shape":"Enabled", + "documentation":"

Describes whether email sending is enabled or disabled for the configuration set. If the value is true, then Amazon SES will send emails that use the configuration set. If the value is false, Amazon SES will not send emails that use the configuration set. The default value is true. You can change this setting using UpdateConfigurationSetSendingEnabled.

" + }, + "ReputationMetricsEnabled":{ + "shape":"Enabled", + "documentation":"

Describes whether or not Amazon SES publishes reputation metrics for the configuration set, such as bounce and complaint rates, to Amazon CloudWatch.

If the value is true, reputation metrics are published. If the value is false, reputation metrics are not published. The default value is false.

" + }, + "LastFreshStart":{ + "shape":"LastFreshStart", + "documentation":"

The date and time at which the reputation metrics for the configuration set were last reset. Resetting these metrics is known as a fresh start.

When you disable email sending for a configuration set using UpdateConfigurationSetSendingEnabled and later re-enable it, the reputation metrics for the configuration set (but not for the entire Amazon SES account) are reset.

If email sending for the configuration set has never been disabled and later re-enabled, the value of this attribute is null.

" + } + }, + "documentation":"

Contains information about the reputation settings for a configuration set.
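A minimal sketch of reading these settings back, assuming the generated v2 `DescribeConfigurationSetRequest` exposes a `configurationSetAttributeNames` setter and a `REPUTATION_OPTIONS` enum constant; the configuration set name is hypothetical.

```java
import software.amazon.awssdk.services.ses.SesClient;
import software.amazon.awssdk.services.ses.model.ConfigurationSetAttribute;
import software.amazon.awssdk.services.ses.model.DescribeConfigurationSetRequest;
import software.amazon.awssdk.services.ses.model.ReputationOptions;

public class DescribeReputationOptionsExample {
    public static void main(String[] args) {
        try (SesClient ses = SesClient.create()) {
            ReputationOptions reputation = ses.describeConfigurationSet(
                    DescribeConfigurationSetRequest.builder()
                            .configurationSetName("my-first-configuration-set")
                            .configurationSetAttributeNames(ConfigurationSetAttribute.REPUTATION_OPTIONS)
                            .build())
                    .reputationOptions();
            System.out.println("Sending enabled:   " + reputation.sendingEnabled());
            System.out.println("Metrics published: " + reputation.reputationMetricsEnabled());
            System.out.println("Last fresh start:  " + reputation.lastFreshStart()); // null if never paused
        }
    }
}
```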

" + }, "RuleDoesNotExistException":{ "type":"structure", "members":{ - "Name":{"shape":"RuleOrRuleSetName"} + "Name":{ + "shape":"RuleOrRuleSetName", + "documentation":"

Indicates that the named receipt rule does not exist.

" + } }, "documentation":"

Indicates that the provided receipt rule does not exist.

", "error":{ @@ -2466,7 +3081,10 @@ "RuleSetDoesNotExistException":{ "type":"structure", "members":{ - "Name":{"shape":"RuleOrRuleSetName"} + "Name":{ + "shape":"RuleOrRuleSetName", + "documentation":"

Indicates that the named receipt rule set does not exist.

" + } }, "documentation":"

Indicates that the provided receipt rule set does not exist.

", "error":{ @@ -2486,7 +3104,7 @@ }, "BucketName":{ "shape":"S3BucketName", - "documentation":"

The name of the Amazon S3 bucket to which to save the received email.

" + "documentation":"

The name of the Amazon S3 bucket that incoming email will be saved to.

" }, "ObjectKeyPrefix":{ "shape":"S3KeyPrefix", @@ -2494,7 +3112,7 @@ }, "KmsKeyArn":{ "shape":"AmazonResourceName", - "documentation":"

The customer master key that Amazon SES should use to encrypt your emails before saving them to the Amazon S3 bucket. You can use the default master key or a custom master key you created in AWS KMS as follows:

For more information about key policies, see the AWS KMS Developer Guide. If you do not specify a master key, Amazon SES will not encrypt your emails.

Your mail is encrypted by Amazon SES using the Amazon S3 encryption client before the mail is submitted to Amazon S3 for storage. It is not encrypted using Amazon S3 server-side encryption. This means that you must use the Amazon S3 encryption client to decrypt the email after retrieving it from Amazon S3, as the service has no access to use your AWS KMS keys for decryption. This encryption client is currently available with the AWS Java SDK and AWS Ruby SDK only. For more information about client-side encryption using AWS KMS master keys, see the Amazon S3 Developer Guide.

" + "documentation":"

The customer master key that Amazon SES should use to encrypt your emails before saving them to the Amazon S3 bucket. You can use the default master key or a custom master key you created in AWS KMS as follows:

For more information about key policies, see the AWS KMS Developer Guide. If you do not specify a master key, Amazon SES will not encrypt your emails.

Your mail is encrypted by Amazon SES using the Amazon S3 encryption client before the mail is submitted to Amazon S3 for storage. It is not encrypted using Amazon S3 server-side encryption. This means that you must use the Amazon S3 encryption client to decrypt the email after retrieving it from Amazon S3, as the service has no access to use your AWS KMS keys for decryption. This encryption client is currently available with the AWS Java SDK and AWS Ruby SDK only. For more information about client-side encryption using AWS KMS master keys, see the Amazon S3 Developer Guide.

" } }, "documentation":"

When included in a receipt rule, this action saves the received message to an Amazon Simple Storage Service (Amazon S3) bucket and, optionally, publishes a notification to Amazon Simple Notification Service (Amazon SNS).

To enable Amazon SES to write emails to your Amazon S3 bucket, use an AWS KMS key to encrypt your emails, or publish to an Amazon SNS topic of another account, Amazon SES must have permission to access those resources. For information about giving permissions, see the Amazon SES Developer Guide.

When you save your emails to an Amazon S3 bucket, the maximum email size (including headers) is 30 MB. Emails larger than that will bounce.

For information about specifying Amazon S3 actions in receipt rules, see the Amazon SES Developer Guide.

" @@ -2523,6 +3141,17 @@ "Base64" ] }, + "SNSDestination":{ + "type":"structure", + "required":["TopicARN"], + "members":{ + "TopicARN":{ + "shape":"AmazonResourceName", + "documentation":"

The ARN of the Amazon SNS topic that email sending events will be published to. An example of an Amazon SNS topic ARN is arn:aws:sns:us-west-2:123456789012:MyTopic. For more information about Amazon SNS topics, see the Amazon SNS Developer Guide.

" + } + }, + "documentation":"

Contains the topic ARN associated with an Amazon Simple Notification Service (Amazon SNS) event destination.

Event destinations, such as Amazon SNS, are associated with configuration sets, which enable you to publish email sending events. For information about using configuration sets, see the Amazon SES Developer Guide.

" + }, "SendBounceRequest":{ "type":"structure", "required":[ @@ -2568,6 +3197,71 @@ }, "documentation":"

Represents a unique message ID.

" }, + "SendBulkTemplatedEmailRequest":{ + "type":"structure", + "required":[ + "Source", + "Template", + "Destinations" + ], + "members":{ + "Source":{ + "shape":"Address", + "documentation":"

The email address that is sending the email. This email address must be either individually verified with Amazon SES, or from a domain that has been verified with Amazon SES. For information about verifying identities, see the Amazon SES Developer Guide.

If you are sending on behalf of another user and have been permitted to do so by a sending authorization policy, then you must also specify the SourceArn parameter. For more information about sending authorization, see the Amazon SES Developer Guide.

In all cases, the email address must be 7-bit ASCII. If the text must contain any other characters, then you must use MIME encoded-word syntax (RFC 2047) instead of a literal string. MIME encoded-word syntax uses the following form: =?charset?encoding?encoded-text?=. For more information, see RFC 2047.

" + }, + "SourceArn":{ + "shape":"AmazonResourceName", + "documentation":"

This parameter is used only for sending authorization. It is the ARN of the identity that is associated with the sending authorization policy that permits you to send for the email address specified in the Source parameter.

For example, if the owner of example.com (which has ARN arn:aws:ses:us-east-1:123456789012:identity/example.com) attaches a policy to it that authorizes you to send from user@example.com, then you would specify the SourceArn to be arn:aws:ses:us-east-1:123456789012:identity/example.com, and the Source to be user@example.com.

For more information about sending authorization, see the Amazon SES Developer Guide.

" + }, + "ReplyToAddresses":{ + "shape":"AddressList", + "documentation":"

The reply-to email address(es) for the message. If the recipient replies to the message, each reply-to address will receive the reply.

" + }, + "ReturnPath":{ + "shape":"Address", + "documentation":"

The email address that bounces and complaints will be forwarded to when feedback forwarding is enabled. If the message cannot be delivered to the recipient, then an error message will be returned from the recipient's ISP; this message will then be forwarded to the email address specified by the ReturnPath parameter. The ReturnPath parameter is never overwritten. This email address must be either individually verified with Amazon SES, or from a domain that has been verified with Amazon SES.

" + }, + "ReturnPathArn":{ + "shape":"AmazonResourceName", + "documentation":"

This parameter is used only for sending authorization. It is the ARN of the identity that is associated with the sending authorization policy that permits you to use the email address specified in the ReturnPath parameter.

For example, if the owner of example.com (which has ARN arn:aws:ses:us-east-1:123456789012:identity/example.com) attaches a policy to it that authorizes you to use feedback@example.com, then you would specify the ReturnPathArn to be arn:aws:ses:us-east-1:123456789012:identity/example.com, and the ReturnPath to be feedback@example.com.

For more information about sending authorization, see the Amazon SES Developer Guide.

" + }, + "ConfigurationSetName":{ + "shape":"ConfigurationSetName", + "documentation":"

The name of the configuration set to use when you send an email using SendBulkTemplatedEmail.

" + }, + "DefaultTags":{ + "shape":"MessageTagList", + "documentation":"

A list of tags, in the form of name/value pairs, to apply to an email that you send to a destination using SendBulkTemplatedEmail.

" + }, + "Template":{ + "shape":"TemplateName", + "documentation":"

The template to use when sending this email.

" + }, + "TemplateArn":{ + "shape":"AmazonResourceName", + "documentation":"

The ARN of the template to use when sending this email.

" + }, + "DefaultTemplateData":{ + "shape":"TemplateData", + "documentation":"

A list of replacement values to apply to the template when replacement data is not specified in a Destination object. These values act as a default or fallback option when no other data is available.

The template data is a JSON object, typically consisting of key-value pairs in which the keys correspond to replacement tags in the email template.

" + }, + "Destinations":{ + "shape":"BulkEmailDestinationList", + "documentation":"

One or more Destination objects. All of the recipients in a Destination will receive the same version of the email. You can specify up to 50 Destination objects within a Destinations array.

" + } + }, + "documentation":"

Represents a request to send a templated email to multiple destinations using Amazon SES. For more information, see the Amazon SES Developer Guide.
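A minimal sketch of a bulk send, assuming the generated v2 builders and that the response exposes the Status list as `status()`; the sender, recipients, template name, and JSON data are placeholders.

```java
import software.amazon.awssdk.services.ses.SesClient;
import software.amazon.awssdk.services.ses.model.BulkEmailDestination;
import software.amazon.awssdk.services.ses.model.BulkEmailDestinationStatus;
import software.amazon.awssdk.services.ses.model.Destination;
import software.amazon.awssdk.services.ses.model.SendBulkTemplatedEmailRequest;

public class SendBulkTemplatedEmailExample {
    public static void main(String[] args) {
        try (SesClient ses = SesClient.create()) {
            SendBulkTemplatedEmailRequest request = SendBulkTemplatedEmailRequest.builder()
                    .source("sender@example.com")
                    .template("MyTemplate")
                    .defaultTemplateData("{\"name\":\"friend\",\"favoriteAnimal\":\"unknown\"}")
                    .destinations(
                            BulkEmailDestination.builder()
                                    .destination(Destination.builder().toAddresses("alice@example.com").build())
                                    .replacementTemplateData("{\"name\":\"Alice\",\"favoriteAnimal\":\"otter\"}")
                                    .build(),
                            BulkEmailDestination.builder()
                                    .destination(Destination.builder().toAddresses("bob@example.com").build())
                                    .build()) // no replacement data: falls back to DefaultTemplateData
                    .build();
            // One status per Destination; retry or log anything that was not accepted.
            for (BulkEmailDestinationStatus status : ses.sendBulkTemplatedEmail(request).status()) {
                System.out.println(status.status() + " " + status.messageId() + " " + status.error());
            }
        }
    }
}
```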

" + }, + "SendBulkTemplatedEmailResponse":{ + "type":"structure", + "required":["Status"], + "members":{ + "Status":{ + "shape":"BulkEmailDestinationStatusList", + "documentation":"

One BulkEmailDestinationStatus object for each Destination in the request. Each object indicates whether the message was accepted for that Destination and, if it was accepted, the unique message identifier; otherwise, it describes the error that prevented sending.

" + } + } + }, "SendDataPoint":{ "type":"structure", "members":{ @@ -2608,7 +3302,7 @@ "members":{ "Source":{ "shape":"Address", - "documentation":"

The email address that is sending the email. This email address must be either individually verified with Amazon SES, or from a domain that has been verified with Amazon SES. For information about verifying identities, see the Amazon SES Developer Guide.

If you are sending on behalf of another user and have been permitted to do so by a sending authorization policy, then you must also specify the SourceArn parameter. For more information about sending authorization, see the Amazon SES Developer Guide.

In all cases, the email address must be 7-bit ASCII. If the text must contain any other characters, then you must use MIME encoded-word syntax (RFC 2047) instead of a literal string. MIME encoded-word syntax uses the following form: =?charset?encoding?encoded-text?=. For more information, see RFC 2047.

" + "documentation":"

The email address that is sending the email. This email address must be either individually verified with Amazon SES, or from a domain that has been verified with Amazon SES. For information about verifying identities, see the Amazon SES Developer Guide.

If you are sending on behalf of another user and have been permitted to do so by a sending authorization policy, then you must also specify the SourceArn parameter. For more information about sending authorization, see the Amazon SES Developer Guide.

In all cases, the email address must be 7-bit ASCII. If the text must contain any other characters, then you must use MIME encoded-word syntax (RFC 2047) instead of a literal string. MIME encoded-word syntax uses the following form: =?charset?encoding?encoded-text?=. For more information, see RFC 2047.

" }, "Destination":{ "shape":"Destination", @@ -2624,15 +3318,15 @@ }, "ReturnPath":{ "shape":"Address", - "documentation":"

The email address to which bounces and complaints are to be forwarded when feedback forwarding is enabled. If the message cannot be delivered to the recipient, then an error message will be returned from the recipient's ISP; this message will then be forwarded to the email address specified by the ReturnPath parameter. The ReturnPath parameter is never overwritten. This email address must be either individually verified with Amazon SES, or from a domain that has been verified with Amazon SES.

" + "documentation":"

The email address that bounces and complaints will be forwarded to when feedback forwarding is enabled. If the message cannot be delivered to the recipient, then an error message will be returned from the recipient's ISP; this message will then be forwarded to the email address specified by the ReturnPath parameter. The ReturnPath parameter is never overwritten. This email address must be either individually verified with Amazon SES, or from a domain that has been verified with Amazon SES.

" }, "SourceArn":{ "shape":"AmazonResourceName", - "documentation":"

This parameter is used only for sending authorization. It is the ARN of the identity that is associated with the sending authorization policy that permits you to send for the email address specified in the Source parameter.

For example, if the owner of example.com (which has ARN arn:aws:ses:us-east-1:123456789012:identity/example.com) attaches a policy to it that authorizes you to send from user@example.com, then you would specify the SourceArn to be arn:aws:ses:us-east-1:123456789012:identity/example.com, and the Source to be user@example.com.

For more information about sending authorization, see the Amazon SES Developer Guide.

" + "documentation":"

This parameter is used only for sending authorization. It is the ARN of the identity that is associated with the sending authorization policy that permits you to send for the email address specified in the Source parameter.

For example, if the owner of example.com (which has ARN arn:aws:ses:us-east-1:123456789012:identity/example.com) attaches a policy to it that authorizes you to send from user@example.com, then you would specify the SourceArn to be arn:aws:ses:us-east-1:123456789012:identity/example.com, and the Source to be user@example.com.

For more information about sending authorization, see the Amazon SES Developer Guide.

" }, "ReturnPathArn":{ "shape":"AmazonResourceName", - "documentation":"

This parameter is used only for sending authorization. It is the ARN of the identity that is associated with the sending authorization policy that permits you to use the email address specified in the ReturnPath parameter.

For example, if the owner of example.com (which has ARN arn:aws:ses:us-east-1:123456789012:identity/example.com) attaches a policy to it that authorizes you to use feedback@example.com, then you would specify the ReturnPathArn to be arn:aws:ses:us-east-1:123456789012:identity/example.com, and the ReturnPath to be feedback@example.com.

For more information about sending authorization, see the Amazon SES Developer Guide.

" + "documentation":"

This parameter is used only for sending authorization. It is the ARN of the identity that is associated with the sending authorization policy that permits you to use the email address specified in the ReturnPath parameter.

For example, if the owner of example.com (which has ARN arn:aws:ses:us-east-1:123456789012:identity/example.com) attaches a policy to it that authorizes you to use feedback@example.com, then you would specify the ReturnPathArn to be arn:aws:ses:us-east-1:123456789012:identity/example.com, and the ReturnPath to be feedback@example.com.

For more information about sending authorization, see the Amazon SES Developer Guide.

" }, "Tags":{ "shape":"MessageTagList", @@ -2662,7 +3356,7 @@ "members":{ "Source":{ "shape":"Address", - "documentation":"

The identity's email address. If you do not provide a value for this parameter, you must specify a \"From\" address in the raw text of the message. (You can also specify both.)

By default, the string must be 7-bit ASCII. If the text must contain any other characters, then you must use MIME encoded-word syntax (RFC 2047) instead of a literal string. MIME encoded-word syntax uses the following form: =?charset?encoding?encoded-text?=. For more information, see RFC 2047.

If you specify the Source parameter and have feedback forwarding enabled, then bounces and complaints will be sent to this email address. This takes precedence over any Return-Path header that you might include in the raw text of the message.

" + "documentation":"

The identity's email address. If you do not provide a value for this parameter, you must specify a \"From\" address in the raw text of the message. (You can also specify both.)

By default, the string must be 7-bit ASCII. If the text must contain any other characters, then you must use MIME encoded-word syntax (RFC 2047) instead of a literal string. MIME encoded-word syntax uses the following form: =?charset?encoding?encoded-text?=. For more information, see RFC 2047.

If you specify the Source parameter and have feedback forwarding enabled, then bounces and complaints will be sent to this email address. This takes precedence over any Return-Path header that you might include in the raw text of the message.

" }, "Destinations":{ "shape":"AddressList", @@ -2670,7 +3364,7 @@ }, "RawMessage":{ "shape":"RawMessage", - "documentation":"

The raw text of the message. The client is responsible for ensuring the following:

" + "documentation":"

The raw text of the message. The client is responsible for ensuring the following:

" }, "FromArn":{ "shape":"AmazonResourceName", @@ -2706,6 +3400,72 @@ }, "documentation":"

Represents a unique message ID.

" }, + "SendTemplatedEmailRequest":{ + "type":"structure", + "required":[ + "Source", + "Destination", + "Template", + "TemplateData" + ], + "members":{ + "Source":{ + "shape":"Address", + "documentation":"

The email address that is sending the email. This email address must be either individually verified with Amazon SES, or from a domain that has been verified with Amazon SES. For information about verifying identities, see the Amazon SES Developer Guide.

If you are sending on behalf of another user and have been permitted to do so by a sending authorization policy, then you must also specify the SourceArn parameter. For more information about sending authorization, see the Amazon SES Developer Guide.

In all cases, the email address must be 7-bit ASCII. If the text must contain any other characters, then you must use MIME encoded-word syntax (RFC 2047) instead of a literal string. MIME encoded-word syntax uses the following form: =?charset?encoding?encoded-text?=. For more information, see RFC 2047.

" + }, + "Destination":{ + "shape":"Destination", + "documentation":"

The destination for this email, composed of To:, CC:, and BCC: fields. A Destination can include up to 50 recipients across these three fields.

" + }, + "ReplyToAddresses":{ + "shape":"AddressList", + "documentation":"

The reply-to email address(es) for the message. If the recipient replies to the message, each reply-to address will receive the reply.

" + }, + "ReturnPath":{ + "shape":"Address", + "documentation":"

The email address that bounces and complaints will be forwarded to when feedback forwarding is enabled. If the message cannot be delivered to the recipient, then an error message will be returned from the recipient's ISP; this message will then be forwarded to the email address specified by the ReturnPath parameter. The ReturnPath parameter is never overwritten. This email address must be either individually verified with Amazon SES, or from a domain that has been verified with Amazon SES.

" + }, + "SourceArn":{ + "shape":"AmazonResourceName", + "documentation":"

This parameter is used only for sending authorization. It is the ARN of the identity that is associated with the sending authorization policy that permits you to send for the email address specified in the Source parameter.

For example, if the owner of example.com (which has ARN arn:aws:ses:us-east-1:123456789012:identity/example.com) attaches a policy to it that authorizes you to send from user@example.com, then you would specify the SourceArn to be arn:aws:ses:us-east-1:123456789012:identity/example.com, and the Source to be user@example.com.

For more information about sending authorization, see the Amazon SES Developer Guide.

" + }, + "ReturnPathArn":{ + "shape":"AmazonResourceName", + "documentation":"

This parameter is used only for sending authorization. It is the ARN of the identity that is associated with the sending authorization policy that permits you to use the email address specified in the ReturnPath parameter.

For example, if the owner of example.com (which has ARN arn:aws:ses:us-east-1:123456789012:identity/example.com) attaches a policy to it that authorizes you to use feedback@example.com, then you would specify the ReturnPathArn to be arn:aws:ses:us-east-1:123456789012:identity/example.com, and the ReturnPath to be feedback@example.com.

For more information about sending authorization, see the Amazon SES Developer Guide.

" + }, + "Tags":{ + "shape":"MessageTagList", + "documentation":"

A list of tags, in the form of name/value pairs, to apply to an email that you send using SendTemplatedEmail. Tags correspond to characteristics of the email that you define, so that you can publish email sending events.

" + }, + "ConfigurationSetName":{ + "shape":"ConfigurationSetName", + "documentation":"

The name of the configuration set to use when you send an email using SendTemplatedEmail.

" + }, + "Template":{ + "shape":"TemplateName", + "documentation":"

The template to use when sending this email.

" + }, + "TemplateArn":{ + "shape":"AmazonResourceName", + "documentation":"

The ARN of the template to use when sending this email.

" + }, + "TemplateData":{ + "shape":"TemplateData", + "documentation":"

A list of replacement values to apply to the template. This parameter is a JSON object, typically consisting of key-value pairs in which the keys correspond to replacement tags in the email template.

" + } + }, + "documentation":"

Represents a request to send a templated email using Amazon SES. For more information, see the Amazon SES Developer Guide.
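A minimal sketch of a single templated send, assuming the generated v2 builders; the addresses, template name, configuration set, and JSON data are placeholders.

```java
import software.amazon.awssdk.services.ses.SesClient;
import software.amazon.awssdk.services.ses.model.Destination;
import software.amazon.awssdk.services.ses.model.SendTemplatedEmailRequest;

public class SendTemplatedEmailExample {
    public static void main(String[] args) {
        try (SesClient ses = SesClient.create()) {
            String messageId = ses.sendTemplatedEmail(SendTemplatedEmailRequest.builder()
                    .source("sender@example.com")
                    .destination(Destination.builder().toAddresses("alice@example.com").build())
                    .template("MyTemplate")
                    .templateData("{\"name\":\"Alice\",\"favoriteAnimal\":\"otter\"}")
                    .configurationSetName("my-first-configuration-set") // optional
                    .build())
                    .messageId();
            System.out.println("Sent message " + messageId);
        }
    }
}
```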

" + }, + "SendTemplatedEmailResponse":{ + "type":"structure", + "required":["MessageId"], + "members":{ + "MessageId":{ + "shape":"MessageId", + "documentation":"

The unique message identifier returned from the SendTemplatedEmail action.

" + } + } + }, "SentLast24Hours":{"type":"double"}, "SetActiveReceiptRuleSetRequest":{ "type":"structure", @@ -2887,7 +3647,7 @@ "members":{ "Scope":{ "shape":"StopScope", - "documentation":"

The scope to which the Stop action applies. That is, what is being stopped.

" + "documentation":"

The name of the RuleSet that is being stopped.

" }, "TopicArn":{ "shape":"AmazonResourceName", @@ -2900,6 +3660,93 @@ "type":"string", "enum":["RuleSet"] }, + "SubjectPart":{"type":"string"}, + "Template":{ + "type":"structure", + "required":["TemplateName"], + "members":{ + "TemplateName":{ + "shape":"TemplateName", + "documentation":"

The name of the template. You will refer to this name when you send email using the SendTemplatedEmail or SendBulkTemplatedEmail operations.

" + }, + "SubjectPart":{ + "shape":"SubjectPart", + "documentation":"

The subject line of the email.

" + }, + "TextPart":{ + "shape":"TextPart", + "documentation":"

The email body that will be visible to recipients whose email clients do not display HTML.

" + }, + "HtmlPart":{ + "shape":"HtmlPart", + "documentation":"

The HTML body of the email.

" + } + }, + "documentation":"

The content of the email, composed of a subject line, an HTML part, and a text-only part.

" + }, + "TemplateData":{ + "type":"string", + "max":262144 + }, + "TemplateDoesNotExistException":{ + "type":"structure", + "members":{ + "TemplateName":{"shape":"TemplateName"} + }, + "documentation":"

Indicates that the Template object you specified does not exist in your Amazon SES account.

", + "error":{ + "code":"TemplateDoesNotExist", + "httpStatusCode":400, + "senderFault":true + }, + "exception":true + }, + "TemplateMetadata":{ + "type":"structure", + "members":{ + "Name":{ + "shape":"TemplateName", + "documentation":"

The name of the template.

" + }, + "CreatedTimestamp":{ + "shape":"Timestamp", + "documentation":"

The time and date the template was created.

" + } + }, + "documentation":"

Information about an email template.

" + }, + "TemplateMetadataList":{ + "type":"list", + "member":{"shape":"TemplateMetadata"} + }, + "TemplateName":{"type":"string"}, + "TestRenderTemplateRequest":{ + "type":"structure", + "required":[ + "TemplateName", + "TemplateData" + ], + "members":{ + "TemplateName":{ + "shape":"TemplateName", + "documentation":"

The name of the template that you want to render.

" + }, + "TemplateData":{ + "shape":"TemplateData", + "documentation":"

A list of replacement values to apply to the template. This parameter is a JSON object, typically consisting of key-value pairs in which the keys correspond to replacement tags in the email template.

" + } + } + }, + "TestRenderTemplateResponse":{ + "type":"structure", + "members":{ + "RenderedTemplate":{ + "shape":"RenderedTemplate", + "documentation":"

The complete MIME message rendered by applying the data in the TemplateData parameter to the template specified in the TemplateName parameter.
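A minimal sketch of previewing a template without sending, assuming the generated v2 `TestRenderTemplateRequest`/`testRenderTemplate` names; the template name and data are placeholders.

```java
import software.amazon.awssdk.services.ses.SesClient;
import software.amazon.awssdk.services.ses.model.TestRenderTemplateRequest;

public class TestRenderTemplateExample {
    public static void main(String[] args) {
        try (SesClient ses = SesClient.create()) {
            String rendered = ses.testRenderTemplate(TestRenderTemplateRequest.builder()
                    .templateName("MyTemplate")
                    .templateData("{\"name\":\"Alice\",\"favoriteAnimal\":\"otter\"}")
                    .build())
                    .renderedTemplate();
            // Prints the fully substituted MIME message; nothing is sent.
            System.out.println(rendered);
        }
    }
}
```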

" + } + } + }, + "TextPart":{"type":"string"}, "Timestamp":{"type":"timestamp"}, "TlsPolicy":{ "type":"string", @@ -2908,6 +3755,58 @@ "Optional" ] }, + "TrackingOptions":{ + "type":"structure", + "members":{ + "CustomRedirectDomain":{ + "shape":"CustomRedirectDomain", + "documentation":"

The custom subdomain that will be used to redirect email recipients to the Amazon SES event tracking domain.

" + } + }, + "documentation":"

A domain that is used to redirect email recipients to an Amazon SES-operated domain. This domain captures open and click events generated by Amazon SES emails.

For more information, see Configuring Custom Domains to Handle Open and Click Tracking in the Amazon SES Developer Guide.
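A minimal sketch of attaching a custom tracking domain to a configuration set, assuming the generated v2 `createConfigurationSetTrackingOptions` operation and builders; the subdomain and configuration set name are placeholders, and the subdomain must already be CNAMEd to the SES tracking endpoint for the call to succeed.

```java
import software.amazon.awssdk.services.ses.SesClient;
import software.amazon.awssdk.services.ses.model.CreateConfigurationSetTrackingOptionsRequest;
import software.amazon.awssdk.services.ses.model.TrackingOptions;

public class TrackingOptionsExample {
    public static void main(String[] args) {
        try (SesClient ses = SesClient.create()) {
            // Open and click redirects for this configuration set will use click.example.com.
            ses.createConfigurationSetTrackingOptions(
                    CreateConfigurationSetTrackingOptionsRequest.builder()
                            .configurationSetName("my-first-configuration-set")
                            .trackingOptions(TrackingOptions.builder()
                                    .customRedirectDomain("click.example.com")
                                    .build())
                            .build());
        }
    }
}
```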

" + }, + "TrackingOptionsAlreadyExistsException":{ + "type":"structure", + "members":{ + "ConfigurationSetName":{ + "shape":"ConfigurationSetName", + "documentation":"

Indicates that a TrackingOptions object already exists in the specified configuration set.

" + } + }, + "documentation":"

Indicates that the configuration set you specified already contains a TrackingOptions object.

", + "error":{ + "code":"TrackingOptionsAlreadyExistsException", + "httpStatusCode":400, + "senderFault":true + }, + "exception":true + }, + "TrackingOptionsDoesNotExistException":{ + "type":"structure", + "members":{ + "ConfigurationSetName":{ + "shape":"ConfigurationSetName", + "documentation":"

Indicates that a TrackingOptions object does not exist in the specified configuration set.

" + } + }, + "documentation":"

Indicates that the TrackingOptions object you specified does not exist.

", + "error":{ + "code":"TrackingOptionsDoesNotExistException", + "httpStatusCode":400, + "senderFault":true + }, + "exception":true + }, + "UpdateAccountSendingEnabledRequest":{ + "type":"structure", + "members":{ + "Enabled":{ + "shape":"Enabled", + "documentation":"

Describes whether email sending is enabled or disabled for your Amazon SES account.

" + } + }, + "documentation":"

Represents a request to enable or disable the email sending capabilities for your entire Amazon SES account.

" + }, "UpdateConfigurationSetEventDestinationRequest":{ "type":"structure", "required":[ @@ -2917,7 +3816,7 @@ "members":{ "ConfigurationSetName":{ "shape":"ConfigurationSetName", - "documentation":"

The name of the configuration set that you want to update.

" + "documentation":"

The name of the configuration set that contains the event destination that you want to update.

" }, "EventDestination":{ "shape":"EventDestination", @@ -2932,6 +3831,63 @@ }, "documentation":"

An empty element returned on a successful request.

" }, + "UpdateConfigurationSetReputationMetricsEnabledRequest":{ + "type":"structure", + "required":[ + "ConfigurationSetName", + "Enabled" + ], + "members":{ + "ConfigurationSetName":{ + "shape":"ConfigurationSetName", + "documentation":"

The name of the configuration set that you want to update.

" + }, + "Enabled":{ + "shape":"Enabled", + "documentation":"

Describes whether or not Amazon SES will publish reputation metrics for the configuration set, such as bounce and complaint rates, to Amazon CloudWatch.

" + } + }, + "documentation":"

Represents a request to modify the reputation metric publishing settings for a configuration set.

" + }, + "UpdateConfigurationSetSendingEnabledRequest":{ + "type":"structure", + "required":[ + "ConfigurationSetName", + "Enabled" + ], + "members":{ + "ConfigurationSetName":{ + "shape":"ConfigurationSetName", + "documentation":"

The name of the configuration set that you want to update.

" + }, + "Enabled":{ + "shape":"Enabled", + "documentation":"

Describes whether email sending is enabled or disabled for the configuration set.

" + } + }, + "documentation":"

Represents a request to enable or disable the email sending capabilities for a specific configuration set.

" + }, + "UpdateConfigurationSetTrackingOptionsRequest":{ + "type":"structure", + "required":[ + "ConfigurationSetName", + "TrackingOptions" + ], + "members":{ + "ConfigurationSetName":{ + "shape":"ConfigurationSetName", + "documentation":"

The name of the configuration set for which you want to update the custom tracking domain.

" + }, + "TrackingOptions":{"shape":"TrackingOptions"} + }, + "documentation":"

Represents a request to update the tracking options for a configuration set.

" + }, + "UpdateConfigurationSetTrackingOptionsResponse":{ + "type":"structure", + "members":{ + }, + "documentation":"

An empty element returned on a successful request.

" + }, "UpdateReceiptRuleRequest":{ "type":"structure", "required":[ @@ -2941,7 +3897,7 @@ "members":{ "RuleSetName":{ "shape":"ReceiptRuleSetName", - "documentation":"

The name of the receipt rule set to which the receipt rule belongs.

" + "documentation":"

The name of the receipt rule set that the receipt rule belongs to.

" }, "Rule":{ "shape":"ReceiptRule", @@ -2956,6 +3912,18 @@ }, "documentation":"

An empty element returned on a successful request.

" }, + "UpdateTemplateRequest":{ + "type":"structure", + "required":["Template"], + "members":{ + "Template":{"shape":"Template"} + } + }, + "UpdateTemplateResponse":{ + "type":"structure", + "members":{ + } + }, "VerificationAttributes":{ "type":"map", "key":{"shape":"Identity"}, @@ -3015,7 +3983,7 @@ "members":{ "VerificationToken":{ "shape":"VerificationToken", - "documentation":"

A TXT record that must be placed in the DNS settings for the domain, in order to complete domain verification.

" + "documentation":"

A TXT record that you must place in the DNS settings of the domain to complete domain verification with Amazon SES.

As Amazon SES searches for the TXT record, the domain's verification status is \"Pending\". When Amazon SES detects the record, the domain's verification status changes to \"Success\". If Amazon SES is unable to detect the record within 72 hours, the domain's verification status changes to \"Failed\". In that case, if you still want to verify the domain, you must restart the verification process from the beginning.

" } }, "documentation":"

Returns a TXT record that you must publish to the DNS server of your domain to complete domain verification with Amazon SES.

" @@ -3064,5 +4032,5 @@ "documentation":"

When included in a receipt rule, this action calls Amazon WorkMail and, optionally, publishes a notification to Amazon Simple Notification Service (Amazon SNS). You will typically not use this action directly because Amazon WorkMail adds the rule automatically during its setup procedure.

For information using a receipt rule to call Amazon WorkMail, see the Amazon SES Developer Guide.

" } }, - "documentation":"Amazon Simple Email Service

This is the API Reference for Amazon Simple Email Service (Amazon SES). This documentation is intended to be used in conjunction with the Amazon SES Developer Guide.

For a list of Amazon SES endpoints to use in service requests, see Regions and Amazon SES in the Amazon SES Developer Guide.

" + "documentation":"Amazon Simple Email Service

This is the API Reference for Amazon Simple Email Service (Amazon SES). This documentation is intended to be used in conjunction with the Amazon SES Developer Guide.

For a list of Amazon SES endpoints to use in service requests, see Regions and Amazon SES in the Amazon SES Developer Guide.

" } diff --git a/services/simpleworkflow/src/main/resources/codegen-resources/examples-1.json b/services/simpleworkflow/src/main/resources/codegen-resources/examples-1.json index 4597e13276be..0ea7e3b0bbe9 100644 --- a/services/simpleworkflow/src/main/resources/codegen-resources/examples-1.json +++ b/services/simpleworkflow/src/main/resources/codegen-resources/examples-1.json @@ -1,5 +1,5 @@ { - "version": "1.0", - "examples": { - } + "version": "1.0", + "examples": { + } } diff --git a/services/simpleworkflow/src/main/resources/codegen-resources/service-2.json b/services/simpleworkflow/src/main/resources/codegen-resources/service-2.json index c96f4a4915c1..b9b2aaea6a92 100644 --- a/services/simpleworkflow/src/main/resources/codegen-resources/service-2.json +++ b/services/simpleworkflow/src/main/resources/codegen-resources/service-2.json @@ -1,18 +1,17 @@ { "version":"2.0", "metadata":{ - "uid":"swf-2012-01-25", "apiVersion":"2012-01-25", "endpointPrefix":"swf", "jsonVersion":"1.0", + "protocol":"json", "serviceAbbreviation":"Amazon SWF", "serviceFullName":"Amazon Simple Workflow Service", "signatureVersion":"v4", "targetPrefix":"SimpleWorkflowService", "timestampFormat":"unixTimestamp", - "protocol":"json" + "uid":"swf-2012-01-25" }, - "documentation":"Amazon Simple Workflow Service

The Amazon Simple Workflow Service (Amazon SWF) makes it easy to build applications that use Amazon's cloud to coordinate work across distributed components. In Amazon SWF, a task represents a logical unit of work that is performed by a component of your workflow. Coordinating tasks in a workflow involves managing intertask dependencies, scheduling, and concurrency in accordance with the logical flow of the application.

Amazon SWF gives you full control over implementing tasks and coordinating them without worrying about underlying complexities such as tracking their progress and maintaining their state.

This documentation serves as reference only. For a broader overview of the Amazon SWF programming model, see the Amazon SWF Developer Guide.

", "operations":{ "CountClosedWorkflowExecutions":{ "name":"CountClosedWorkflowExecutions", @@ -21,23 +20,12 @@ "requestUri":"/" }, "input":{"shape":"CountClosedWorkflowExecutionsInput"}, - "output":{ - "shape":"WorkflowExecutionCount", - "documentation":"

Contains the count of workflow executions returned from CountOpenWorkflowExecutions or CountClosedWorkflowExecutions

" - }, + "output":{"shape":"WorkflowExecutionCount"}, "errors":[ - { - "shape":"UnknownResourceFault", - "exception":true, - "documentation":"

Returned when the named resource cannot be found with in the scope of this operation (region or domain). This could happen if the named resource was never created or is no longer available for this operation.

" - }, - { - "shape":"OperationNotPermittedFault", - "exception":true, - "documentation":"

Returned when the caller does not have sufficient permissions to invoke the action.

" - } + {"shape":"UnknownResourceFault"}, + {"shape":"OperationNotPermittedFault"} ], - "documentation":"

Returns the number of closed workflow executions within the given domain that meet the specified filtering criteria.

This operation is eventually consistent. The results are best effort and may not exactly reflect recent updates and changes.

Access Control

You can use IAM policies to control this action's access to Amazon SWF resources as follows:

If the caller does not have sufficient permissions to invoke the action, or the parameter values fall outside the specified constraints, the action fails. The associated event attribute's cause parameter will be set to OPERATION_NOT_PERMITTED. For details and example IAM policies, see Using IAM to Manage Access to Amazon SWF Workflows.

" + "documentation":"

Returns the number of closed workflow executions within the given domain that meet the specified filtering criteria.

This operation is eventually consistent. The results are best effort and may not exactly reflect recent updates and changes.

Access Control

You can use IAM policies to control this action's access to Amazon SWF resources as follows:

If the caller doesn't have sufficient permissions to invoke the action, or the parameter values fall outside the specified constraints, the action fails. The associated event attribute's cause parameter is set to OPERATION_NOT_PERMITTED. For details and example IAM policies, see Using IAM to Manage Access to Amazon SWF Workflows in the Amazon SWF Developer Guide.

" }, "CountOpenWorkflowExecutions":{ "name":"CountOpenWorkflowExecutions", @@ -46,23 +34,12 @@ "requestUri":"/" }, "input":{"shape":"CountOpenWorkflowExecutionsInput"}, - "output":{ - "shape":"WorkflowExecutionCount", - "documentation":"

Contains the count of workflow executions returned from CountOpenWorkflowExecutions or CountClosedWorkflowExecutions

" - }, + "output":{"shape":"WorkflowExecutionCount"}, "errors":[ - { - "shape":"UnknownResourceFault", - "exception":true, - "documentation":"

Returned when the named resource cannot be found with in the scope of this operation (region or domain). This could happen if the named resource was never created or is no longer available for this operation.

" - }, - { - "shape":"OperationNotPermittedFault", - "exception":true, - "documentation":"

Returned when the caller does not have sufficient permissions to invoke the action.

" - } + {"shape":"UnknownResourceFault"}, + {"shape":"OperationNotPermittedFault"} ], - "documentation":"

Returns the number of open workflow executions within the given domain that meet the specified filtering criteria.

This operation is eventually consistent. The results are best effort and may not exactly reflect recent updates and changes.

Access Control

You can use IAM policies to control this action's access to Amazon SWF resources as follows:

If the caller does not have sufficient permissions to invoke the action, or the parameter values fall outside the specified constraints, the action fails. The associated event attribute's cause parameter will be set to OPERATION_NOT_PERMITTED. For details and example IAM policies, see Using IAM to Manage Access to Amazon SWF Workflows.

" + "documentation":"

Returns the number of open workflow executions within the given domain that meet the specified filtering criteria.

This operation is eventually consistent. The results are best effort and may not exactly reflect recent updates and changes.

Access Control

You can use IAM policies to control this action's access to Amazon SWF resources as follows:

If the caller doesn't have sufficient permissions to invoke the action, or the parameter values fall outside the specified constraints, the action fails. The associated event attribute's cause parameter is set to OPERATION_NOT_PERMITTED. For details and example IAM policies, see Using IAM to Manage Access to Amazon SWF Workflows in the Amazon SWF Developer Guide.

" }, "CountPendingActivityTasks":{ "name":"CountPendingActivityTasks", @@ -71,23 +48,12 @@ "requestUri":"/" }, "input":{"shape":"CountPendingActivityTasksInput"}, - "output":{ - "shape":"PendingTaskCount", - "documentation":"

Contains the count of tasks in a task list.

" - }, + "output":{"shape":"PendingTaskCount"}, "errors":[ - { - "shape":"UnknownResourceFault", - "exception":true, - "documentation":"

Returned when the named resource cannot be found with in the scope of this operation (region or domain). This could happen if the named resource was never created or is no longer available for this operation.

" - }, - { - "shape":"OperationNotPermittedFault", - "exception":true, - "documentation":"

Returned when the caller does not have sufficient permissions to invoke the action.

" - } + {"shape":"UnknownResourceFault"}, + {"shape":"OperationNotPermittedFault"} ], - "documentation":"

Returns the estimated number of activity tasks in the specified task list. The count returned is an approximation and is not guaranteed to be exact. If you specify a task list that no activity task was ever scheduled in then 0 will be returned.

Access Control

You can use IAM policies to control this action's access to Amazon SWF resources as follows:

If the caller does not have sufficient permissions to invoke the action, or the parameter values fall outside the specified constraints, the action fails. The associated event attribute's cause parameter will be set to OPERATION_NOT_PERMITTED. For details and example IAM policies, see Using IAM to Manage Access to Amazon SWF Workflows.

" + "documentation":"

Returns the estimated number of activity tasks in the specified task list. The count returned is an approximation and isn't guaranteed to be exact. If you specify a task list that no activity task was ever scheduled in, then 0 is returned.

Access Control

You can use IAM policies to control this action's access to Amazon SWF resources as follows:

If the caller doesn't have sufficient permissions to invoke the action, or the parameter values fall outside the specified constraints, the action fails. The associated event attribute's cause parameter is set to OPERATION_NOT_PERMITTED. For details and example IAM policies, see Using IAM to Manage Access to Amazon SWF Workflows in the Amazon SWF Developer Guide.

" }, "CountPendingDecisionTasks":{ "name":"CountPendingDecisionTasks", @@ -96,23 +62,12 @@ "requestUri":"/" }, "input":{"shape":"CountPendingDecisionTasksInput"}, - "output":{ - "shape":"PendingTaskCount", - "documentation":"

Contains the count of tasks in a task list.

" - }, + "output":{"shape":"PendingTaskCount"}, "errors":[ - { - "shape":"UnknownResourceFault", - "exception":true, - "documentation":"

Returned when the named resource cannot be found with in the scope of this operation (region or domain). This could happen if the named resource was never created or is no longer available for this operation.

" - }, - { - "shape":"OperationNotPermittedFault", - "exception":true, - "documentation":"

Returned when the caller does not have sufficient permissions to invoke the action.

" - } + {"shape":"UnknownResourceFault"}, + {"shape":"OperationNotPermittedFault"} ], - "documentation":"

Returns the estimated number of decision tasks in the specified task list. The count returned is an approximation and is not guaranteed to be exact. If you specify a task list that no decision task was ever scheduled in then 0 will be returned.

Access Control

You can use IAM policies to control this action's access to Amazon SWF resources as follows:

If the caller does not have sufficient permissions to invoke the action, or the parameter values fall outside the specified constraints, the action fails. The associated event attribute's cause parameter will be set to OPERATION_NOT_PERMITTED. For details and example IAM policies, see Using IAM to Manage Access to Amazon SWF Workflows.

" + "documentation":"

Returns the estimated number of decision tasks in the specified task list. The count returned is an approximation and isn't guaranteed to be exact. If you specify a task list that no decision task was ever scheduled in, then 0 is returned.

Access Control

You can use IAM policies to control this action's access to Amazon SWF resources as follows:

If the caller doesn't have sufficient permissions to invoke the action, or the parameter values fall outside the specified constraints, the action fails. The associated event attribute's cause parameter is set to OPERATION_NOT_PERMITTED. For details and example IAM policies, see Using IAM to Manage Access to Amazon SWF Workflows in the Amazon SWF Developer Guide.

" }, "DeprecateActivityType":{ "name":"DeprecateActivityType", @@ -122,23 +77,11 @@ }, "input":{"shape":"DeprecateActivityTypeInput"}, "errors":[ - { - "shape":"UnknownResourceFault", - "exception":true, - "documentation":"

Returned when the named resource cannot be found with in the scope of this operation (region or domain). This could happen if the named resource was never created or is no longer available for this operation.

" - }, - { - "shape":"TypeDeprecatedFault", - "exception":true, - "documentation":"

Returned when the specified activity or workflow type was already deprecated.

" - }, - { - "shape":"OperationNotPermittedFault", - "exception":true, - "documentation":"

Returned when the caller does not have sufficient permissions to invoke the action.

" - } + {"shape":"UnknownResourceFault"}, + {"shape":"TypeDeprecatedFault"}, + {"shape":"OperationNotPermittedFault"} ], - "documentation":"

Deprecates the specified activity type. After an activity type has been deprecated, you cannot create new tasks of that activity type. Tasks of this type that were scheduled before the type was deprecated will continue to run.

This operation is eventually consistent. The results are best effort and may not exactly reflect recent updates and changes.

Access Control

You can use IAM policies to control this action's access to Amazon SWF resources as follows:

If the caller does not have sufficient permissions to invoke the action, or the parameter values fall outside the specified constraints, the action fails. The associated event attribute's cause parameter will be set to OPERATION_NOT_PERMITTED. For details and example IAM policies, see Using IAM to Manage Access to Amazon SWF Workflows.

" + "documentation":"

Deprecates the specified activity type. After an activity type has been deprecated, you cannot create new tasks of that activity type. Tasks of this type that were scheduled before the type was deprecated continue to run.

This operation is eventually consistent. The results are best effort and may not exactly reflect recent updates and changes.

Access Control

You can use IAM policies to control this action's access to Amazon SWF resources as follows:

If the caller doesn't have sufficient permissions to invoke the action, or the parameter values fall outside the specified constraints, the action fails. The associated event attribute's cause parameter is set to OPERATION_NOT_PERMITTED. For details and example IAM policies, see Using IAM to Manage Access to Amazon SWF Workflows in the Amazon SWF Developer Guide.

" }, "DeprecateDomain":{ "name":"DeprecateDomain", @@ -148,23 +91,11 @@ }, "input":{"shape":"DeprecateDomainInput"}, "errors":[ - { - "shape":"UnknownResourceFault", - "exception":true, - "documentation":"

Returned when the named resource cannot be found with in the scope of this operation (region or domain). This could happen if the named resource was never created or is no longer available for this operation.

" - }, - { - "shape":"DomainDeprecatedFault", - "exception":true, - "documentation":"

Returned when the specified domain has been deprecated.

" - }, - { - "shape":"OperationNotPermittedFault", - "exception":true, - "documentation":"

Returned when the caller does not have sufficient permissions to invoke the action.

" - } + {"shape":"UnknownResourceFault"}, + {"shape":"DomainDeprecatedFault"}, + {"shape":"OperationNotPermittedFault"} ], - "documentation":"

Deprecates the specified domain. After a domain has been deprecated it cannot be used to create new workflow executions or register new types. However, you can still use visibility actions on this domain. Deprecating a domain also deprecates all activity and workflow types registered in the domain. Executions that were started before the domain was deprecated will continue to run.

This operation is eventually consistent. The results are best effort and may not exactly reflect recent updates and changes.

Access Control

You can use IAM policies to control this action's access to Amazon SWF resources as follows:

If the caller does not have sufficient permissions to invoke the action, or the parameter values fall outside the specified constraints, the action fails. The associated event attribute's cause parameter will be set to OPERATION_NOT_PERMITTED. For details and example IAM policies, see Using IAM to Manage Access to Amazon SWF Workflows.

" + "documentation":"

Deprecates the specified domain. After a domain has been deprecated, it cannot be used to create new workflow executions or register new types. However, you can still use visibility actions on this domain. Deprecating a domain also deprecates all activity and workflow types registered in the domain. Executions that were started before the domain was deprecated continue to run.

This operation is eventually consistent. The results are best effort and may not exactly reflect recent updates and changes.

Access Control

You can use IAM policies to control this action's access to Amazon SWF resources as follows:

If the caller doesn't have sufficient permissions to invoke the action, or the parameter values fall outside the specified constraints, the action fails. The associated event attribute's cause parameter is set to OPERATION_NOT_PERMITTED. For details and example IAM policies, see Using IAM to Manage Access to Amazon SWF Workflows in the Amazon SWF Developer Guide.

" }, "DeprecateWorkflowType":{ "name":"DeprecateWorkflowType", @@ -174,23 +105,11 @@ }, "input":{"shape":"DeprecateWorkflowTypeInput"}, "errors":[ - { - "shape":"UnknownResourceFault", - "exception":true, - "documentation":"

Returned when the named resource cannot be found with in the scope of this operation (region or domain). This could happen if the named resource was never created or is no longer available for this operation.

" - }, - { - "shape":"TypeDeprecatedFault", - "exception":true, - "documentation":"

Returned when the specified activity or workflow type was already deprecated.

" - }, - { - "shape":"OperationNotPermittedFault", - "exception":true, - "documentation":"

Returned when the caller does not have sufficient permissions to invoke the action.

" - } + {"shape":"UnknownResourceFault"}, + {"shape":"TypeDeprecatedFault"}, + {"shape":"OperationNotPermittedFault"} ], - "documentation":"

Deprecates the specified workflow type. After a workflow type has been deprecated, you cannot create new executions of that type. Executions that were started before the type was deprecated will continue to run. A deprecated workflow type may still be used when calling visibility actions.

This operation is eventually consistent. The results are best effort and may not exactly reflect recent updates and changes.

Access Control

You can use IAM policies to control this action's access to Amazon SWF resources as follows:

If the caller does not have sufficient permissions to invoke the action, or the parameter values fall outside the specified constraints, the action fails. The associated event attribute's cause parameter will be set to OPERATION_NOT_PERMITTED. For details and example IAM policies, see Using IAM to Manage Access to Amazon SWF Workflows.

" + "documentation":"

Deprecates the specified workflow type. After a workflow type has been deprecated, you cannot create new executions of that type. Executions that were started before the type was deprecated continue to run. A deprecated workflow type may still be used when calling visibility actions.

This operation is eventually consistent. The results are best effort and may not exactly reflect recent updates and changes.

Access Control

You can use IAM policies to control this action's access to Amazon SWF resources as follows:

If the caller doesn't have sufficient permissions to invoke the action, or the parameter values fall outside the specified constraints, the action fails. The associated event attribute's cause parameter is set to OPERATION_NOT_PERMITTED. For details and example IAM policies, see Using IAM to Manage Access to Amazon SWF Workflows in the Amazon SWF Developer Guide.

" }, "DescribeActivityType":{ "name":"DescribeActivityType", @@ -199,23 +118,12 @@ "requestUri":"/" }, "input":{"shape":"DescribeActivityTypeInput"}, - "output":{ - "shape":"ActivityTypeDetail", - "documentation":"

Detailed information about an activity type.

" - }, + "output":{"shape":"ActivityTypeDetail"}, "errors":[ - { - "shape":"UnknownResourceFault", - "exception":true, - "documentation":"

Returned when the named resource cannot be found with in the scope of this operation (region or domain). This could happen if the named resource was never created or is no longer available for this operation.

" - }, - { - "shape":"OperationNotPermittedFault", - "exception":true, - "documentation":"

Returned when the caller does not have sufficient permissions to invoke the action.

" - } + {"shape":"UnknownResourceFault"}, + {"shape":"OperationNotPermittedFault"} ], - "documentation":"

Returns information about the specified activity type. This includes configuration settings provided when the type was registered and other general information about the type.

Access Control

You can use IAM policies to control this action's access to Amazon SWF resources as follows:

If the caller does not have sufficient permissions to invoke the action, or the parameter values fall outside the specified constraints, the action fails. The associated event attribute's cause parameter will be set to OPERATION_NOT_PERMITTED. For details and example IAM policies, see Using IAM to Manage Access to Amazon SWF Workflows.

" + "documentation":"

Returns information about the specified activity type. This includes configuration settings provided when the type was registered and other general information about the type.

Access Control

You can use IAM policies to control this action's access to Amazon SWF resources as follows:

If the caller doesn't have sufficient permissions to invoke the action, or the parameter values fall outside the specified constraints, the action fails. The associated event attribute's cause parameter is set to OPERATION_NOT_PERMITTED. For details and example IAM policies, see Using IAM to Manage Access to Amazon SWF Workflows in the Amazon SWF Developer Guide.

" }, "DescribeDomain":{ "name":"DescribeDomain", @@ -224,23 +132,12 @@ "requestUri":"/" }, "input":{"shape":"DescribeDomainInput"}, - "output":{ - "shape":"DomainDetail", - "documentation":"

Contains details of a domain.

" - }, + "output":{"shape":"DomainDetail"}, "errors":[ - { - "shape":"UnknownResourceFault", - "exception":true, - "documentation":"

Returned when the named resource cannot be found with in the scope of this operation (region or domain). This could happen if the named resource was never created or is no longer available for this operation.

" - }, - { - "shape":"OperationNotPermittedFault", - "exception":true, - "documentation":"

Returned when the caller does not have sufficient permissions to invoke the action.

" - } + {"shape":"UnknownResourceFault"}, + {"shape":"OperationNotPermittedFault"} ], - "documentation":"

Returns information about the specified domain, including description and status.

Access Control

You can use IAM policies to control this action's access to Amazon SWF resources as follows:

If the caller does not have sufficient permissions to invoke the action, or the parameter values fall outside the specified constraints, the action fails. The associated event attribute's cause parameter will be set to OPERATION_NOT_PERMITTED. For details and example IAM policies, see Using IAM to Manage Access to Amazon SWF Workflows.

" + "documentation":"

Returns information about the specified domain, including description and status.

Access Control

You can use IAM policies to control this action's access to Amazon SWF resources as follows:

If the caller doesn't have sufficient permissions to invoke the action, or the parameter values fall outside the specified constraints, the action fails. The associated event attribute's cause parameter is set to OPERATION_NOT_PERMITTED. For details and example IAM policies, see Using IAM to Manage Access to Amazon SWF Workflows in the Amazon SWF Developer Guide.

" }, "DescribeWorkflowExecution":{ "name":"DescribeWorkflowExecution", @@ -249,23 +146,12 @@ "requestUri":"/" }, "input":{"shape":"DescribeWorkflowExecutionInput"}, - "output":{ - "shape":"WorkflowExecutionDetail", - "documentation":"

Contains details about a workflow execution.

" - }, + "output":{"shape":"WorkflowExecutionDetail"}, "errors":[ - { - "shape":"UnknownResourceFault", - "exception":true, - "documentation":"

Returned when the named resource cannot be found with in the scope of this operation (region or domain). This could happen if the named resource was never created or is no longer available for this operation.

" - }, - { - "shape":"OperationNotPermittedFault", - "exception":true, - "documentation":"

Returned when the caller does not have sufficient permissions to invoke the action.

" - } + {"shape":"UnknownResourceFault"}, + {"shape":"OperationNotPermittedFault"} ], - "documentation":"

Returns information about the specified workflow execution including its type and some statistics.

This operation is eventually consistent. The results are best effort and may not exactly reflect recent updates and changes.

Access Control

You can use IAM policies to control this action's access to Amazon SWF resources as follows:

If the caller does not have sufficient permissions to invoke the action, or the parameter values fall outside the specified constraints, the action fails. The associated event attribute's cause parameter will be set to OPERATION_NOT_PERMITTED. For details and example IAM policies, see Using IAM to Manage Access to Amazon SWF Workflows.

" + "documentation":"

Returns information about the specified workflow execution including its type and some statistics.

This operation is eventually consistent. The results are best effort and may not exactly reflect recent updates and changes.

Access Control

You can use IAM policies to control this action's access to Amazon SWF resources as follows:

If the caller doesn't have sufficient permissions to invoke the action, or the parameter values fall outside the specified constraints, the action fails. The associated event attribute's cause parameter is set to OPERATION_NOT_PERMITTED. For details and example IAM policies, see Using IAM to Manage Access to Amazon SWF Workflows in the Amazon SWF Developer Guide.

" }, "DescribeWorkflowType":{ "name":"DescribeWorkflowType", @@ -274,23 +160,12 @@ "requestUri":"/" }, "input":{"shape":"DescribeWorkflowTypeInput"}, - "output":{ - "shape":"WorkflowTypeDetail", - "documentation":"

Contains details about a workflow type.

" - }, + "output":{"shape":"WorkflowTypeDetail"}, "errors":[ - { - "shape":"UnknownResourceFault", - "exception":true, - "documentation":"

Returned when the named resource cannot be found with in the scope of this operation (region or domain). This could happen if the named resource was never created or is no longer available for this operation.

" - }, - { - "shape":"OperationNotPermittedFault", - "exception":true, - "documentation":"

Returned when the caller does not have sufficient permissions to invoke the action.

" - } + {"shape":"UnknownResourceFault"}, + {"shape":"OperationNotPermittedFault"} ], - "documentation":"

Returns information about the specified workflow type. This includes configuration settings specified when the type was registered and other information such as creation date, current status, and so on.

Access Control

You can use IAM policies to control this action's access to Amazon SWF resources as follows:

If the caller does not have sufficient permissions to invoke the action, or the parameter values fall outside the specified constraints, the action fails. The associated event attribute's cause parameter will be set to OPERATION_NOT_PERMITTED. For details and example IAM policies, see Using IAM to Manage Access to Amazon SWF Workflows.

" + "documentation":"

Returns information about the specified workflow type. This includes configuration settings specified when the type was registered and other information such as creation date, current status, etc.

Access Control

You can use IAM policies to control this action's access to Amazon SWF resources as follows:

If the caller doesn't have sufficient permissions to invoke the action, or the parameter values fall outside the specified constraints, the action fails. The associated event attribute's cause parameter is set to OPERATION_NOT_PERMITTED. For details and example IAM policies, see Using IAM to Manage Access to Amazon SWF Workflows in the Amazon SWF Developer Guide.

" }, "GetWorkflowExecutionHistory":{ "name":"GetWorkflowExecutionHistory", @@ -299,23 +174,12 @@ "requestUri":"/" }, "input":{"shape":"GetWorkflowExecutionHistoryInput"}, - "output":{ - "shape":"History", - "documentation":"

Paginated representation of a workflow history for a workflow execution. This is the up to date, complete and authoritative record of the events related to all tasks and events in the life of the workflow execution.

" - }, + "output":{"shape":"History"}, "errors":[ - { - "shape":"UnknownResourceFault", - "exception":true, - "documentation":"

Returned when the named resource cannot be found with in the scope of this operation (region or domain). This could happen if the named resource was never created or is no longer available for this operation.

" - }, - { - "shape":"OperationNotPermittedFault", - "exception":true, - "documentation":"

Returned when the caller does not have sufficient permissions to invoke the action.

" - } + {"shape":"UnknownResourceFault"}, + {"shape":"OperationNotPermittedFault"} ], - "documentation":"

Returns the history of the specified workflow execution. The results may be split into multiple pages. To retrieve subsequent pages, make the call again using the nextPageToken returned by the initial call.

This operation is eventually consistent. The results are best effort and may not exactly reflect recent updates and changes.

Access Control

You can use IAM policies to control this action's access to Amazon SWF resources as follows:

If the caller does not have sufficient permissions to invoke the action, or the parameter values fall outside the specified constraints, the action fails. The associated event attribute's cause parameter will be set to OPERATION_NOT_PERMITTED. For details and example IAM policies, see Using IAM to Manage Access to Amazon SWF Workflows.

" + "documentation":"

Returns the history of the specified workflow execution. The results may be split into multiple pages. To retrieve subsequent pages, make the call again using the nextPageToken returned by the initial call.

This operation is eventually consistent. The results are best effort and may not exactly reflect recent updates and changes.

Access Control

You can use IAM policies to control this action's access to Amazon SWF resources as follows:

If the caller doesn't have sufficient permissions to invoke the action, or the parameter values fall outside the specified constraints, the action fails. The associated event attribute's cause parameter is set to OPERATION_NOT_PERMITTED. For details and example IAM policies, see Using IAM to Manage Access to Amazon SWF Workflows in the Amazon SWF Developer Guide.

" }, "ListActivityTypes":{ "name":"ListActivityTypes", @@ -324,23 +188,12 @@ "requestUri":"/" }, "input":{"shape":"ListActivityTypesInput"}, - "output":{ - "shape":"ActivityTypeInfos", - "documentation":"

Contains a paginated list of activity type information structures.

" - }, + "output":{"shape":"ActivityTypeInfos"}, "errors":[ - { - "shape":"OperationNotPermittedFault", - "exception":true, - "documentation":"

Returned when the caller does not have sufficient permissions to invoke the action.

" - }, - { - "shape":"UnknownResourceFault", - "exception":true, - "documentation":"

Returned when the named resource cannot be found with in the scope of this operation (region or domain). This could happen if the named resource was never created or is no longer available for this operation.

" - } + {"shape":"OperationNotPermittedFault"}, + {"shape":"UnknownResourceFault"} ], - "documentation":"

Returns information about all activities registered in the specified domain that match the specified name and registration status. The result includes information like creation date, current status of the activity, etc. The results may be split into multiple pages. To retrieve subsequent pages, make the call again using the nextPageToken returned by the initial call.

Access Control

You can use IAM policies to control this action's access to Amazon SWF resources as follows:

If the caller does not have sufficient permissions to invoke the action, or the parameter values fall outside the specified constraints, the action fails. The associated event attribute's cause parameter will be set to OPERATION_NOT_PERMITTED. For details and example IAM policies, see Using IAM to Manage Access to Amazon SWF Workflows.

" + "documentation":"

Returns information about all activities registered in the specified domain that match the specified name and registration status. The result includes information like creation date, current status of the activity, etc. The results may be split into multiple pages. To retrieve subsequent pages, make the call again using the nextPageToken returned by the initial call.

Access Control

You can use IAM policies to control this action's access to Amazon SWF resources as follows:

If the caller doesn't have sufficient permissions to invoke the action, or the parameter values fall outside the specified constraints, the action fails. The associated event attribute's cause parameter is set to OPERATION_NOT_PERMITTED. For details and example IAM policies, see Using IAM to Manage Access to Amazon SWF Workflows in the Amazon SWF Developer Guide.

" }, "ListClosedWorkflowExecutions":{ "name":"ListClosedWorkflowExecutions", @@ -349,23 +202,12 @@ "requestUri":"/" }, "input":{"shape":"ListClosedWorkflowExecutionsInput"}, - "output":{ - "shape":"WorkflowExecutionInfos", - "documentation":"

Contains a paginated list of information about workflow executions.

" - }, + "output":{"shape":"WorkflowExecutionInfos"}, "errors":[ - { - "shape":"UnknownResourceFault", - "exception":true, - "documentation":"

Returned when the named resource cannot be found with in the scope of this operation (region or domain). This could happen if the named resource was never created or is no longer available for this operation.

" - }, - { - "shape":"OperationNotPermittedFault", - "exception":true, - "documentation":"

Returned when the caller does not have sufficient permissions to invoke the action.

" - } + {"shape":"UnknownResourceFault"}, + {"shape":"OperationNotPermittedFault"} ], - "documentation":"

Returns a list of closed workflow executions in the specified domain that meet the filtering criteria. The results may be split into multiple pages. To retrieve subsequent pages, make the call again using the nextPageToken returned by the initial call.

This operation is eventually consistent. The results are best effort and may not exactly reflect recent updates and changes.

Access Control

You can use IAM policies to control this action's access to Amazon SWF resources as follows:

If the caller does not have sufficient permissions to invoke the action, or the parameter values fall outside the specified constraints, the action fails. The associated event attribute's cause parameter will be set to OPERATION_NOT_PERMITTED. For details and example IAM policies, see Using IAM to Manage Access to Amazon SWF Workflows.

" + "documentation":"

Returns a list of closed workflow executions in the specified domain that meet the filtering criteria. The results may be split into multiple pages. To retrieve subsequent pages, make the call again using the nextPageToken returned by the initial call.

This operation is eventually consistent. The results are best effort and may not exactly reflect recent updates and changes.

Access Control

You can use IAM policies to control this action's access to Amazon SWF resources as follows:

If the caller doesn't have sufficient permissions to invoke the action, or the parameter values fall outside the specified constraints, the action fails. The associated event attribute's cause parameter is set to OPERATION_NOT_PERMITTED. For details and example IAM policies, see Using IAM to Manage Access to Amazon SWF Workflows in the Amazon SWF Developer Guide.

" }, "ListDomains":{ "name":"ListDomains", @@ -374,18 +216,11 @@ "requestUri":"/" }, "input":{"shape":"ListDomainsInput"}, - "output":{ - "shape":"DomainInfos", - "documentation":"

Contains a paginated collection of DomainInfo structures.

" - }, + "output":{"shape":"DomainInfos"}, "errors":[ - { - "shape":"OperationNotPermittedFault", - "exception":true, - "documentation":"

Returned when the caller does not have sufficient permissions to invoke the action.

" - } + {"shape":"OperationNotPermittedFault"} ], - "documentation":"

Returns the list of domains registered in the account. The results may be split into multiple pages. To retrieve subsequent pages, make the call again using the nextPageToken returned by the initial call.

This operation is eventually consistent. The results are best effort and may not exactly reflect recent updates and changes.

Access Control

You can use IAM policies to control this action's access to Amazon SWF resources as follows:

If the caller does not have sufficient permissions to invoke the action, or the parameter values fall outside the specified constraints, the action fails. The associated event attribute's cause parameter will be set to OPERATION_NOT_PERMITTED. For details and example IAM policies, see Using IAM to Manage Access to Amazon SWF Workflows.

" + "documentation":"

Returns the list of domains registered in the account. The results may be split into multiple pages. To retrieve subsequent pages, make the call again using the nextPageToken returned by the initial call.

This operation is eventually consistent. The results are best effort and may not exactly reflect recent updates and changes.

Access Control

You can use IAM policies to control this action's access to Amazon SWF resources as follows:

If the caller doesn't have sufficient permissions to invoke the action, or the parameter values fall outside the specified constraints, the action fails. The associated event attribute's cause parameter is set to OPERATION_NOT_PERMITTED. For details and example IAM policies, see Using IAM to Manage Access to Amazon SWF Workflows in the Amazon SWF Developer Guide.

" }, "ListOpenWorkflowExecutions":{ "name":"ListOpenWorkflowExecutions", @@ -394,23 +229,12 @@ "requestUri":"/" }, "input":{"shape":"ListOpenWorkflowExecutionsInput"}, - "output":{ - "shape":"WorkflowExecutionInfos", - "documentation":"

Contains a paginated list of information about workflow executions.

" - }, + "output":{"shape":"WorkflowExecutionInfos"}, "errors":[ - { - "shape":"UnknownResourceFault", - "exception":true, - "documentation":"

Returned when the named resource cannot be found with in the scope of this operation (region or domain). This could happen if the named resource was never created or is no longer available for this operation.

" - }, - { - "shape":"OperationNotPermittedFault", - "exception":true, - "documentation":"

Returned when the caller does not have sufficient permissions to invoke the action.

" - } + {"shape":"UnknownResourceFault"}, + {"shape":"OperationNotPermittedFault"} ], - "documentation":"

Returns a list of open workflow executions in the specified domain that meet the filtering criteria. The results may be split into multiple pages. To retrieve subsequent pages, make the call again using the nextPageToken returned by the initial call.

This operation is eventually consistent. The results are best effort and may not exactly reflect recent updates and changes.

Access Control

You can use IAM policies to control this action's access to Amazon SWF resources as follows:

If the caller does not have sufficient permissions to invoke the action, or the parameter values fall outside the specified constraints, the action fails. The associated event attribute's cause parameter will be set to OPERATION_NOT_PERMITTED. For details and example IAM policies, see Using IAM to Manage Access to Amazon SWF Workflows.

" + "documentation":"

Returns a list of open workflow executions in the specified domain that meet the filtering criteria. The results may be split into multiple pages. To retrieve subsequent pages, make the call again using the nextPageToken returned by the initial call.

This operation is eventually consistent. The results are best effort and may not exactly reflect recent updates and changes.

Access Control

You can use IAM policies to control this action's access to Amazon SWF resources as follows:

If the caller doesn't have sufficient permissions to invoke the action, or the parameter values fall outside the specified constraints, the action fails. The associated event attribute's cause parameter is set to OPERATION_NOT_PERMITTED. For details and example IAM policies, see Using IAM to Manage Access to Amazon SWF Workflows in the Amazon SWF Developer Guide.

" }, "ListWorkflowTypes":{ "name":"ListWorkflowTypes", @@ -419,23 +243,12 @@ "requestUri":"/" }, "input":{"shape":"ListWorkflowTypesInput"}, - "output":{ - "shape":"WorkflowTypeInfos", - "documentation":"

Contains a paginated list of information structures about workflow types.

" - }, + "output":{"shape":"WorkflowTypeInfos"}, "errors":[ - { - "shape":"OperationNotPermittedFault", - "exception":true, - "documentation":"

Returned when the caller does not have sufficient permissions to invoke the action.

" - }, - { - "shape":"UnknownResourceFault", - "exception":true, - "documentation":"

Returned when the named resource cannot be found with in the scope of this operation (region or domain). This could happen if the named resource was never created or is no longer available for this operation.

" - } + {"shape":"OperationNotPermittedFault"}, + {"shape":"UnknownResourceFault"} ], - "documentation":"

Returns information about workflow types in the specified domain. The results may be split into multiple pages that can be retrieved by making the call repeatedly.

Access Control

You can use IAM policies to control this action's access to Amazon SWF resources as follows:

If the caller does not have sufficient permissions to invoke the action, or the parameter values fall outside the specified constraints, the action fails. The associated event attribute's cause parameter will be set to OPERATION_NOT_PERMITTED. For details and example IAM policies, see Using IAM to Manage Access to Amazon SWF Workflows.

" + "documentation":"

Returns information about workflow types in the specified domain. The results may be split into multiple pages that can be retrieved by making the call repeatedly.

Access Control

You can use IAM policies to control this action's access to Amazon SWF resources as follows:

If the caller doesn't have sufficient permissions to invoke the action, or the parameter values fall outside the specified constraints, the action fails. The associated event attribute's cause parameter is set to OPERATION_NOT_PERMITTED. For details and example IAM policies, see Using IAM to Manage Access to Amazon SWF Workflows in the Amazon SWF Developer Guide.

" }, "PollForActivityTask":{ "name":"PollForActivityTask", @@ -444,28 +257,13 @@ "requestUri":"/" }, "input":{"shape":"PollForActivityTaskInput"}, - "output":{ - "shape":"ActivityTask", - "documentation":"

Unit of work sent to an activity worker.

" - }, + "output":{"shape":"ActivityTask"}, "errors":[ - { - "shape":"UnknownResourceFault", - "exception":true, - "documentation":"

Returned when the named resource cannot be found with in the scope of this operation (region or domain). This could happen if the named resource was never created or is no longer available for this operation.

" - }, - { - "shape":"OperationNotPermittedFault", - "exception":true, - "documentation":"

Returned when the caller does not have sufficient permissions to invoke the action.

" - }, - { - "shape":"LimitExceededFault", - "exception":true, - "documentation":"

Returned by any operation if a system imposed limitation has been reached. To address this fault you should either clean up unused resources or increase the limit by contacting AWS.

" - } + {"shape":"UnknownResourceFault"}, + {"shape":"OperationNotPermittedFault"}, + {"shape":"LimitExceededFault"} ], - "documentation":"

Used by workers to get an ActivityTask from the specified activity taskList. This initiates a long poll, where the service holds the HTTP connection open and responds as soon as a task becomes available. The maximum time the service holds on to the request before responding is 60 seconds. If no task is available within 60 seconds, the poll will return an empty result. An empty result, in this context, means that an ActivityTask is returned, but that the value of taskToken is an empty string. If a task is returned, the worker should use its type to identify and process it correctly.

Workers should set their client side socket timeout to at least 70 seconds (10 seconds higher than the maximum time service may hold the poll request).

Access Control

You can use IAM policies to control this action's access to Amazon SWF resources as follows:

If the caller does not have sufficient permissions to invoke the action, or the parameter values fall outside the specified constraints, the action fails. The associated event attribute's cause parameter will be set to OPERATION_NOT_PERMITTED. For details and example IAM policies, see Using IAM to Manage Access to Amazon SWF Workflows.

" + "documentation":"

Used by workers to get an ActivityTask from the specified activity taskList. This initiates a long poll, where the service holds the HTTP connection open and responds as soon as a task becomes available. The maximum time the service holds on to the request before responding is 60 seconds. If no task is available within 60 seconds, the poll returns an empty result. An empty result, in this context, means that an ActivityTask is returned, but that the value of taskToken is an empty string. If a task is returned, the worker should use its type to identify and process it correctly.

Workers should set their client-side socket timeout to at least 70 seconds (10 seconds higher than the maximum time the service may hold the poll request).

Access Control

You can use IAM policies to control this action's access to Amazon SWF resources as follows:

If the caller doesn't have sufficient permissions to invoke the action, or the parameter values fall outside the specified constraints, the action fails. The associated event attribute's cause parameter is set to OPERATION_NOT_PERMITTED. For details and example IAM policies, see Using IAM to Manage Access to Amazon SWF Workflows in the Amazon SWF Developer Guide.

" }, "PollForDecisionTask":{ "name":"PollForDecisionTask", @@ -474,28 +272,13 @@ "requestUri":"/" }, "input":{"shape":"PollForDecisionTaskInput"}, - "output":{ - "shape":"DecisionTask", - "documentation":"

A structure that represents a decision task. Decision tasks are sent to deciders in order for them to make decisions.

" - }, + "output":{"shape":"DecisionTask"}, "errors":[ - { - "shape":"UnknownResourceFault", - "exception":true, - "documentation":"

Returned when the named resource cannot be found with in the scope of this operation (region or domain). This could happen if the named resource was never created or is no longer available for this operation.

" - }, - { - "shape":"OperationNotPermittedFault", - "exception":true, - "documentation":"

Returned when the caller does not have sufficient permissions to invoke the action.

" - }, - { - "shape":"LimitExceededFault", - "exception":true, - "documentation":"

Returned by any operation if a system imposed limitation has been reached. To address this fault you should either clean up unused resources or increase the limit by contacting AWS.

" - } + {"shape":"UnknownResourceFault"}, + {"shape":"OperationNotPermittedFault"}, + {"shape":"LimitExceededFault"} ], - "documentation":"

Used by deciders to get a DecisionTask from the specified decision taskList. A decision task may be returned for any open workflow execution that is using the specified task list. The task includes a paginated view of the history of the workflow execution. The decider should use the workflow type and the history to determine how to properly handle the task.

This action initiates a long poll, where the service holds the HTTP connection open and responds as soon a task becomes available. If no decision task is available in the specified task list before the timeout of 60 seconds expires, an empty result is returned. An empty result, in this context, means that a DecisionTask is returned, but that the value of taskToken is an empty string.

Deciders should set their client-side socket timeout to at least 70 seconds (10 seconds higher than the timeout). Because the number of workflow history events for a single workflow execution might be very large, the result returned might be split up across a number of pages. To retrieve subsequent pages, make additional calls to PollForDecisionTask using the nextPageToken returned by the initial call. Note that you do not call GetWorkflowExecutionHistory with this nextPageToken. Instead, call PollForDecisionTask again.

Access Control

You can use IAM policies to control this action's access to Amazon SWF resources as follows:

If the caller does not have sufficient permissions to invoke the action, or the parameter values fall outside the specified constraints, the action fails. The associated event attribute's cause parameter will be set to OPERATION_NOT_PERMITTED. For details and example IAM policies, see Using IAM to Manage Access to Amazon SWF Workflows.

" + "documentation":"

Used by deciders to get a DecisionTask from the specified decision taskList. A decision task may be returned for any open workflow execution that is using the specified task list. The task includes a paginated view of the history of the workflow execution. The decider should use the workflow type and the history to determine how to properly handle the task.

This action initiates a long poll, where the service holds the HTTP connection open and responds as soon as a task becomes available. If no decision task is available in the specified task list before the timeout of 60 seconds expires, an empty result is returned. An empty result, in this context, means that a DecisionTask is returned, but that the value of taskToken is an empty string.

Deciders should set their client-side socket timeout to at least 70 seconds (10 seconds higher than the timeout).

Because the number of workflow history events for a single workflow execution might be very large, the result returned might be split up across a number of pages. To retrieve subsequent pages, make additional calls to PollForDecisionTask using the nextPageToken returned by the initial call. Note that you do not call GetWorkflowExecutionHistory with this nextPageToken. Instead, call PollForDecisionTask again.

Access Control

You can use IAM policies to control this action's access to Amazon SWF resources as follows:

If the caller doesn't have sufficient permissions to invoke the action, or the parameter values fall outside the specified constraints, the action fails. The associated event attribute's cause parameter is set to OPERATION_NOT_PERMITTED. For details and example IAM policies, see Using IAM to Manage Access to Amazon SWF Workflows in the Amazon SWF Developer Guide.
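
As a concrete illustration of the long-poll and nextPageToken behavior described above, the following is a minimal decider polling sketch. It assumes the AWS SDK for Java v2 client generated from this model (SwfClient); the domain, task list, and identity values are placeholders.

```java
import java.util.ArrayList;
import java.util.List;

import software.amazon.awssdk.services.swf.SwfClient;
import software.amazon.awssdk.services.swf.model.HistoryEvent;
import software.amazon.awssdk.services.swf.model.PollForDecisionTaskRequest;
import software.amazon.awssdk.services.swf.model.PollForDecisionTaskResponse;
import software.amazon.awssdk.services.swf.model.TaskList;

public class DeciderPollExample {
    public static void main(String[] args) {
        SwfClient swf = SwfClient.create();

        // Long poll: the call blocks for up to 60 seconds waiting for a decision task.
        PollForDecisionTaskResponse task = swf.pollForDecisionTask(PollForDecisionTaskRequest.builder()
                .domain("my-domain")                                       // placeholder domain
                .taskList(TaskList.builder().name("decider-task-list").build())
                .identity("decider-1")
                .build());

        // An empty taskToken means the 60-second poll expired without a task.
        if (task.taskToken() == null || task.taskToken().isEmpty()) {
            return;
        }

        // Page through the workflow history with PollForDecisionTask itself,
        // not GetWorkflowExecutionHistory, as the documentation above notes.
        List<HistoryEvent> history = new ArrayList<>(task.events());
        String nextPageToken = task.nextPageToken();
        while (nextPageToken != null && !nextPageToken.isEmpty()) {
            PollForDecisionTaskResponse page = swf.pollForDecisionTask(PollForDecisionTaskRequest.builder()
                    .domain("my-domain")
                    .taskList(TaskList.builder().name("decider-task-list").build())
                    .nextPageToken(nextPageToken)
                    .build());
            history.addAll(page.events());
            nextPageToken = page.nextPageToken();
        }

        System.out.println("Decision task has " + history.size() + " history events");
    }
}
```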

" }, "RecordActivityTaskHeartbeat":{ "name":"RecordActivityTaskHeartbeat", @@ -504,23 +287,12 @@ "requestUri":"/" }, "input":{"shape":"RecordActivityTaskHeartbeatInput"}, - "output":{ - "shape":"ActivityTaskStatus", - "documentation":"

Status information about an activity task.

" - }, + "output":{"shape":"ActivityTaskStatus"}, "errors":[ - { - "shape":"UnknownResourceFault", - "exception":true, - "documentation":"

Returned when the named resource cannot be found with in the scope of this operation (region or domain). This could happen if the named resource was never created or is no longer available for this operation.

" - }, - { - "shape":"OperationNotPermittedFault", - "exception":true, - "documentation":"

Returned when the caller does not have sufficient permissions to invoke the action.

" - } + {"shape":"UnknownResourceFault"}, + {"shape":"OperationNotPermittedFault"} ], - "documentation":"

Used by activity workers to report to the service that the ActivityTask represented by the specified taskToken is still making progress. The worker can also (optionally) specify details of the progress, for example percent complete, using the details parameter. This action can also be used by the worker as a mechanism to check if cancellation is being requested for the activity task. If a cancellation is being attempted for the specified task, then the boolean cancelRequested flag returned by the service is set to true.

This action resets the taskHeartbeatTimeout clock. The taskHeartbeatTimeout is specified in RegisterActivityType.

This action does not in itself create an event in the workflow execution history. However, if the task times out, the workflow execution history will contain a ActivityTaskTimedOut event that contains the information from the last heartbeat generated by the activity worker.

The taskStartToCloseTimeout of an activity type is the maximum duration of an activity task, regardless of the number of RecordActivityTaskHeartbeat requests received. The taskStartToCloseTimeout is also specified in RegisterActivityType. This operation is only useful for long-lived activities to report liveliness of the task and to determine if a cancellation is being attempted. If the cancelRequested flag returns true, a cancellation is being attempted. If the worker can cancel the activity, it should respond with RespondActivityTaskCanceled. Otherwise, it should ignore the cancellation request.

Access Control

You can use IAM policies to control this action's access to Amazon SWF resources as follows:

If the caller does not have sufficient permissions to invoke the action, or the parameter values fall outside the specified constraints, the action fails. The associated event attribute's cause parameter will be set to OPERATION_NOT_PERMITTED. For details and example IAM policies, see Using IAM to Manage Access to Amazon SWF Workflows.

" + "documentation":"

Used by activity workers to report to the service that the ActivityTask represented by the specified taskToken is still making progress. The worker can also specify details of the progress, for example percent complete, using the details parameter. This action can also be used by the worker as a mechanism to check if cancellation is being requested for the activity task. If a cancellation is being attempted for the specified task, then the boolean cancelRequested flag returned by the service is set to true.

This action resets the taskHeartbeatTimeout clock. The taskHeartbeatTimeout is specified in RegisterActivityType.

This action doesn't in itself create an event in the workflow execution history. However, if the task times out, the workflow execution history contains an ActivityTaskTimedOut event that includes the information from the last heartbeat generated by the activity worker.

The taskStartToCloseTimeout of an activity type is the maximum duration of an activity task, regardless of the number of RecordActivityTaskHeartbeat requests received. The taskStartToCloseTimeout is also specified in RegisterActivityType.

This operation is only useful for long-lived activities to report the liveness of the task and to determine if a cancellation is being attempted.

If the cancelRequested flag returns true, a cancellation is being attempted. If the worker can cancel the activity, it should respond with RespondActivityTaskCanceled. Otherwise, it should ignore the cancellation request.

Access Control

You can use IAM policies to control this action's access to Amazon SWF resources as follows:

If the caller doesn't have sufficient permissions to invoke the action, or the parameter values fall outside the specified constraints, the action fails. The associated event attribute's cause parameter is set to OPERATION_NOT_PERMITTED. For details and example IAM policies, see Using IAM to Manage Access to Amazon SWF Workflows in the Amazon SWF Developer Guide.
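
A minimal sketch of the heartbeat-and-cancellation handshake described above, assuming the AWS SDK for Java v2 SwfClient; the taskToken would come from a previously polled activity task, and the progress-details string is a placeholder.

```java
import software.amazon.awssdk.services.swf.SwfClient;
import software.amazon.awssdk.services.swf.model.RecordActivityTaskHeartbeatRequest;
import software.amazon.awssdk.services.swf.model.RecordActivityTaskHeartbeatResponse;
import software.amazon.awssdk.services.swf.model.RespondActivityTaskCanceledRequest;

public class HeartbeatExample {
    /** Reports progress and honors a cancellation request; returns false if work should stop. */
    static boolean heartbeat(SwfClient swf, String taskToken, int percentComplete) {
        RecordActivityTaskHeartbeatResponse status = swf.recordActivityTaskHeartbeat(
                RecordActivityTaskHeartbeatRequest.builder()
                        .taskToken(taskToken)
                        .details(percentComplete + "% complete")   // optional progress details
                        .build());

        if (Boolean.TRUE.equals(status.cancelRequested())) {
            // Cancellation is being attempted; if the work can be safely abandoned,
            // acknowledge it with RespondActivityTaskCanceled.
            swf.respondActivityTaskCanceled(RespondActivityTaskCanceledRequest.builder()
                    .taskToken(taskToken)
                    .details("canceled at " + percentComplete + "%")
                    .build());
            return false;
        }
        return true;
    }
}
```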

" }, "RegisterActivityType":{ "name":"RegisterActivityType", @@ -530,28 +302,12 @@ }, "input":{"shape":"RegisterActivityTypeInput"}, "errors":[ - { - "shape":"TypeAlreadyExistsFault", - "exception":true, - "documentation":"

Returned if the type already exists in the specified domain. You will get this fault even if the existing type is in deprecated status. You can specify another version if the intent is to create a new distinct version of the type.

" - }, - { - "shape":"LimitExceededFault", - "exception":true, - "documentation":"

Returned by any operation if a system imposed limitation has been reached. To address this fault you should either clean up unused resources or increase the limit by contacting AWS.

" - }, - { - "shape":"UnknownResourceFault", - "exception":true, - "documentation":"

Returned when the named resource cannot be found with in the scope of this operation (region or domain). This could happen if the named resource was never created or is no longer available for this operation.

" - }, - { - "shape":"OperationNotPermittedFault", - "exception":true, - "documentation":"

Returned when the caller does not have sufficient permissions to invoke the action.

" - } + {"shape":"TypeAlreadyExistsFault"}, + {"shape":"LimitExceededFault"}, + {"shape":"UnknownResourceFault"}, + {"shape":"OperationNotPermittedFault"} ], - "documentation":"

Registers a new activity type along with its configuration settings in the specified domain.

A TypeAlreadyExists fault is returned if the type already exists in the domain. You cannot change any configuration settings of the type after its registration, and it must be registered as a new version.

Access Control

You can use IAM policies to control this action's access to Amazon SWF resources as follows:

If the caller does not have sufficient permissions to invoke the action, or the parameter values fall outside the specified constraints, the action fails. The associated event attribute's cause parameter will be set to OPERATION_NOT_PERMITTED. For details and example IAM policies, see Using IAM to Manage Access to Amazon SWF Workflows.

" + "documentation":"

Registers a new activity type along with its configuration settings in the specified domain.

A TypeAlreadyExists fault is returned if the type already exists in the domain. You cannot change any configuration settings of the type after its registration, and it must be registered as a new version.

Access Control

You can use IAM policies to control this action's access to Amazon SWF resources as follows:

If the caller doesn't have sufficient permissions to invoke the action, or the parameter values fall outside the specified constraints, the action fails. The associated event attribute's cause parameter is set to OPERATION_NOT_PERMITTED. For details and example IAM policies, see Using IAM to Manage Access to Amazon SWF Workflows in the Amazon SWF Developer Guide.
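
Because configuration settings are frozen at registration, changed settings go in under a new version string. A sketch assuming the AWS SDK for Java v2 SwfClient; the domain, type name, version, and timeout values are placeholders, and re-registering the same name/version pair would return the TypeAlreadyExists fault described above.

```java
import software.amazon.awssdk.services.swf.SwfClient;
import software.amazon.awssdk.services.swf.model.RegisterActivityTypeRequest;
import software.amazon.awssdk.services.swf.model.TaskList;

public class RegisterActivityTypeExample {
    public static void main(String[] args) {
        SwfClient swf = SwfClient.create();

        swf.registerActivityType(RegisterActivityTypeRequest.builder()
                .domain("my-domain")                       // placeholder domain
                .name("ProcessOrder")                      // name + version must be unique in the domain
                .version("1.1")                            // bump the version to change settings
                .defaultTaskList(TaskList.builder().name("activity-task-list").build())
                .defaultTaskStartToCloseTimeout("600")     // seconds; "NONE" means unlimited
                .defaultTaskHeartbeatTimeout("120")
                .build());
    }
}
```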

" }, "RegisterDomain":{ "name":"RegisterDomain", @@ -561,23 +317,11 @@ }, "input":{"shape":"RegisterDomainInput"}, "errors":[ - { - "shape":"DomainAlreadyExistsFault", - "exception":true, - "documentation":"

Returned if the specified domain already exists. You will get this fault even if the existing domain is in deprecated status.

" - }, - { - "shape":"LimitExceededFault", - "exception":true, - "documentation":"

Returned by any operation if a system imposed limitation has been reached. To address this fault you should either clean up unused resources or increase the limit by contacting AWS.

" - }, - { - "shape":"OperationNotPermittedFault", - "exception":true, - "documentation":"

Returned when the caller does not have sufficient permissions to invoke the action.

" - } + {"shape":"DomainAlreadyExistsFault"}, + {"shape":"LimitExceededFault"}, + {"shape":"OperationNotPermittedFault"} ], - "documentation":"

Registers a new domain.

Access Control

You can use IAM policies to control this action's access to Amazon SWF resources as follows:

If the caller does not have sufficient permissions to invoke the action, or the parameter values fall outside the specified constraints, the action fails. The associated event attribute's cause parameter will be set to OPERATION_NOT_PERMITTED. For details and example IAM policies, see Using IAM to Manage Access to Amazon SWF Workflows.

" + "documentation":"

Registers a new domain.

Access Control

You can use IAM policies to control this action's access to Amazon SWF resources as follows:

If the caller doesn't have sufficient permissions to invoke the action, or the parameter values fall outside the specified constraints, the action fails. The associated event attribute's cause parameter is set to OPERATION_NOT_PERMITTED. For details and example IAM policies, see Using IAM to Manage Access to Amazon SWF Workflows in the Amazon SWF Developer Guide.

" }, "RegisterWorkflowType":{ "name":"RegisterWorkflowType", @@ -587,28 +331,12 @@ }, "input":{"shape":"RegisterWorkflowTypeInput"}, "errors":[ - { - "shape":"TypeAlreadyExistsFault", - "exception":true, - "documentation":"

Returned if the type already exists in the specified domain. You will get this fault even if the existing type is in deprecated status. You can specify another version if the intent is to create a new distinct version of the type.

" - }, - { - "shape":"LimitExceededFault", - "exception":true, - "documentation":"

Returned by any operation if a system imposed limitation has been reached. To address this fault you should either clean up unused resources or increase the limit by contacting AWS.

" - }, - { - "shape":"UnknownResourceFault", - "exception":true, - "documentation":"

Returned when the named resource cannot be found with in the scope of this operation (region or domain). This could happen if the named resource was never created or is no longer available for this operation.

" - }, - { - "shape":"OperationNotPermittedFault", - "exception":true, - "documentation":"

Returned when the caller does not have sufficient permissions to invoke the action.

" - } + {"shape":"TypeAlreadyExistsFault"}, + {"shape":"LimitExceededFault"}, + {"shape":"UnknownResourceFault"}, + {"shape":"OperationNotPermittedFault"} ], - "documentation":"

Registers a new workflow type and its configuration settings in the specified domain.

The retention period for the workflow history is set by the RegisterDomain action.

If the type already exists, then a TypeAlreadyExists fault is returned. You cannot change the configuration settings of a workflow type once it is registered and it must be registered as a new version.

Access Control

You can use IAM policies to control this action's access to Amazon SWF resources as follows:

If the caller does not have sufficient permissions to invoke the action, or the parameter values fall outside the specified constraints, the action fails. The associated event attribute's cause parameter will be set to OPERATION_NOT_PERMITTED. For details and example IAM policies, see Using IAM to Manage Access to Amazon SWF Workflows.

" + "documentation":"

Registers a new workflow type and its configuration settings in the specified domain.

The retention period for the workflow history is set by the RegisterDomain action.

If the type already exists, then a TypeAlreadyExists fault is returned. You cannot change the configuration settings of a workflow type once it is registered, and it must be registered as a new version.

Access Control

You can use IAM policies to control this action's access to Amazon SWF resources as follows:

If the caller doesn't have sufficient permissions to invoke the action, or the parameter values fall outside the specified constraints, the action fails. The associated event attribute's cause parameter is set to OPERATION_NOT_PERMITTED. For details and example IAM policies, see Using IAM to Manage Access to Amazon SWF Workflows in the Amazon SWF Developer Guide.
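
The split of responsibilities noted above, history retention on the domain and per-version configuration on the workflow type, looks like this in practice. A sketch assuming the AWS SDK for Java v2 SwfClient; all names, the retention period, and the timeouts are placeholders.

```java
import software.amazon.awssdk.services.swf.SwfClient;
import software.amazon.awssdk.services.swf.model.ChildPolicy;
import software.amazon.awssdk.services.swf.model.RegisterDomainRequest;
import software.amazon.awssdk.services.swf.model.RegisterWorkflowTypeRequest;
import software.amazon.awssdk.services.swf.model.TaskList;

public class RegisterWorkflowTypeExample {
    public static void main(String[] args) {
        SwfClient swf = SwfClient.create();

        // The history retention period is a property of the domain, not the workflow type.
        swf.registerDomain(RegisterDomainRequest.builder()
                .name("my-domain")
                .workflowExecutionRetentionPeriodInDays("30")
                .build());

        // Configuration settings are frozen per version; new settings need a new version.
        swf.registerWorkflowType(RegisterWorkflowTypeRequest.builder()
                .domain("my-domain")
                .name("OrderWorkflow")
                .version("1.0")
                .defaultTaskList(TaskList.builder().name("decider-task-list").build())
                .defaultExecutionStartToCloseTimeout("86400")   // seconds
                .defaultTaskStartToCloseTimeout("60")
                .defaultChildPolicy(ChildPolicy.TERMINATE)
                .build());
    }
}
```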

" }, "RequestCancelWorkflowExecution":{ "name":"RequestCancelWorkflowExecution", @@ -618,18 +346,10 @@ }, "input":{"shape":"RequestCancelWorkflowExecutionInput"}, "errors":[ - { - "shape":"UnknownResourceFault", - "exception":true, - "documentation":"

Returned when the named resource cannot be found with in the scope of this operation (region or domain). This could happen if the named resource was never created or is no longer available for this operation.

" - }, - { - "shape":"OperationNotPermittedFault", - "exception":true, - "documentation":"

Returned when the caller does not have sufficient permissions to invoke the action.

" - } + {"shape":"UnknownResourceFault"}, + {"shape":"OperationNotPermittedFault"} ], - "documentation":"

Records a WorkflowExecutionCancelRequested event in the currently running workflow execution identified by the given domain, workflowId, and runId. This logically requests the cancellation of the workflow execution as a whole. It is up to the decider to take appropriate actions when it receives an execution history with this event.

If the runId is not specified, the WorkflowExecutionCancelRequested event is recorded in the history of the current open workflow execution with the specified workflowId in the domain. Because this action allows the workflow to properly clean up and gracefully close, it should be used instead of TerminateWorkflowExecution when possible.

Access Control

You can use IAM policies to control this action's access to Amazon SWF resources as follows:

If the caller does not have sufficient permissions to invoke the action, or the parameter values fall outside the specified constraints, the action fails. The associated event attribute's cause parameter will be set to OPERATION_NOT_PERMITTED. For details and example IAM policies, see Using IAM to Manage Access to Amazon SWF Workflows.

" + "documentation":"

Records a WorkflowExecutionCancelRequested event in the currently running workflow execution identified by the given domain, workflowId, and runId. This logically requests the cancellation of the workflow execution as a whole. It is up to the decider to take appropriate actions when it receives an execution history with this event.

If the runId isn't specified, the WorkflowExecutionCancelRequested event is recorded in the history of the current open workflow execution with the specified workflowId in the domain.

Because this action allows the workflow to properly clean up and gracefully close, it should be used instead of TerminateWorkflowExecution when possible.

Access Control

You can use IAM policies to control this action's access to Amazon SWF resources as follows:

If the caller doesn't have sufficient permissions to invoke the action, or the parameter values fall outside the specified constraints, the action fails. The associated event attribute's cause parameter is set to OPERATION_NOT_PERMITTED. For details and example IAM policies, see Using IAM to Manage Access to Amazon SWF Workflows in the Amazon SWF Developer Guide.
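
A sketch of requesting a graceful cancellation, assuming the AWS SDK for Java v2 SwfClient; the domain and workflowId are placeholders, and runId is omitted so the request targets the current open execution as described above.

```java
import software.amazon.awssdk.services.swf.SwfClient;
import software.amazon.awssdk.services.swf.model.RequestCancelWorkflowExecutionRequest;

public class RequestCancelExample {
    public static void main(String[] args) {
        SwfClient swf = SwfClient.create();

        // Records WorkflowExecutionCancelRequested; the decider decides how to wind down.
        swf.requestCancelWorkflowExecution(RequestCancelWorkflowExecutionRequest.builder()
                .domain("my-domain")          // placeholder domain
                .workflowId("order-12345")    // placeholder workflowId; no runId targets the open execution
                .build());
    }
}
```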

" }, "RespondActivityTaskCanceled":{ "name":"RespondActivityTaskCanceled", @@ -639,18 +359,10 @@ }, "input":{"shape":"RespondActivityTaskCanceledInput"}, "errors":[ - { - "shape":"UnknownResourceFault", - "exception":true, - "documentation":"

Returned when the named resource cannot be found with in the scope of this operation (region or domain). This could happen if the named resource was never created or is no longer available for this operation.

" - }, - { - "shape":"OperationNotPermittedFault", - "exception":true, - "documentation":"

Returned when the caller does not have sufficient permissions to invoke the action.

" - } + {"shape":"UnknownResourceFault"}, + {"shape":"OperationNotPermittedFault"} ], - "documentation":"

Used by workers to tell the service that the ActivityTask identified by the taskToken was successfully canceled. Additional details can be optionally provided using the details argument.

These details (if provided) appear in the ActivityTaskCanceled event added to the workflow history.

Only use this operation if the canceled flag of a RecordActivityTaskHeartbeat request returns true and if the activity can be safely undone or abandoned.

A task is considered open from the time that it is scheduled until it is closed. Therefore a task is reported as open while a worker is processing it. A task is closed after it has been specified in a call to RespondActivityTaskCompleted, RespondActivityTaskCanceled, RespondActivityTaskFailed, or the task has timed out.

Access Control

You can use IAM policies to control this action's access to Amazon SWF resources as follows:

If the caller does not have sufficient permissions to invoke the action, or the parameter values fall outside the specified constraints, the action fails. The associated event attribute's cause parameter will be set to OPERATION_NOT_PERMITTED. For details and example IAM policies, see Using IAM to Manage Access to Amazon SWF Workflows.

" + "documentation":"

Used by workers to tell the service that the ActivityTask identified by the taskToken was successfully canceled. Additional details can be provided using the details argument.

These details (if provided) appear in the ActivityTaskCanceled event added to the workflow history.

Only use this operation if the canceled flag of a RecordActivityTaskHeartbeat request returns true and if the activity can be safely undone or abandoned.

A task is considered open from the time that it is scheduled until it is closed. Therefore a task is reported as open while a worker is processing it. A task is closed after it has been specified in a call to RespondActivityTaskCompleted, RespondActivityTaskCanceled, RespondActivityTaskFailed, or the task has timed out.

Access Control

You can use IAM policies to control this action's access to Amazon SWF resources as follows:

If the caller doesn't have sufficient permissions to invoke the action, or the parameter values fall outside the specified constraints, the action fails. The associated event attribute's cause parameter is set to OPERATION_NOT_PERMITTED. For details and example IAM policies, see Using IAM to Manage Access to Amazon SWF Workflows in the Amazon SWF Developer Guide.

" }, "RespondActivityTaskCompleted":{ "name":"RespondActivityTaskCompleted", @@ -660,18 +372,10 @@ }, "input":{"shape":"RespondActivityTaskCompletedInput"}, "errors":[ - { - "shape":"UnknownResourceFault", - "exception":true, - "documentation":"

Returned when the named resource cannot be found with in the scope of this operation (region or domain). This could happen if the named resource was never created or is no longer available for this operation.

" - }, - { - "shape":"OperationNotPermittedFault", - "exception":true, - "documentation":"

Returned when the caller does not have sufficient permissions to invoke the action.

" - } + {"shape":"UnknownResourceFault"}, + {"shape":"OperationNotPermittedFault"} ], - "documentation":"

Used by workers to tell the service that the ActivityTask identified by the taskToken completed successfully with a result (if provided). The result appears in the ActivityTaskCompleted event in the workflow history.

If the requested task does not complete successfully, use RespondActivityTaskFailed instead. If the worker finds that the task is canceled through the canceled flag returned by RecordActivityTaskHeartbeat, it should cancel the task, clean up and then call RespondActivityTaskCanceled.

A task is considered open from the time that it is scheduled until it is closed. Therefore a task is reported as open while a worker is processing it. A task is closed after it has been specified in a call to RespondActivityTaskCompleted, RespondActivityTaskCanceled, RespondActivityTaskFailed, or the task has timed out.

Access Control

You can use IAM policies to control this action's access to Amazon SWF resources as follows:

If the caller does not have sufficient permissions to invoke the action, or the parameter values fall outside the specified constraints, the action fails. The associated event attribute's cause parameter will be set to OPERATION_NOT_PERMITTED. For details and example IAM policies, see Using IAM to Manage Access to Amazon SWF Workflows.

" + "documentation":"

Used by workers to tell the service that the ActivityTask identified by the taskToken completed successfully with a result (if provided). The result appears in the ActivityTaskCompleted event in the workflow history.

If the requested task doesn't complete successfully, use RespondActivityTaskFailed instead. If the worker finds that the task is canceled through the canceled flag returned by RecordActivityTaskHeartbeat, it should cancel the task, clean up and then call RespondActivityTaskCanceled.

A task is considered open from the time that it is scheduled until it is closed. Therefore a task is reported as open while a worker is processing it. A task is closed after it has been specified in a call to RespondActivityTaskCompleted, RespondActivityTaskCanceled, RespondActivityTaskFailed, or the task has timed out.

Access Control

You can use IAM policies to control this action's access to Amazon SWF resources as follows:

If the caller doesn't have sufficient permissions to invoke the action, or the parameter values fall outside the specified constraints, the action fails. The associated event attribute's cause parameter is set to OPERATION_NOT_PERMITTED. For details and example IAM policies, see Using IAM to Manage Access to Amazon SWF Workflows in the Amazon SWF Developer Guide.
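
The complete-or-fail reporting described here (and in RespondActivityTaskFailed below) usually wraps the activity's work. A sketch assuming the AWS SDK for Java v2 SwfClient; doWork is a hypothetical stand-in for the real activity logic, and the taskToken comes from PollForActivityTask.

```java
import software.amazon.awssdk.services.swf.SwfClient;
import software.amazon.awssdk.services.swf.model.RespondActivityTaskCompletedRequest;
import software.amazon.awssdk.services.swf.model.RespondActivityTaskFailedRequest;

public class ActivityResponderExample {
    /** Runs the activity and reports the outcome to Amazon SWF. */
    static void runAndRespond(SwfClient swf, String taskToken, String input) {
        try {
            String result = doWork(input);   // hypothetical activity logic
            swf.respondActivityTaskCompleted(RespondActivityTaskCompletedRequest.builder()
                    .taskToken(taskToken)
                    .result(result)          // appears in the ActivityTaskCompleted history event
                    .build());
        } catch (Exception e) {
            swf.respondActivityTaskFailed(RespondActivityTaskFailedRequest.builder()
                    .taskToken(taskToken)
                    .reason(e.getClass().getSimpleName())   // appears in ActivityTaskFailed
                    .details(String.valueOf(e.getMessage()))
                    .build());
        }
    }

    private static String doWork(String input) {
        return "processed:" + input;         // stand-in for real work
    }
}
```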

" }, "RespondActivityTaskFailed":{ "name":"RespondActivityTaskFailed", @@ -681,18 +385,10 @@ }, "input":{"shape":"RespondActivityTaskFailedInput"}, "errors":[ - { - "shape":"UnknownResourceFault", - "exception":true, - "documentation":"

Returned when the named resource cannot be found with in the scope of this operation (region or domain). This could happen if the named resource was never created or is no longer available for this operation.

" - }, - { - "shape":"OperationNotPermittedFault", - "exception":true, - "documentation":"

Returned when the caller does not have sufficient permissions to invoke the action.

" - } + {"shape":"UnknownResourceFault"}, + {"shape":"OperationNotPermittedFault"} ], - "documentation":"

Used by workers to tell the service that the ActivityTask identified by the taskToken has failed with reason (if specified). The reason and details appear in the ActivityTaskFailed event added to the workflow history.

A task is considered open from the time that it is scheduled until it is closed. Therefore a task is reported as open while a worker is processing it. A task is closed after it has been specified in a call to RespondActivityTaskCompleted, RespondActivityTaskCanceled, RespondActivityTaskFailed, or the task has timed out.

Access Control

You can use IAM policies to control this action's access to Amazon SWF resources as follows:

If the caller does not have sufficient permissions to invoke the action, or the parameter values fall outside the specified constraints, the action fails. The associated event attribute's cause parameter will be set to OPERATION_NOT_PERMITTED. For details and example IAM policies, see Using IAM to Manage Access to Amazon SWF Workflows.

" + "documentation":"

Used by workers to tell the service that the ActivityTask identified by the taskToken has failed with reason (if specified). The reason and details appear in the ActivityTaskFailed event added to the workflow history.

A task is considered open from the time that it is scheduled until it is closed. Therefore a task is reported as open while a worker is processing it. A task is closed after it has been specified in a call to RespondActivityTaskCompleted, RespondActivityTaskCanceled, RespondActivityTaskFailed, or the task has timed out.

Access Control

You can use IAM policies to control this action's access to Amazon SWF resources as follows:

If the caller doesn't have sufficient permissions to invoke the action, or the parameter values fall outside the specified constraints, the action fails. The associated event attribute's cause parameter is set to OPERATION_NOT_PERMITTED. For details and example IAM policies, see Using IAM to Manage Access to Amazon SWF Workflows in the Amazon SWF Developer Guide.

" }, "RespondDecisionTaskCompleted":{ "name":"RespondDecisionTaskCompleted", @@ -702,18 +398,10 @@ }, "input":{"shape":"RespondDecisionTaskCompletedInput"}, "errors":[ - { - "shape":"UnknownResourceFault", - "exception":true, - "documentation":"

Returned when the named resource cannot be found with in the scope of this operation (region or domain). This could happen if the named resource was never created or is no longer available for this operation.

" - }, - { - "shape":"OperationNotPermittedFault", - "exception":true, - "documentation":"

Returned when the caller does not have sufficient permissions to invoke the action.

" - } + {"shape":"UnknownResourceFault"}, + {"shape":"OperationNotPermittedFault"} ], - "documentation":"

Used by deciders to tell the service that the DecisionTask identified by the taskToken has successfully completed. The decisions argument specifies the list of decisions made while processing the task.

A DecisionTaskCompleted event is added to the workflow history. The executionContext specified is attached to the event in the workflow execution history.

Access Control

If an IAM policy grants permission to use RespondDecisionTaskCompleted, it can express permissions for the list of decisions in the decisions parameter. Each of the decisions has one or more parameters, much like a regular API call. To allow for policies to be as readable as possible, you can express permissions on decisions as if they were actual API calls, including applying conditions to some parameters. For more information, see Using IAM to Manage Access to Amazon SWF Workflows.

" + "documentation":"

Used by deciders to tell the service that the DecisionTask identified by the taskToken has successfully completed. The decisions argument specifies the list of decisions made while processing the task.

A DecisionTaskCompleted event is added to the workflow history. The executionContext specified is attached to the event in the workflow execution history.

Access Control

If an IAM policy grants permission to use RespondDecisionTaskCompleted, it can express permissions for the list of decisions in the decisions parameter. Each of the decisions has one or more parameters, much like a regular API call. To allow for policies to be as readable as possible, you can express permissions on decisions as if they were actual API calls, including applying conditions to some parameters. For more information, see Using IAM to Manage Access to Amazon SWF Workflows in the Amazon SWF Developer Guide.
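
The decisions parameter referred to above is a list of Decision structures, which is also the granularity at which IAM policies can constrain this call. A sketch assuming the AWS SDK for Java v2 SwfClient; the activity type, activity ID, and input are placeholders.

```java
import software.amazon.awssdk.services.swf.SwfClient;
import software.amazon.awssdk.services.swf.model.ActivityType;
import software.amazon.awssdk.services.swf.model.Decision;
import software.amazon.awssdk.services.swf.model.DecisionType;
import software.amazon.awssdk.services.swf.model.RespondDecisionTaskCompletedRequest;
import software.amazon.awssdk.services.swf.model.ScheduleActivityTaskDecisionAttributes;

public class DeciderResponseExample {
    /** Completes the decision task by scheduling one activity; taskToken comes from PollForDecisionTask. */
    static void scheduleActivity(SwfClient swf, String taskToken) {
        Decision scheduleActivity = Decision.builder()
                .decisionType(DecisionType.SCHEDULE_ACTIVITY_TASK)
                .scheduleActivityTaskDecisionAttributes(ScheduleActivityTaskDecisionAttributes.builder()
                        .activityType(ActivityType.builder().name("ProcessOrder").version("1.1").build())
                        .activityId("process-order-1")
                        .input("{\"orderId\":12345}")
                        .build())
                .build();

        swf.respondDecisionTaskCompleted(RespondDecisionTaskCompletedRequest.builder()
                .taskToken(taskToken)
                .decisions(scheduleActivity)                 // adds a DecisionTaskCompleted event
                .executionContext("scheduled initial work")  // attached to the history event
                .build());
    }
}
```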

" }, "SignalWorkflowExecution":{ "name":"SignalWorkflowExecution", @@ -723,18 +411,10 @@ }, "input":{"shape":"SignalWorkflowExecutionInput"}, "errors":[ - { - "shape":"UnknownResourceFault", - "exception":true, - "documentation":"

Returned when the named resource cannot be found with in the scope of this operation (region or domain). This could happen if the named resource was never created or is no longer available for this operation.

" - }, - { - "shape":"OperationNotPermittedFault", - "exception":true, - "documentation":"

Returned when the caller does not have sufficient permissions to invoke the action.

" - } + {"shape":"UnknownResourceFault"}, + {"shape":"OperationNotPermittedFault"} ], - "documentation":"

Records a WorkflowExecutionSignaled event in the workflow execution history and creates a decision task for the workflow execution identified by the given domain, workflowId and runId. The event is recorded with the specified user defined signalName and input (if provided).

If a runId is not specified, then the WorkflowExecutionSignaled event is recorded in the history of the current open workflow with the matching workflowId in the domain. If the specified workflow execution is not open, this method fails with UnknownResource.

Access Control

You can use IAM policies to control this action's access to Amazon SWF resources as follows:

If the caller does not have sufficient permissions to invoke the action, or the parameter values fall outside the specified constraints, the action fails. The associated event attribute's cause parameter will be set to OPERATION_NOT_PERMITTED. For details and example IAM policies, see Using IAM to Manage Access to Amazon SWF Workflows.

" + "documentation":"

Records a WorkflowExecutionSignaled event in the workflow execution history and creates a decision task for the workflow execution identified by the given domain, workflowId, and runId. The event is recorded with the specified user-defined signalName and input (if provided).

If a runId isn't specified, then the WorkflowExecutionSignaled event is recorded in the history of the current open workflow with the matching workflowId in the domain.

If the specified workflow execution isn't open, this method fails with UnknownResource.

Access Control

You can use IAM policies to control this action's access to Amazon SWF resources as follows:

If the caller doesn't have sufficient permissions to invoke the action, or the parameter values fall outside the specified constraints, the action fails. The associated event attribute's cause parameter is set to OPERATION_NOT_PERMITTED. For details and example IAM policies, see Using IAM to Manage Access to Amazon SWF Workflows in the Amazon SWF Developer Guide.
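
A sketch of sending a signal, assuming the AWS SDK for Java v2 SwfClient; the domain, workflowId, signal name, and payload are placeholders, and runId is omitted so the current open execution with that workflowId is signaled.

```java
import software.amazon.awssdk.services.swf.SwfClient;
import software.amazon.awssdk.services.swf.model.SignalWorkflowExecutionRequest;

public class SignalExample {
    public static void main(String[] args) {
        SwfClient swf = SwfClient.create();

        // Records a WorkflowExecutionSignaled event and creates a decision task.
        swf.signalWorkflowExecution(SignalWorkflowExecutionRequest.builder()
                .domain("my-domain")
                .workflowId("order-12345")
                .signalName("PaymentReceived")      // user-defined signal name
                .input("{\"amount\":42.00}")        // optional payload recorded with the event
                .build());
    }
}
```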

" }, "StartWorkflowExecution":{ "name":"StartWorkflowExecution", @@ -743,42 +423,16 @@ "requestUri":"/" }, "input":{"shape":"StartWorkflowExecutionInput"}, - "output":{ - "shape":"Run", - "documentation":"

Specifies the runId of a workflow execution.

" - }, + "output":{"shape":"Run"}, "errors":[ - { - "shape":"UnknownResourceFault", - "exception":true, - "documentation":"

Returned when the named resource cannot be found with in the scope of this operation (region or domain). This could happen if the named resource was never created or is no longer available for this operation.

" - }, - { - "shape":"TypeDeprecatedFault", - "exception":true, - "documentation":"

Returned when the specified activity or workflow type was already deprecated.

" - }, - { - "shape":"WorkflowExecutionAlreadyStartedFault", - "exception":true, - "documentation":"

Returned by StartWorkflowExecution when an open execution with the same workflowId is already running in the specified domain.

" - }, - { - "shape":"LimitExceededFault", - "exception":true, - "documentation":"

Returned by any operation if a system imposed limitation has been reached. To address this fault you should either clean up unused resources or increase the limit by contacting AWS.

" - }, - { - "shape":"OperationNotPermittedFault", - "exception":true, - "documentation":"

Returned when the caller does not have sufficient permissions to invoke the action.

" - }, - { - "shape":"DefaultUndefinedFault", - "exception":true - } + {"shape":"UnknownResourceFault"}, + {"shape":"TypeDeprecatedFault"}, + {"shape":"WorkflowExecutionAlreadyStartedFault"}, + {"shape":"LimitExceededFault"}, + {"shape":"OperationNotPermittedFault"}, + {"shape":"DefaultUndefinedFault"} ], - "documentation":"

Starts an execution of the workflow type in the specified domain using the provided workflowId and input data.

This action returns the newly started workflow execution.

Access Control

You can use IAM policies to control this action's access to Amazon SWF resources as follows:

If the caller does not have sufficient permissions to invoke the action, or the parameter values fall outside the specified constraints, the action fails. The associated event attribute's cause parameter will be set to OPERATION_NOT_PERMITTED. For details and example IAM policies, see Using IAM to Manage Access to Amazon SWF Workflows.

" + "documentation":"

Starts an execution of the workflow type in the specified domain using the provided workflowId and input data.

This action returns the newly started workflow execution.

Access Control

You can use IAM policies to control this action's access to Amazon SWF resources as follows:

If the caller doesn't have sufficient permissions to invoke the action, or the parameter values fall outside the specified constraints, the action fails. The associated event attribute's cause parameter is set to OPERATION_NOT_PERMITTED. For details and example IAM policies, see Using IAM to Manage Access to Amazon SWF Workflows in the Amazon SWF Developer Guide.
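
A sketch of starting an execution and reading the runId from the returned Run structure, assuming the AWS SDK for Java v2 SwfClient; the names, input, and timeout are placeholders.

```java
import software.amazon.awssdk.services.swf.SwfClient;
import software.amazon.awssdk.services.swf.model.StartWorkflowExecutionRequest;
import software.amazon.awssdk.services.swf.model.StartWorkflowExecutionResponse;
import software.amazon.awssdk.services.swf.model.TaskList;
import software.amazon.awssdk.services.swf.model.WorkflowType;

public class StartWorkflowExample {
    public static void main(String[] args) {
        SwfClient swf = SwfClient.create();

        StartWorkflowExecutionResponse run = swf.startWorkflowExecution(StartWorkflowExecutionRequest.builder()
                .domain("my-domain")
                .workflowId("order-12345")              // must not collide with an open execution
                .workflowType(WorkflowType.builder().name("OrderWorkflow").version("1.0").build())
                .taskList(TaskList.builder().name("decider-task-list").build())
                .input("{\"orderId\":12345}")
                .executionStartToCloseTimeout("86400")  // seconds
                .build());

        // The runId, together with the workflowId, identifies the new execution.
        System.out.println("Started execution with runId " + run.runId());
    }
}
```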

" }, "TerminateWorkflowExecution":{ "name":"TerminateWorkflowExecution", @@ -788,25 +442,17 @@ }, "input":{"shape":"TerminateWorkflowExecutionInput"}, "errors":[ - { - "shape":"UnknownResourceFault", - "exception":true, - "documentation":"

Returned when the named resource cannot be found with in the scope of this operation (region or domain). This could happen if the named resource was never created or is no longer available for this operation.

" - }, - { - "shape":"OperationNotPermittedFault", - "exception":true, - "documentation":"

Returned when the caller does not have sufficient permissions to invoke the action.

" - } + {"shape":"UnknownResourceFault"}, + {"shape":"OperationNotPermittedFault"} ], - "documentation":"

Records a WorkflowExecutionTerminated event and forces closure of the workflow execution identified by the given domain, runId, and workflowId. The child policy, registered with the workflow type or specified when starting this execution, is applied to any open child workflow executions of this workflow execution.

If the identified workflow execution was in progress, it is terminated immediately. If a runId is not specified, then the WorkflowExecutionTerminated event is recorded in the history of the current open workflow with the matching workflowId in the domain. You should consider using RequestCancelWorkflowExecution action instead because it allows the workflow to gracefully close while TerminateWorkflowExecution does not.

Access Control

You can use IAM policies to control this action's access to Amazon SWF resources as follows:

If the caller does not have sufficient permissions to invoke the action, or the parameter values fall outside the specified constraints, the action fails. The associated event attribute's cause parameter will be set to OPERATION_NOT_PERMITTED. For details and example IAM policies, see Using IAM to Manage Access to Amazon SWF Workflows.

" + "documentation":"

Records a WorkflowExecutionTerminated event and forces closure of the workflow execution identified by the given domain, runId, and workflowId. The child policy, registered with the workflow type or specified when starting this execution, is applied to any open child workflow executions of this workflow execution.

If the identified workflow execution was in progress, it is terminated immediately.

If a runId isn't specified, then the WorkflowExecutionTerminated event is recorded in the history of the current open workflow with the matching workflowId in the domain.

You should consider using the RequestCancelWorkflowExecution action instead because it allows the workflow to gracefully close, while TerminateWorkflowExecution doesn't.

Access Control

You can use IAM policies to control this action's access to Amazon SWF resources as follows:

If the caller doesn't have sufficient permissions to invoke the action, or the parameter values fall outside the specified constraints, the action fails. The associated event attribute's cause parameter is set to OPERATION_NOT_PERMITTED. For details and example IAM policies, see Using IAM to Manage Access to Amazon SWF Workflows in the Amazon SWF Developer Guide.
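
A sketch of forcing closure, assuming the AWS SDK for Java v2 SwfClient; the domain, workflowId, reason, and child policy are placeholders. As recommended above, RequestCancelWorkflowExecution is usually preferable because it lets the decider close the workflow gracefully.

```java
import software.amazon.awssdk.services.swf.SwfClient;
import software.amazon.awssdk.services.swf.model.ChildPolicy;
import software.amazon.awssdk.services.swf.model.TerminateWorkflowExecutionRequest;

public class TerminateExample {
    public static void main(String[] args) {
        SwfClient swf = SwfClient.create();

        // Forces closure immediately; open child executions receive the child policy below.
        swf.terminateWorkflowExecution(TerminateWorkflowExecutionRequest.builder()
                .domain("my-domain")
                .workflowId("order-12345")
                .reason("Operator requested shutdown")    // recorded in WorkflowExecutionTerminated
                .childPolicy(ChildPolicy.REQUEST_CANCEL)  // overrides the registered default, if set
                .build());
    }
}
```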

" } }, "shapes":{ "ActivityId":{ "type":"string", - "min":1, - "max":256 + "max":256, + "min":1 }, "ActivityTask":{ "type":"structure", @@ -861,7 +507,7 @@ "documentation":"

The unique ID of the task.

" } }, - "documentation":"

Provides details of the ActivityTaskCancelRequested event.

" + "documentation":"

Provides the details of the ActivityTaskCancelRequested event.

" }, "ActivityTaskCanceledEventAttributes":{ "type":"structure", @@ -872,7 +518,7 @@ "members":{ "details":{ "shape":"Data", - "documentation":"

Details of the cancellation (if any).

" + "documentation":"

Details of the cancellation.

" }, "scheduledEventId":{ "shape":"EventId", @@ -887,7 +533,7 @@ "documentation":"

If set, contains the ID of the last ActivityTaskCancelRequested event recorded for this activity task. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.

" } }, - "documentation":"

Provides details of the ActivityTaskCanceled event.

" + "documentation":"

Provides the details of the ActivityTaskCanceled event.

" }, "ActivityTaskCompletedEventAttributes":{ "type":"structure", @@ -898,7 +544,7 @@ "members":{ "result":{ "shape":"Data", - "documentation":"

The results of the activity task (if any).

" + "documentation":"

The results of the activity task.

" }, "scheduledEventId":{ "shape":"EventId", @@ -909,7 +555,7 @@ "documentation":"

The ID of the ActivityTaskStarted event recorded when this activity task was started. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.

" } }, - "documentation":"

Provides details of the ActivityTaskCompleted event.

" + "documentation":"

Provides the details of the ActivityTaskCompleted event.

" }, "ActivityTaskFailedEventAttributes":{ "type":"structure", @@ -920,11 +566,11 @@ "members":{ "reason":{ "shape":"FailureReason", - "documentation":"

The reason provided for the failure (if any).

" + "documentation":"

The reason provided for the failure.

" }, "details":{ "shape":"Data", - "documentation":"

The details of the failure (if any).

" + "documentation":"

The details of the failure.

" }, "scheduledEventId":{ "shape":"EventId", @@ -935,7 +581,7 @@ "documentation":"

The ID of the ActivityTaskStarted event recorded when this activity task was started. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.

" } }, - "documentation":"

Provides details of the ActivityTaskFailed event.

" + "documentation":"

Provides the details of the ActivityTaskFailed event.

" }, "ActivityTaskScheduledEventAttributes":{ "type":"structure", @@ -960,7 +606,7 @@ }, "control":{ "shape":"Data", - "documentation":"

Optional. Data attached to the event that can be used by the decider in subsequent workflow tasks. This data is not sent to the activity.

" + "documentation":"

Data attached to the event that can be used by the decider in subsequent workflow tasks. This data isn't sent to the activity.

" }, "scheduleToStartTimeout":{ "shape":"DurationInSecondsOptional", @@ -980,7 +626,7 @@ }, "taskPriority":{ "shape":"TaskPriority", - "documentation":"

Optional. The priority to assign to the scheduled activity task. If set, this will override any default priority value that was assigned when the activity type was registered.

Valid values are integers that range from Java's Integer.MIN_VALUE (-2147483648) to Integer.MAX_VALUE (2147483647). Higher numbers indicate higher priority.

For more information about setting task priority, see Setting Task Priority in the Amazon Simple Workflow Developer Guide.

" + "documentation":"

The priority to assign to the scheduled activity task. If set, this overrides any default priority value that was assigned when the activity type was registered.

Valid values are integers that range from Java's Integer.MIN_VALUE (-2147483648) to Integer.MAX_VALUE (2147483647). Higher numbers indicate higher priority.

For more information about setting task priority, see Setting Task Priority in the Amazon SWF Developer Guide.

" }, "decisionTaskCompletedEventId":{ "shape":"EventId", @@ -988,10 +634,10 @@ }, "heartbeatTimeout":{ "shape":"DurationInSecondsOptional", - "documentation":"

The maximum time before which the worker processing this task must report progress by calling RecordActivityTaskHeartbeat. If the timeout is exceeded, the activity task is automatically timed out. If the worker subsequently attempts to record a heartbeat or return a result, it will be ignored.

" + "documentation":"

The maximum time before which the worker processing this task must report progress by calling RecordActivityTaskHeartbeat. If the timeout is exceeded, the activity task is automatically timed out. If the worker subsequently attempts to record a heartbeat or return a result, it is ignored.

" } }, - "documentation":"

Provides details of the ActivityTaskScheduled event.

" + "documentation":"

Provides the details of the ActivityTaskScheduled event.

" }, "ActivityTaskStartedEventAttributes":{ "type":"structure", @@ -1006,7 +652,7 @@ "documentation":"

The ID of the ActivityTaskScheduled event that was recorded when this activity task was scheduled. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.

" } }, - "documentation":"

Provides details of the ActivityTaskStarted event.

" + "documentation":"

Provides the details of the ActivityTaskStarted event.

" }, "ActivityTaskStatus":{ "type":"structure", @@ -1044,7 +690,7 @@ "documentation":"

Contains the content of the details parameter for the last call made by the activity to RecordActivityTaskHeartbeat.

" } }, - "documentation":"

Provides details of the ActivityTaskTimedOut event.

" + "documentation":"

Provides the details of the ActivityTaskTimedOut event.

" }, "ActivityTaskTimeoutType":{ "type":"string", @@ -1064,11 +710,11 @@ "members":{ "name":{ "shape":"Name", - "documentation":"

The name of this activity.

The combination of activity type name and version must be unique within a domain." + "documentation":"

The name of this activity.

The combination of activity type name and version must be unique within a domain.

" }, "version":{ "shape":"Version", - "documentation":"

The version of this activity.

The combination of activity type name and version must be unique with in a domain." + "documentation":"

The version of this activity.

The combination of activity type name and version must be unique within a domain.

" } }, "documentation":"

Represents an activity type.

" @@ -1078,27 +724,27 @@ "members":{ "defaultTaskStartToCloseTimeout":{ "shape":"DurationInSecondsOptional", - "documentation":"

Optional. The default maximum duration for tasks of an activity type specified when registering the activity type. You can override this default when scheduling a task through the ScheduleActivityTask decision.

The duration is specified in seconds; an integer greater than or equal to 0. The value \"NONE\" can be used to specify unlimited duration.

" + "documentation":"

The default maximum duration for tasks of an activity type specified when registering the activity type. You can override this default when scheduling a task through the ScheduleActivityTask Decision.

The duration is specified in seconds, an integer greater than or equal to 0. You can use NONE to specify unlimited duration.

" }, "defaultTaskHeartbeatTimeout":{ "shape":"DurationInSecondsOptional", - "documentation":"

Optional. The default maximum time, in seconds, before which a worker processing a task must report progress by calling RecordActivityTaskHeartbeat.

You can specify this value only when registering an activity type. The registered default value can be overridden when you schedule a task through the ScheduleActivityTask decision. If the activity worker subsequently attempts to record a heartbeat or returns a result, the activity worker receives an UnknownResource fault. In this case, Amazon SWF no longer considers the activity task to be valid; the activity worker should clean up the activity task.

The duration is specified in seconds; an integer greater than or equal to 0. The value \"NONE\" can be used to specify unlimited duration.

" + "documentation":"

The default maximum time, in seconds, before which a worker processing a task must report progress by calling RecordActivityTaskHeartbeat.

You can specify this value only when registering an activity type. The registered default value can be overridden when you schedule a task through the ScheduleActivityTask Decision. If the activity worker subsequently attempts to record a heartbeat or returns a result, the activity worker receives an UnknownResource fault. In this case, Amazon SWF no longer considers the activity task to be valid; the activity worker should clean up the activity task.

The duration is specified in seconds, an integer greater than or equal to 0. You can use NONE to specify unlimited duration.

" }, "defaultTaskList":{ "shape":"TaskList", - "documentation":"

Optional. The default task list specified for this activity type at registration. This default is used if a task list is not provided when a task is scheduled through the ScheduleActivityTask decision. You can override the default registered task list when scheduling a task through the ScheduleActivityTask decision.

" + "documentation":"

The default task list specified for this activity type at registration. This default is used if a task list isn't provided when a task is scheduled through the ScheduleActivityTask Decision. You can override the default registered task list when scheduling a task through the ScheduleActivityTask Decision.

" }, "defaultTaskPriority":{ "shape":"TaskPriority", - "documentation":"

Optional. The default task priority for tasks of this activity type, specified at registration. If not set, then \"0\" will be used as the default priority. This default can be overridden when scheduling an activity task.

Valid values are integers that range from Java's Integer.MIN_VALUE (-2147483648) to Integer.MAX_VALUE (2147483647). Higher numbers indicate higher priority.

For more information about setting task priority, see Setting Task Priority in the Amazon Simple Workflow Developer Guide.

" + "documentation":"

The default task priority for tasks of this activity type, specified at registration. If not set, then 0 is used as the default priority. This default can be overridden when scheduling an activity task.

Valid values are integers that range from Java's Integer.MIN_VALUE (-2147483648) to Integer.MAX_VALUE (2147483647). Higher numbers indicate higher priority.

For more information about setting task priority, see Setting Task Priority in the Amazon SWF Developer Guide.

" }, "defaultTaskScheduleToStartTimeout":{ "shape":"DurationInSecondsOptional", - "documentation":"

Optional. The default maximum duration, specified when registering the activity type, that a task of an activity type can wait before being assigned to a worker. You can override this default when scheduling a task through the ScheduleActivityTask decision.

The duration is specified in seconds; an integer greater than or equal to 0. The value \"NONE\" can be used to specify unlimited duration.

" + "documentation":"

The default maximum duration, specified when registering the activity type, that a task of an activity type can wait before being assigned to a worker. You can override this default when scheduling a task through the ScheduleActivityTask Decision.

The duration is specified in seconds, an integer greater than or equal to 0. You can use NONE to specify unlimited duration.

" }, "defaultTaskScheduleToCloseTimeout":{ "shape":"DurationInSecondsOptional", - "documentation":"

Optional. The default maximum duration, specified when registering the activity type, for tasks of this activity type. You can override this default when scheduling a task through the ScheduleActivityTask decision.

The duration is specified in seconds; an integer greater than or equal to 0. The value \"NONE\" can be used to specify unlimited duration.

" + "documentation":"

The default maximum duration, specified when registering the activity type, for tasks of this activity type. You can override this default when scheduling a task through the ScheduleActivityTask Decision.

The duration is specified in seconds, an integer greater than or equal to 0. You can use NONE to specify unlimited duration.

" } }, "documentation":"

Configuration settings registered with the activity type.

" @@ -1112,7 +758,7 @@ "members":{ "typeInfo":{ "shape":"ActivityTypeInfo", - "documentation":"

General information about the activity type.

The status of activity type (returned in the ActivityTypeInfo structure) can be one of the following.

" + "documentation":"

General information about the activity type.

The status of the activity type (returned in the ActivityTypeInfo structure) can be one of the following.

" }, "configuration":{ "shape":"ActivityTypeConfiguration", @@ -1173,8 +819,8 @@ }, "Arn":{ "type":"string", - "min":1, - "max":1224 + "max":1600, + "min":1 }, "CancelTimerDecisionAttributes":{ "type":"structure", @@ -1182,10 +828,10 @@ "members":{ "timerId":{ "shape":"TimerId", - "documentation":"

Required. The unique ID of the timer to cancel.

" + "documentation":"

The unique ID of the timer to cancel.

" } }, - "documentation":"

Provides details of the CancelTimer decision.

Access Control

You can use IAM policies to control this decision's access to Amazon SWF resources as follows:

If the caller does not have sufficient permissions to invoke the action, or the parameter values fall outside the specified constraints, the action fails. The associated event attribute's cause parameter will be set to OPERATION_NOT_PERMITTED. For details and example IAM policies, see Using IAM to Manage Access to Amazon SWF Workflows.

" + "documentation":"

Provides the details of the CancelTimer decision.

Access Control

You can use IAM policies to control this decision's access to Amazon SWF resources as follows:

If the caller doesn't have sufficient permissions to invoke the action, or the parameter values fall outside the specified constraints, the action fails. The associated event attribute's cause parameter is set to OPERATION_NOT_PERMITTED. For details and example IAM policies, see Using IAM to Manage Access to Amazon SWF Workflows in the Amazon SWF Developer Guide.

" }, "CancelTimerFailedCause":{ "type":"string", @@ -1208,24 +854,24 @@ }, "cause":{ "shape":"CancelTimerFailedCause", - "documentation":"

The cause of the failure. This information is generated by the system and can be useful for diagnostic purposes.

If cause is set to OPERATION_NOT_PERMITTED, the decision failed because it lacked sufficient permissions. For details and example IAM policies, see Using IAM to Manage Access to Amazon SWF Workflows." + "documentation":"

The cause of the failure. This information is generated by the system and can be useful for diagnostic purposes.

If cause is set to OPERATION_NOT_PERMITTED, the decision failed because it lacked sufficient permissions. For details and example IAM policies, see Using IAM to Manage Access to Amazon SWF Workflows in the Amazon SWF Developer Guide.

" }, "decisionTaskCompletedEventId":{ "shape":"EventId", "documentation":"

The ID of the DecisionTaskCompleted event corresponding to the decision task that resulted in the CancelTimer decision to cancel this timer. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.

" } }, - "documentation":"

Provides details of the CancelTimerFailed event.

" + "documentation":"

Provides the details of the CancelTimerFailed event.

" }, "CancelWorkflowExecutionDecisionAttributes":{ "type":"structure", "members":{ "details":{ "shape":"Data", - "documentation":"

Optional. details of the cancellation.

" + "documentation":"

Details of the cancellation.

" } }, - "documentation":"

Provides details of the CancelWorkflowExecution decision.

Access Control

You can use IAM policies to control this decision's access to Amazon SWF resources as follows:

If the caller does not have sufficient permissions to invoke the action, or the parameter values fall outside the specified constraints, the action fails. The associated event attribute's cause parameter will be set to OPERATION_NOT_PERMITTED. For details and example IAM policies, see Using IAM to Manage Access to Amazon SWF Workflows.

" + "documentation":"

Provides the details of the CancelWorkflowExecution decision.

Access Control

You can use IAM policies to control this decision's access to Amazon SWF resources as follows:

If the caller doesn't have sufficient permissions to invoke the action, or the parameter values fall outside the specified constraints, the action fails. The associated event attribute's cause parameter is set to OPERATION_NOT_PERMITTED. For details and example IAM policies, see Using IAM to Manage Access to Amazon SWF Workflows in the Amazon SWF Developer Guide.

" }, "CancelWorkflowExecutionFailedCause":{ "type":"string", @@ -1243,14 +889,14 @@ "members":{ "cause":{ "shape":"CancelWorkflowExecutionFailedCause", - "documentation":"

The cause of the failure. This information is generated by the system and can be useful for diagnostic purposes.

If cause is set to OPERATION_NOT_PERMITTED, the decision failed because it lacked sufficient permissions. For details and example IAM policies, see Using IAM to Manage Access to Amazon SWF Workflows." + "documentation":"

The cause of the failure. This information is generated by the system and can be useful for diagnostic purposes.

If cause is set to OPERATION_NOT_PERMITTED, the decision failed because it lacked sufficient permissions. For details and example IAM policies, see Using IAM to Manage Access to Amazon SWF Workflows in the Amazon SWF Developer Guide.

" }, "decisionTaskCompletedEventId":{ "shape":"EventId", "documentation":"

The ID of the DecisionTaskCompleted event corresponding to the decision task that resulted in the CancelWorkflowExecution decision for this cancellation request. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.

" } }, - "documentation":"

Provides details of the CancelWorkflowExecutionFailed event.

" + "documentation":"

Provides the details of the CancelWorkflowExecutionFailed event.

" }, "Canceled":{"type":"boolean"}, "CauseMessage":{ @@ -1288,7 +934,7 @@ }, "initiatedEventId":{ "shape":"EventId", - "documentation":"

The ID of the StartChildWorkflowExecutionInitiated event corresponding to the StartChildWorkflowExecution decision to start this child workflow execution. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.

" + "documentation":"

The ID of the StartChildWorkflowExecutionInitiated event corresponding to the StartChildWorkflowExecution Decision to start this child workflow execution. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.

" }, "startedEventId":{ "shape":"EventId", @@ -1316,18 +962,18 @@ }, "result":{ "shape":"Data", - "documentation":"

The result of the child workflow execution (if any).

" + "documentation":"

The result of the child workflow execution.

" }, "initiatedEventId":{ "shape":"EventId", - "documentation":"

The ID of the StartChildWorkflowExecutionInitiated event corresponding to the StartChildWorkflowExecution decision to start this child workflow execution. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.

" + "documentation":"

The ID of the StartChildWorkflowExecutionInitiated event corresponding to the StartChildWorkflowExecution Decision to start this child workflow execution. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.

" }, "startedEventId":{ "shape":"EventId", "documentation":"

The ID of the ChildWorkflowExecutionStarted event recorded when this child workflow execution was started. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.

" } }, - "documentation":"

Provides details of the ChildWorkflowExecutionCompleted event.

" + "documentation":"

Provides the details of the ChildWorkflowExecutionCompleted event.

" }, "ChildWorkflowExecutionFailedEventAttributes":{ "type":"structure", @@ -1356,14 +1002,14 @@ }, "initiatedEventId":{ "shape":"EventId", - "documentation":"

The ID of the StartChildWorkflowExecutionInitiated event corresponding to the StartChildWorkflowExecution decision to start this child workflow execution. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.

" + "documentation":"

The ID of the StartChildWorkflowExecutionInitiated event corresponding to the StartChildWorkflowExecution Decision to start this child workflow execution. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.

" }, "startedEventId":{ "shape":"EventId", "documentation":"

The ID of the ChildWorkflowExecutionStarted event recorded when this child workflow execution was started. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.

" } }, - "documentation":"

Provides details of the ChildWorkflowExecutionFailed event.

" + "documentation":"

Provides the details of the ChildWorkflowExecutionFailed event.

" }, "ChildWorkflowExecutionStartedEventAttributes":{ "type":"structure", @@ -1379,14 +1025,14 @@ }, "workflowType":{ "shape":"WorkflowType", - "documentation":"

The type of the child workflow execution.

" + "documentation":"

The type of the child workflow execution.

" }, "initiatedEventId":{ "shape":"EventId", - "documentation":"

The ID of the StartChildWorkflowExecutionInitiated event corresponding to the StartChildWorkflowExecution decision to start this child workflow execution. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.

" + "documentation":"

The ID of the StartChildWorkflowExecutionInitiated event corresponding to the StartChildWorkflowExecution Decision to start this child workflow execution. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.

" } }, - "documentation":"

Provides details of the ChildWorkflowExecutionStarted event.

" + "documentation":"

Provides the details of the ChildWorkflowExecutionStarted event.

" }, "ChildWorkflowExecutionTerminatedEventAttributes":{ "type":"structure", @@ -1407,14 +1053,14 @@ }, "initiatedEventId":{ "shape":"EventId", - "documentation":"

The ID of the StartChildWorkflowExecutionInitiated event corresponding to the StartChildWorkflowExecution decision to start this child workflow execution. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.

" + "documentation":"

The ID of the StartChildWorkflowExecutionInitiated event corresponding to the StartChildWorkflowExecution Decision to start this child workflow execution. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.

" }, "startedEventId":{ "shape":"EventId", "documentation":"

The ID of the ChildWorkflowExecutionStarted event recorded when this child workflow execution was started. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.

" } }, - "documentation":"

Provides details of the ChildWorkflowExecutionTerminated event.

" + "documentation":"

Provides the details of the ChildWorkflowExecutionTerminated event.

" }, "ChildWorkflowExecutionTimedOutEventAttributes":{ "type":"structure", @@ -1440,14 +1086,14 @@ }, "initiatedEventId":{ "shape":"EventId", - "documentation":"

The ID of the StartChildWorkflowExecutionInitiated event corresponding to the StartChildWorkflowExecution decision to start this child workflow execution. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.

" + "documentation":"

The ID of the StartChildWorkflowExecutionInitiated event corresponding to the StartChildWorkflowExecution Decision to start this child workflow execution. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.

" }, "startedEventId":{ "shape":"EventId", "documentation":"

The ID of the ChildWorkflowExecutionStarted event recorded when this child workflow execution was started. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.

" } }, - "documentation":"

Provides details of the ChildWorkflowExecutionTimedOut event.

" + "documentation":"

Provides the details of the ChildWorkflowExecutionTimedOut event.

" }, "CloseStatus":{ "type":"string", @@ -1466,7 +1112,7 @@ "members":{ "status":{ "shape":"CloseStatus", - "documentation":"

Required. The close status that must match the close status of an execution for it to meet the criteria of this filter.

" + "documentation":"

The close status that must match the close status of an execution for it to meet the criteria of this filter.

" } }, "documentation":"

Used to filter the closed workflow executions in visibility APIs by their close status.
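As an informal illustration only (not part of the model change itself), the following sketch shows how this filter shape might be passed to a visibility API with the boto3 swf client; the domain name, date, and COMPLETED status are placeholder assumptions, while the parameter names mirror the model members in this file.

    import datetime
    import boto3

    swf = boto3.client("swf")

    # A time filter is required for the closed-executions visibility APIs;
    # closeStatusFilter then narrows the results by close status.
    response = swf.list_closed_workflow_executions(
        domain="ExampleDomain",                                    # placeholder domain
        startTimeFilter={"oldestDate": datetime.datetime(2024, 1, 1)},
        closeStatusFilter={"status": "COMPLETED"},                 # CloseStatusFilter.status
    )
    for info in response["executionInfos"]:
        print(info["execution"]["workflowId"], info["closeStatus"])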

" @@ -1479,7 +1125,7 @@ "documentation":"

The result of the workflow execution. The form of the result is implementation defined.

" } }, - "documentation":"

Provides details of the CompleteWorkflowExecution decision.

Access Control

You can use IAM policies to control this decision's access to Amazon SWF resources as follows:

If the caller does not have sufficient permissions to invoke the action, or the parameter values fall outside the specified constraints, the action fails. The associated event attribute's cause parameter will be set to OPERATION_NOT_PERMITTED. For details and example IAM policies, see Using IAM to Manage Access to Amazon SWF Workflows.

" + "documentation":"

Provides the details of the CompleteWorkflowExecution decision.

Access Control

You can use IAM policies to control this decision's access to Amazon SWF resources as follows:

If the caller doesn't have sufficient permissions to invoke the action, or the parameter values fall outside the specified constraints, the action fails. The associated event attribute's cause parameter is set to OPERATION_NOT_PERMITTED. For details and example IAM policies, see Using IAM to Manage Access to Amazon SWF Workflows in the Amazon SWF Developer Guide.

" }, "CompleteWorkflowExecutionFailedCause":{ "type":"string", @@ -1497,14 +1143,14 @@ "members":{ "cause":{ "shape":"CompleteWorkflowExecutionFailedCause", - "documentation":"

The cause of the failure. This information is generated by the system and can be useful for diagnostic purposes.

If cause is set to OPERATION_NOT_PERMITTED, the decision failed because it lacked sufficient permissions. For details and example IAM policies, see Using IAM to Manage Access to Amazon SWF Workflows." + "documentation":"

The cause of the failure. This information is generated by the system and can be useful for diagnostic purposes.

If cause is set to OPERATION_NOT_PERMITTED, the decision failed because it lacked sufficient permissions. For details and example IAM policies, see Using IAM to Manage Access to Amazon SWF Workflows in the Amazon SWF Developer Guide.

" }, "decisionTaskCompletedEventId":{ "shape":"EventId", "documentation":"

The ID of the DecisionTaskCompleted event corresponding to the decision task that resulted in the CompleteWorkflowExecution decision to complete this execution. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.

" } }, - "documentation":"

Provides details of the CompleteWorkflowExecutionFailed event.

" + "documentation":"

Provides the details of the CompleteWorkflowExecutionFailed event.

" }, "ContinueAsNewWorkflowExecutionDecisionAttributes":{ "type":"structure", @@ -1515,32 +1161,38 @@ }, "executionStartToCloseTimeout":{ "shape":"DurationInSecondsOptional", - "documentation":"

If set, specifies the total duration for this workflow execution. This overrides the defaultExecutionStartToCloseTimeout specified when registering the workflow type.

The duration is specified in seconds; an integer greater than or equal to 0. The value \"NONE\" can be used to specify unlimited duration.

An execution start-to-close timeout for this workflow execution must be specified either as a default for the workflow type or through this field. If neither this field is set nor a default execution start-to-close timeout was specified at registration time then a fault will be returned." + "documentation":"

If set, specifies the total duration for this workflow execution. This overrides the defaultExecutionStartToCloseTimeout specified when registering the workflow type.

The duration is specified in seconds, an integer greater than or equal to 0. You can use NONE to specify unlimited duration.

An execution start-to-close timeout for this workflow execution must be specified either as a default for the workflow type or through this field. If neither this field is set nor a default execution start-to-close timeout was specified at registration time, a fault is returned.

" + }, + "taskList":{ + "shape":"TaskList", + "documentation":"

The task list to use for the decisions of the new (continued) workflow execution.

" }, - "taskList":{"shape":"TaskList"}, "taskPriority":{ "shape":"TaskPriority", - "documentation":"

Optional. The task priority that, if set, specifies the priority for the decision tasks for this workflow execution. This overrides the defaultTaskPriority specified when registering the workflow type. Valid values are integers that range from Java's Integer.MIN_VALUE (-2147483648) to Integer.MAX_VALUE (2147483647). Higher numbers indicate higher priority.

For more information about setting task priority, see Setting Task Priority in the Amazon Simple Workflow Developer Guide.

" + "documentation":"

The task priority that, if set, specifies the priority for the decision tasks for this workflow execution. This overrides the defaultTaskPriority specified when registering the workflow type. Valid values are integers that range from Java's Integer.MIN_VALUE (-2147483648) to Integer.MAX_VALUE (2147483647). Higher numbers indicate higher priority.

For more information about setting task priority, see Setting Task Priority in the Amazon SWF Developer Guide.

" }, "taskStartToCloseTimeout":{ "shape":"DurationInSecondsOptional", - "documentation":"

Specifies the maximum duration of decision tasks for the new workflow execution. This parameter overrides the defaultTaskStartToCloseTimout specified when registering the workflow type using RegisterWorkflowType.

The duration is specified in seconds; an integer greater than or equal to 0. The value \"NONE\" can be used to specify unlimited duration.

A task start-to-close timeout for the new workflow execution must be specified either as a default for the workflow type or through this parameter. If neither this parameter is set nor a default task start-to-close timeout was specified at registration time then a fault will be returned." + "documentation":"

Specifies the maximum duration of decision tasks for the new workflow execution. This parameter overrides the defaultTaskStartToCloseTimeout specified when registering the workflow type using RegisterWorkflowType.

The duration is specified in seconds, an integer greater than or equal to 0. You can use NONE to specify unlimited duration.

A task start-to-close timeout for the new workflow execution must be specified either as a default for the workflow type or through this parameter. If neither this parameter is set nor a default task start-to-close timeout was specified at registration time, a fault is returned.

" }, "childPolicy":{ "shape":"ChildPolicy", - "documentation":"

If set, specifies the policy to use for the child workflow executions of the new execution if it is terminated by calling the TerminateWorkflowExecution action explicitly or due to an expired timeout. This policy overrides the default child policy specified when registering the workflow type using RegisterWorkflowType.

The supported child policies are:

A child policy for this workflow execution must be specified either as a default for the workflow type or through this parameter. If neither this parameter is set nor a default child policy was specified at registration time then a fault will be returned." + "documentation":"

If set, specifies the policy to use for the child workflow executions of the new execution if it is terminated by calling the TerminateWorkflowExecution action explicitly or due to an expired timeout. This policy overrides the default child policy specified when registering the workflow type using RegisterWorkflowType.

The supported child policies are:

A child policy for this workflow execution must be specified either as a default for the workflow type or through this parameter. If neither this parameter is set nor a default child policy was specified at registration time, a fault is returned.

" }, "tagList":{ "shape":"TagList", "documentation":"

The list of tags to associate with the new workflow execution. A maximum of 5 tags can be specified. You can list workflow executions with a specific tag by calling ListOpenWorkflowExecutions or ListClosedWorkflowExecutions and specifying a TagFilter.

" }, - "workflowTypeVersion":{"shape":"Version"}, + "workflowTypeVersion":{ + "shape":"Version", + "documentation":"

The version of the workflow to start.

" + }, "lambdaRole":{ "shape":"Arn", - "documentation":"

The ARN of an IAM role that authorizes Amazon SWF to invoke AWS Lambda functions.

In order for this workflow execution to invoke AWS Lambda functions, an appropriate IAM role must be specified either as a default for the workflow type or through this field." + "documentation":"

The IAM role to attach to the new (continued) execution.

" } }, - "documentation":"

Provides details of the ContinueAsNewWorkflowExecution decision.

Access Control

You can use IAM policies to control this decision's access to Amazon SWF resources as follows:

If the caller does not have sufficient permissions to invoke the action, or the parameter values fall outside the specified constraints, the action fails. The associated event attribute's cause parameter will be set to OPERATION_NOT_PERMITTED. For details and example IAM policies, see Using IAM to Manage Access to Amazon SWF Workflows.

" + "documentation":"

Provides the details of the ContinueAsNewWorkflowExecution decision.

Access Control

You can use IAM policies to control this decision's access to Amazon SWF resources as follows:

If the caller doesn't have sufficient permissions to invoke the action, or the parameter values fall outside the specified constraints, the action fails. The associated event attribute's cause parameter is set to OPERATION_NOT_PERMITTED. For details and example IAM policies, see Using IAM to Manage Access to Amazon SWF Workflows in the Amazon SWF Developer Guide.
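Purely as a hedged sketch of how these attributes fit together (the names, ARN, and durations below are placeholder assumptions, not values from this diff), a decider using boto3 might build the attributes like this and return them in a ContinueAsNewWorkflowExecution decision through RespondDecisionTaskCompleted:

    # Duration fields are strings in the model; "NONE" means unlimited.
    continue_as_new_attrs = {
        "input": "batch=2024-01-02",                      # implementation-defined payload
        "executionStartToCloseTimeout": "3600",           # seconds, overrides the registered default
        "taskList": {"name": "example-task-list"},        # task list for the continued execution
        "taskPriority": "10",                             # higher numbers indicate higher priority
        "taskStartToCloseTimeout": "NONE",                # unlimited decision-task duration
        "childPolicy": "TERMINATE",                       # TERMINATE | REQUEST_CANCEL | ABANDON
        "tagList": ["nightly"],                           # up to 5 tags
        "workflowTypeVersion": "1.1",                     # version of the workflow to start
        "lambdaRole": "arn:aws:iam::111122223333:role/example-swf-role",  # placeholder ARN
    }

    decision = {
        "decisionType": "ContinueAsNewWorkflowExecution",
        "continueAsNewWorkflowExecutionDecisionAttributes": continue_as_new_attrs,
    }
    # The decision would be returned in the decisions list of RespondDecisionTaskCompleted.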

" }, "ContinueAsNewWorkflowExecutionFailedCause":{ "type":"string", @@ -1565,14 +1217,14 @@ "members":{ "cause":{ "shape":"ContinueAsNewWorkflowExecutionFailedCause", - "documentation":"

The cause of the failure. This information is generated by the system and can be useful for diagnostic purposes.

If cause is set to OPERATION_NOT_PERMITTED, the decision failed because it lacked sufficient permissions. For details and example IAM policies, see Using IAM to Manage Access to Amazon SWF Workflows." + "documentation":"

The cause of the failure. This information is generated by the system and can be useful for diagnostic purposes.

If cause is set to OPERATION_NOT_PERMITTED, the decision failed because it lacked sufficient permissions. For details and example IAM policies, see Using IAM to Manage Access to Amazon SWF Workflows in the Amazon SWF Developer Guide.

" }, "decisionTaskCompletedEventId":{ "shape":"EventId", "documentation":"

The ID of the DecisionTaskCompleted event corresponding to the decision task that resulted in the ContinueAsNewWorkflowExecution decision that started this execution. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.

" } }, - "documentation":"

Provides details of the ContinueAsNewWorkflowExecutionFailed event.

" + "documentation":"

Provides the details of the ContinueAsNewWorkflowExecutionFailed event.

" }, "Count":{ "type":"integer", @@ -1588,27 +1240,27 @@ }, "startTimeFilter":{ "shape":"ExecutionTimeFilter", - "documentation":"

If specified, only workflow executions that meet the start time criteria of the filter are counted.

startTimeFilter and closeTimeFilter are mutually exclusive. You must specify one of these in a request but not both." + "documentation":"

If specified, only workflow executions that meet the start time criteria of the filter are counted.

startTimeFilter and closeTimeFilter are mutually exclusive. You must specify one of these in a request but not both.

" }, "closeTimeFilter":{ "shape":"ExecutionTimeFilter", - "documentation":"

If specified, only workflow executions that meet the close time criteria of the filter are counted.

startTimeFilter and closeTimeFilter are mutually exclusive. You must specify one of these in a request but not both." + "documentation":"

If specified, only workflow executions that meet the close time criteria of the filter are counted.

startTimeFilter and closeTimeFilter are mutually exclusive. You must specify one of these in a request but not both.

" }, "executionFilter":{ "shape":"WorkflowExecutionFilter", - "documentation":"

If specified, only workflow executions matching the WorkflowId in the filter are counted.

closeStatusFilter, executionFilter, typeFilter and tagFilter are mutually exclusive. You can specify at most one of these in a request." + "documentation":"

If specified, only workflow executions matching the WorkflowId in the filter are counted.

closeStatusFilter, executionFilter, typeFilter and tagFilter are mutually exclusive. You can specify at most one of these in a request.

" }, "typeFilter":{ "shape":"WorkflowTypeFilter", - "documentation":"

If specified, indicates the type of the workflow executions to be counted.

closeStatusFilter, executionFilter, typeFilter and tagFilter are mutually exclusive. You can specify at most one of these in a request." + "documentation":"

If specified, indicates the type of the workflow executions to be counted.

closeStatusFilter, executionFilter, typeFilter and tagFilter are mutually exclusive. You can specify at most one of these in a request.

" }, "tagFilter":{ "shape":"TagFilter", - "documentation":"

If specified, only executions that have a tag that matches the filter are counted.

closeStatusFilter, executionFilter, typeFilter and tagFilter are mutually exclusive. You can specify at most one of these in a request." + "documentation":"

If specified, only executions that have a tag that matches the filter are counted.

closeStatusFilter, executionFilter, typeFilter and tagFilter are mutually exclusive. You can specify at most one of these in a request.

" }, "closeStatusFilter":{ "shape":"CloseStatusFilter", - "documentation":"

If specified, only workflow executions that match this close status are counted. This filter has an affect only if executionStatus is specified as CLOSED.

closeStatusFilter, executionFilter, typeFilter and tagFilter are mutually exclusive. You can specify at most one of these in a request." + "documentation":"

If specified, only workflow executions that match this close status are counted. This filter has an effect only if executionStatus is specified as CLOSED.

closeStatusFilter, executionFilter, typeFilter and tagFilter are mutually exclusive. You can specify at most one of these in a request.
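To make the filter rules above concrete, here is a minimal, hedged boto3 sketch (the domain and workflow type are placeholders): a time filter is required, and at most one of the result filters may accompany it.

    import datetime
    import boto3

    swf = boto3.client("swf")

    response = swf.count_closed_workflow_executions(
        domain="ExampleDomain",
        # startTimeFilter and closeTimeFilter are mutually exclusive; one is required.
        startTimeFilter={"oldestDate": datetime.datetime(2024, 1, 1)},
        # typeFilter, tagFilter, executionFilter, and closeStatusFilter are mutually
        # exclusive; at most one may be supplied.
        typeFilter={"name": "ExampleWorkflow", "version": "1.0"},
    )
    print(response["count"], "truncated:", response["truncated"])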

" } } }, @@ -1629,15 +1281,15 @@ }, "typeFilter":{ "shape":"WorkflowTypeFilter", - "documentation":"

Specifies the type of the workflow executions to be counted.

executionFilter, typeFilter and tagFilter are mutually exclusive. You can specify at most one of these in a request." + "documentation":"

Specifies the type of the workflow executions to be counted.

executionFilter, typeFilter and tagFilter are mutually exclusive. You can specify at most one of these in a request.

" }, "tagFilter":{ "shape":"TagFilter", - "documentation":"

If specified, only executions that have a tag that matches the filter are counted.

executionFilter, typeFilter and tagFilter are mutually exclusive. You can specify at most one of these in a request." + "documentation":"

If specified, only executions that have a tag that matches the filter are counted.

executionFilter, typeFilter and tagFilter are mutually exclusive. You can specify at most one of these in a request.

" }, "executionFilter":{ "shape":"WorkflowExecutionFilter", - "documentation":"

If specified, only workflow executions matching the WorkflowId in the filter are counted.

executionFilter, typeFilter and tagFilter are mutually exclusive. You can specify at most one of these in a request." + "documentation":"

If specified, only workflow executions matching the WorkflowId in the filter are counted.

executionFilter, typeFilter and tagFilter are mutually exclusive. You can specify at most one of these in a request.

" } } }, @@ -1689,55 +1341,58 @@ }, "scheduleActivityTaskDecisionAttributes":{ "shape":"ScheduleActivityTaskDecisionAttributes", - "documentation":"

Provides details of the ScheduleActivityTask decision. It is not set for other decision types.

" + "documentation":"

Provides the details of the ScheduleActivityTask decision. It isn't set for other decision types.

" }, "requestCancelActivityTaskDecisionAttributes":{ "shape":"RequestCancelActivityTaskDecisionAttributes", - "documentation":"

Provides details of the RequestCancelActivityTask decision. It is not set for other decision types.

" + "documentation":"

Provides the details of the RequestCancelActivityTask decision. It isn't set for other decision types.

" }, "completeWorkflowExecutionDecisionAttributes":{ "shape":"CompleteWorkflowExecutionDecisionAttributes", - "documentation":"

Provides details of the CompleteWorkflowExecution decision. It is not set for other decision types.

" + "documentation":"

Provides the details of the CompleteWorkflowExecution decision. It isn't set for other decision types.

" }, "failWorkflowExecutionDecisionAttributes":{ "shape":"FailWorkflowExecutionDecisionAttributes", - "documentation":"

Provides details of the FailWorkflowExecution decision. It is not set for other decision types.

" + "documentation":"

Provides the details of the FailWorkflowExecution decision. It isn't set for other decision types.

" }, "cancelWorkflowExecutionDecisionAttributes":{ "shape":"CancelWorkflowExecutionDecisionAttributes", - "documentation":"

Provides details of the CancelWorkflowExecution decision. It is not set for other decision types.

" + "documentation":"

Provides the details of the CancelWorkflowExecution decision. It isn't set for other decision types.

" }, "continueAsNewWorkflowExecutionDecisionAttributes":{ "shape":"ContinueAsNewWorkflowExecutionDecisionAttributes", - "documentation":"

Provides details of the ContinueAsNewWorkflowExecution decision. It is not set for other decision types.

" + "documentation":"

Provides the details of the ContinueAsNewWorkflowExecution decision. It isn't set for other decision types.

" }, "recordMarkerDecisionAttributes":{ "shape":"RecordMarkerDecisionAttributes", - "documentation":"

Provides details of the RecordMarker decision. It is not set for other decision types.

" + "documentation":"

Provides the details of the RecordMarker decision. It isn't set for other decision types.

" }, "startTimerDecisionAttributes":{ "shape":"StartTimerDecisionAttributes", - "documentation":"

Provides details of the StartTimer decision. It is not set for other decision types.

" + "documentation":"

Provides the details of the StartTimer decision. It isn't set for other decision types.

" }, "cancelTimerDecisionAttributes":{ "shape":"CancelTimerDecisionAttributes", - "documentation":"

Provides details of the CancelTimer decision. It is not set for other decision types.

" + "documentation":"

Provides the details of the CancelTimer decision. It isn't set for other decision types.

" }, "signalExternalWorkflowExecutionDecisionAttributes":{ "shape":"SignalExternalWorkflowExecutionDecisionAttributes", - "documentation":"

Provides details of the SignalExternalWorkflowExecution decision. It is not set for other decision types.

" + "documentation":"

Provides the details of the SignalExternalWorkflowExecution decision. It isn't set for other decision types.

" }, "requestCancelExternalWorkflowExecutionDecisionAttributes":{ "shape":"RequestCancelExternalWorkflowExecutionDecisionAttributes", - "documentation":"

Provides details of the RequestCancelExternalWorkflowExecution decision. It is not set for other decision types.

" + "documentation":"

Provides the details of the RequestCancelExternalWorkflowExecution decision. It isn't set for other decision types.

" }, "startChildWorkflowExecutionDecisionAttributes":{ "shape":"StartChildWorkflowExecutionDecisionAttributes", - "documentation":"

Provides details of the StartChildWorkflowExecution decision. It is not set for other decision types.

" + "documentation":"

Provides the details of the StartChildWorkflowExecution decision. It isn't set for other decision types.

" }, - "scheduleLambdaFunctionDecisionAttributes":{"shape":"ScheduleLambdaFunctionDecisionAttributes"} + "scheduleLambdaFunctionDecisionAttributes":{ + "shape":"ScheduleLambdaFunctionDecisionAttributes", + "documentation":"

Provides the details of the ScheduleLambdaFunction decision. It isn't set for other decision types.

" + } }, - "documentation":"

Specifies a decision made by the decider. A decision can be one of these types:

Access Control

If you grant permission to use RespondDecisionTaskCompleted, you can use IAM policies to express permissions for the list of decisions returned by this action as if they were members of the API. Treating decisions as a pseudo API maintains a uniform conceptual model and helps keep policies readable. For details and example IAM policies, see Using IAM to Manage Access to Amazon SWF Workflows.

Decision Failure

Decisions can fail for several reasons

One of the following events might be added to the history to indicate an error. The event attribute's cause parameter indicates the cause. If cause is set to OPERATION_NOT_PERMITTED, the decision failed because it lacked sufficient permissions. For details and example IAM policies, see Using IAM to Manage Access to Amazon SWF Workflows.

The preceding error events might occur due to an error in the decider logic, which might put the workflow execution in an unstable state The cause field in the event structure for the error event indicates the cause of the error.

A workflow execution may be closed by the decider by returning one of the following decisions when completing a decision task: CompleteWorkflowExecution, FailWorkflowExecution, CancelWorkflowExecution and ContinueAsNewWorkflowExecution. An UnhandledDecision fault will be returned if a workflow closing decision is specified and a signal or activity event had been added to the history while the decision task was being performed by the decider. Unlike the above situations which are logic issues, this fault is always possible because of race conditions in a distributed system. The right action here is to call RespondDecisionTaskCompleted without any decisions. This would result in another decision task with these new events included in the history. The decider should handle the new events and may decide to close the workflow execution.

How to code a decision

You code a decision by first setting the decision type field to one of the above decision values, and then set the corresponding attributes field shown below:

" + "documentation":"

Specifies a decision made by the decider. A decision can be one of these types:

Access Control

If you grant permission to use RespondDecisionTaskCompleted, you can use IAM policies to express permissions for the list of decisions returned by this action as if they were members of the API. Treating decisions as a pseudo API maintains a uniform conceptual model and helps keep policies readable. For details and example IAM policies, see Using IAM to Manage Access to Amazon SWF Workflows in the Amazon SWF Developer Guide.

Decision Failure

Decisions can fail for several reasons

One of the following events might be added to the history to indicate an error. The event attribute's cause parameter indicates the cause. If cause is set to OPERATION_NOT_PERMITTED, the decision failed because it lacked sufficient permissions. For details and example IAM policies, see Using IAM to Manage Access to Amazon SWF Workflows in the Amazon SWF Developer Guide.

The preceding error events might occur due to an error in the decider logic, which might put the workflow execution in an unstable state. The cause field in the event structure for the error event indicates the cause of the error.

A workflow execution may be closed by the decider by returning one of the following decisions when completing a decision task: CompleteWorkflowExecution, FailWorkflowExecution, CancelWorkflowExecution, and ContinueAsNewWorkflowExecution. An UnhandledDecision fault is returned if a workflow closing decision is specified and a signal or activity event had been added to the history while the decision task was being performed by the decider. Unlike the above situations, which are logic issues, this fault is always possible because of race conditions in a distributed system. The right action here is to call RespondDecisionTaskCompleted without any decisions. This results in another decision task with these new events included in the history. The decider should handle the new events and may decide to close the workflow execution.

How to Code a Decision

You code a decision by first setting the decision type field to one of the above decision values, and then setting the corresponding attributes field shown below:
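As a non-authoritative sketch of that pattern with the boto3 swf client (the domain, task list, and result payload are assumptions for the example), a decider could close an execution like this:

    import boto3

    swf = boto3.client("swf")

    # Poll for a decision task, then return a single closing decision.
    task = swf.poll_for_decision_task(
        domain="ExampleDomain",
        taskList={"name": "example-task-list"},
    )
    if task.get("taskToken"):                      # an empty token means the poll timed out
        swf.respond_decision_task_completed(
            taskToken=task["taskToken"],
            decisions=[
                {
                    "decisionType": "CompleteWorkflowExecution",
                    "completeWorkflowExecutionDecisionAttributes": {"result": "order shipped"},
                }
            ],
        )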

" }, "DecisionList":{ "type":"list", @@ -1804,7 +1459,7 @@ "documentation":"

The ID of the DecisionTaskStarted event recorded when this decision task was started. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.

" } }, - "documentation":"

Provides details of the DecisionTaskCompleted event.

" + "documentation":"

Provides the details of the DecisionTaskCompleted event.

" }, "DecisionTaskScheduledEventAttributes":{ "type":"structure", @@ -1816,11 +1471,11 @@ }, "taskPriority":{ "shape":"TaskPriority", - "documentation":"

Optional. A task priority that, if set, specifies the priority for this decision task. Valid values are integers that range from Java's Integer.MIN_VALUE (-2147483648) to Integer.MAX_VALUE (2147483647). Higher numbers indicate higher priority.

For more information about setting task priority, see Setting Task Priority in the Amazon Simple Workflow Developer Guide.

" + "documentation":"

A task priority that, if set, specifies the priority for this decision task. Valid values are integers that range from Java's Integer.MIN_VALUE (-2147483648) to Integer.MAX_VALUE (2147483647). Higher numbers indicate higher priority.

For more information about setting task priority, see Setting Task Priority in the Amazon SWF Developer Guide.

" }, "startToCloseTimeout":{ "shape":"DurationInSecondsOptional", - "documentation":"

The maximum duration for this decision task. The task is considered timed out if it does not completed within this duration.

The duration is specified in seconds; an integer greater than or equal to 0. The value \"NONE\" can be used to specify unlimited duration.

" + "documentation":"

The maximum duration for this decision task. The task is considered timed out if it doesn't complete within this duration.

The duration is specified in seconds, an integer greater than or equal to 0. You can use NONE to specify unlimited duration.

" } }, "documentation":"

Provides details about the DecisionTaskScheduled event.

" @@ -1838,7 +1493,7 @@ "documentation":"

The ID of the DecisionTaskScheduled event that was recorded when this decision task was scheduled. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.

" } }, - "documentation":"

Provides details of the DecisionTaskStarted event.

" + "documentation":"

Provides the details of the DecisionTaskStarted event.

" }, "DecisionTaskTimedOutEventAttributes":{ "type":"structure", @@ -1861,7 +1516,7 @@ "documentation":"

The ID of the DecisionTaskStarted event recorded when this decision task was started. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.

" } }, - "documentation":"

Provides details of the DecisionTaskTimedOut event.

" + "documentation":"

Provides the details of the DecisionTaskTimedOut event.

" }, "DecisionTaskTimeoutType":{ "type":"string", @@ -1890,6 +1545,7 @@ "members":{ "message":{"shape":"ErrorMessage"} }, + "documentation":"

The StartWorkflowExecution API action was called without the required parameters set.

Some workflow execution parameters, such as the decision taskList, must be set to start the execution. However, these parameters might have been set as defaults when the workflow type was registered. In this case, you can omit these parameters from the StartWorkflowExecution call and Amazon SWF uses the values defined in the workflow type.

If these parameters aren't set and no default parameters were defined in the workflow type, this error is displayed.
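For illustration only (the domain, workflow ID, and type below are placeholders), a StartWorkflowExecution call that relies on the defaults registered with the workflow type might look like this in boto3; if those defaults were never registered, this is the call that surfaces DefaultUndefinedFault:

    import boto3

    swf = boto3.client("swf")

    response = swf.start_workflow_execution(
        domain="ExampleDomain",
        workflowId="order-12345",
        workflowType={"name": "ExampleWorkflow", "version": "1.0"},
        # taskList, timeouts, and childPolicy are omitted on purpose: Amazon SWF falls
        # back to the defaults registered with RegisterWorkflowType. If neither a value
        # here nor a registered default exists, the call fails with DefaultUndefinedFault.
    )
    print("runId:", response["runId"])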

", "exception":true }, "DeprecateActivityTypeInput":{ @@ -2009,8 +1665,8 @@ "documentation":"

A description that may help with diagnosing the cause of the fault.

" } }, - "exception":true, - "documentation":"

Returned if the specified domain already exists. You will get this fault even if the existing domain is in deprecated status.

" + "documentation":"

Returned if the specified domain already exists. You get this fault even if the existing domain is in deprecated status.

", + "exception":true }, "DomainConfiguration":{ "type":"structure", @@ -2031,8 +1687,8 @@ "documentation":"

A description that may help with diagnosing the cause of the fault.

" } }, - "exception":true, - "documentation":"

Returned when the specified domain has been deprecated.

" + "documentation":"

Returned when the specified domain has been deprecated.

", + "exception":true }, "DomainDetail":{ "type":"structure", @@ -2041,8 +1697,14 @@ "configuration" ], "members":{ - "domainInfo":{"shape":"DomainInfo"}, - "configuration":{"shape":"DomainConfiguration"} + "domainInfo":{ + "shape":"DomainInfo", + "documentation":"

The basic information about a domain, such as its name, status, and description.

" + }, + "configuration":{ + "shape":"DomainConfiguration", + "documentation":"

The domain configuration. Currently, this includes only the domain's retention period.

" + } }, "documentation":"

Contains details of a domain.

" }, @@ -2059,7 +1721,7 @@ }, "status":{ "shape":"RegistrationStatus", - "documentation":"

The status of the domain:

" + "documentation":"

The status of the domain:

" }, "description":{ "shape":"Description", @@ -2089,18 +1751,18 @@ }, "DomainName":{ "type":"string", - "min":1, - "max":256 + "max":256, + "min":1 }, "DurationInDays":{ "type":"string", - "min":1, - "max":8 + "max":8, + "min":1 }, "DurationInSeconds":{ "type":"string", - "min":1, - "max":8 + "max":8, + "min":1 }, "DurationInSecondsOptional":{ "type":"string", @@ -2187,7 +1849,7 @@ "documentation":"

Specifies the latest start or close date and time to return.

" } }, - "documentation":"

Used to filter the workflow executions in visibility APIs by various time-based rules. Each parameter, if specified, defines a rule that must be satisfied by each returned query result. The parameter values are in the Unix Time format. For example: \"oldestDate\": 1325376070.

" + "documentation":"

Used to filter the workflow executions in visibility APIs by various time-based rules. Each parameter, if specified, defines a rule that must be satisfied by each returned query result. The parameter values are in the Unix Time format. For example: \"oldestDate\": 1325376070.

" }, "ExternalWorkflowExecutionCancelRequestedEventAttributes":{ "type":"structure", @@ -2205,7 +1867,7 @@ "documentation":"

The ID of the RequestCancelExternalWorkflowExecutionInitiated event corresponding to the RequestCancelExternalWorkflowExecution decision to cancel this external workflow execution. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.

" } }, - "documentation":"

Provides details of the ExternalWorkflowExecutionCancelRequested event.

" + "documentation":"

Provides the details of the ExternalWorkflowExecutionCancelRequested event.

" }, "ExternalWorkflowExecutionSignaledEventAttributes":{ "type":"structure", @@ -2216,14 +1878,14 @@ "members":{ "workflowExecution":{ "shape":"WorkflowExecution", - "documentation":"

The external workflow execution that the signal was delivered to.

" + "documentation":"

The external workflow execution that the signal was delivered to.

" }, "initiatedEventId":{ "shape":"EventId", "documentation":"

The ID of the SignalExternalWorkflowExecutionInitiated event corresponding to the SignalExternalWorkflowExecution decision to request this signal. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.

" } }, - "documentation":"

Provides details of the ExternalWorkflowExecutionSignaled event.

" + "documentation":"

Provides the details of the ExternalWorkflowExecutionSignaled event.

" }, "FailWorkflowExecutionDecisionAttributes":{ "type":"structure", @@ -2234,10 +1896,10 @@ }, "details":{ "shape":"Data", - "documentation":"

Optional. Details of the failure.

" + "documentation":"

Details of the failure.

" } }, - "documentation":"

Provides details of the FailWorkflowExecution decision.

Access Control

You can use IAM policies to control this decision's access to Amazon SWF resources as follows:

If the caller does not have sufficient permissions to invoke the action, or the parameter values fall outside the specified constraints, the action fails. The associated event attribute's cause parameter will be set to OPERATION_NOT_PERMITTED. For details and example IAM policies, see Using IAM to Manage Access to Amazon SWF Workflows.

" + "documentation":"

Provides the details of the FailWorkflowExecution decision.

Access Control

You can use IAM policies to control this decision's access to Amazon SWF resources as follows:

If the caller doesn't have sufficient permissions to invoke the action, or the parameter values fall outside the specified constraints, the action fails. The associated event attribute's cause parameter is set to OPERATION_NOT_PERMITTED. For details and example IAM policies, see Using IAM to Manage Access to Amazon SWF Workflows in the Amazon SWF Developer Guide.
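As with the other closing decisions, a hedged sketch of these attributes (the reason and details strings are placeholders): this dictionary would be returned in the decisions list of RespondDecisionTaskCompleted.

    fail_decision = {
        "decisionType": "FailWorkflowExecution",
        "failWorkflowExecutionDecisionAttributes": {
            "reason": "DownstreamTimeout",        # short, implementation-defined reason
            "details": "payment service did not respond within 30s",
        },
    }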

" }, "FailWorkflowExecutionFailedCause":{ "type":"string", @@ -2255,14 +1917,14 @@ "members":{ "cause":{ "shape":"FailWorkflowExecutionFailedCause", - "documentation":"

The cause of the failure. This information is generated by the system and can be useful for diagnostic purposes.

If cause is set to OPERATION_NOT_PERMITTED, the decision failed because it lacked sufficient permissions. For details and example IAM policies, see Using IAM to Manage Access to Amazon SWF Workflows." + "documentation":"

The cause of the failure. This information is generated by the system and can be useful for diagnostic purposes.

If cause is set to OPERATION_NOT_PERMITTED, the decision failed because it lacked sufficient permissions. For details and example IAM policies, see Using IAM to Manage Access to Amazon SWF Workflows in the Amazon SWF Developer Guide.

" }, "decisionTaskCompletedEventId":{ "shape":"EventId", "documentation":"

The ID of the DecisionTaskCompleted event corresponding to the decision task that resulted in the FailWorkflowExecution decision to fail this execution. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.

" } }, - "documentation":"

Provides details of the FailWorkflowExecutionFailed event.

" + "documentation":"

Provides the details of the FailWorkflowExecutionFailed event.

" }, "FailureReason":{ "type":"string", @@ -2270,18 +1932,18 @@ }, "FunctionId":{ "type":"string", - "min":1, - "max":256 + "max":256, + "min":1 }, "FunctionInput":{ "type":"string", - "min":1, - "max":32768 + "max":32768, + "min":0 }, "FunctionName":{ "type":"string", - "min":1, - "max":64 + "max":64, + "min":1 }, "GetWorkflowExecutionHistoryInput":{ "type":"structure", @@ -2304,7 +1966,7 @@ }, "maximumPageSize":{ "shape":"PageSize", - "documentation":"

The maximum number of results that will be returned per call. nextPageToken can be used to obtain futher pages of results. The default is 1000, which is the maximum allowed page size. You can, however, specify a page size smaller than the maximum.

This is an upper limit only; the actual number of results returned per call may be fewer than the specified maximum.

" + "documentation":"

The maximum number of results that are returned per call. nextPageToken can be used to obtain further pages of results. The default is 1000, which is the maximum allowed page size. You can, however, specify a page size smaller than the maximum.

This is an upper limit only; the actual number of results returned per call may be fewer than the specified maximum.
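A hedged pagination sketch with boto3 (the domain, workflowId, and runId are placeholders) shows how maximumPageSize and nextPageToken interact:

    import boto3

    swf = boto3.client("swf")

    events = []
    kwargs = {
        "domain": "ExampleDomain",
        "execution": {"workflowId": "order-12345", "runId": "example-run-id"},
        "maximumPageSize": 200,        # upper limit per call; fewer events may be returned
    }
    while True:
        page = swf.get_workflow_execution_history(**kwargs)
        events.extend(page["events"])
        token = page.get("nextPageToken")
        if not token:                  # no token means the last page was reached
            break
        kwargs["nextPageToken"] = token
    print(len(events), "events")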

" }, "reverseOrder":{ "shape":"ReverseOrder", @@ -2349,201 +2011,222 @@ }, "workflowExecutionStartedEventAttributes":{ "shape":"WorkflowExecutionStartedEventAttributes", - "documentation":"

If the event is of type WorkflowExecutionStarted then this member is set and provides detailed information about the event. It is not set for other event types.

" + "documentation":"

If the event is of type WorkflowExecutionStarted then this member is set and provides detailed information about the event. It isn't set for other event types.

" }, "workflowExecutionCompletedEventAttributes":{ "shape":"WorkflowExecutionCompletedEventAttributes", - "documentation":"

If the event is of type WorkflowExecutionCompleted then this member is set and provides detailed information about the event. It is not set for other event types.

" + "documentation":"

If the event is of type WorkflowExecutionCompleted then this member is set and provides detailed information about the event. It isn't set for other event types.

" }, "completeWorkflowExecutionFailedEventAttributes":{ "shape":"CompleteWorkflowExecutionFailedEventAttributes", - "documentation":"

If the event is of type CompleteWorkflowExecutionFailed then this member is set and provides detailed information about the event. It is not set for other event types.

" + "documentation":"

If the event is of type CompleteWorkflowExecutionFailed then this member is set and provides detailed information about the event. It isn't set for other event types.

" }, "workflowExecutionFailedEventAttributes":{ "shape":"WorkflowExecutionFailedEventAttributes", - "documentation":"

If the event is of type WorkflowExecutionFailed then this member is set and provides detailed information about the event. It is not set for other event types.

" + "documentation":"

If the event is of type WorkflowExecutionFailed then this member is set and provides detailed information about the event. It isn't set for other event types.

" }, "failWorkflowExecutionFailedEventAttributes":{ "shape":"FailWorkflowExecutionFailedEventAttributes", - "documentation":"

If the event is of type FailWorkflowExecutionFailed then this member is set and provides detailed information about the event. It is not set for other event types.

" + "documentation":"

If the event is of type FailWorkflowExecutionFailed then this member is set and provides detailed information about the event. It isn't set for other event types.

" }, "workflowExecutionTimedOutEventAttributes":{ "shape":"WorkflowExecutionTimedOutEventAttributes", - "documentation":"

If the event is of type WorkflowExecutionTimedOut then this member is set and provides detailed information about the event. It is not set for other event types.

" + "documentation":"

If the event is of type WorkflowExecutionTimedOut then this member is set and provides detailed information about the event. It isn't set for other event types.

" }, "workflowExecutionCanceledEventAttributes":{ "shape":"WorkflowExecutionCanceledEventAttributes", - "documentation":"

If the event is of type WorkflowExecutionCanceled then this member is set and provides detailed information about the event. It is not set for other event types.

" + "documentation":"

If the event is of type WorkflowExecutionCanceled then this member is set and provides detailed information about the event. It isn't set for other event types.

" }, "cancelWorkflowExecutionFailedEventAttributes":{ "shape":"CancelWorkflowExecutionFailedEventAttributes", - "documentation":"

If the event is of type CancelWorkflowExecutionFailed then this member is set and provides detailed information about the event. It is not set for other event types.

" + "documentation":"

If the event is of type CancelWorkflowExecutionFailed then this member is set and provides detailed information about the event. It isn't set for other event types.

" }, "workflowExecutionContinuedAsNewEventAttributes":{ "shape":"WorkflowExecutionContinuedAsNewEventAttributes", - "documentation":"

If the event is of type WorkflowExecutionContinuedAsNew then this member is set and provides detailed information about the event. It is not set for other event types.

" + "documentation":"

If the event is of type WorkflowExecutionContinuedAsNew then this member is set and provides detailed information about the event. It isn't set for other event types.

" }, "continueAsNewWorkflowExecutionFailedEventAttributes":{ "shape":"ContinueAsNewWorkflowExecutionFailedEventAttributes", - "documentation":"

If the event is of type ContinueAsNewWorkflowExecutionFailed then this member is set and provides detailed information about the event. It is not set for other event types.

" + "documentation":"

If the event is of type ContinueAsNewWorkflowExecutionFailed then this member is set and provides detailed information about the event. It isn't set for other event types.

" }, "workflowExecutionTerminatedEventAttributes":{ "shape":"WorkflowExecutionTerminatedEventAttributes", - "documentation":"

If the event is of type WorkflowExecutionTerminated then this member is set and provides detailed information about the event. It is not set for other event types.

" + "documentation":"

If the event is of type WorkflowExecutionTerminated then this member is set and provides detailed information about the event. It isn't set for other event types.

" }, "workflowExecutionCancelRequestedEventAttributes":{ "shape":"WorkflowExecutionCancelRequestedEventAttributes", - "documentation":"

If the event is of type WorkflowExecutionCancelRequested then this member is set and provides detailed information about the event. It is not set for other event types.

" + "documentation":"

If the event is of type WorkflowExecutionCancelRequested then this member is set and provides detailed information about the event. It isn't set for other event types.

" }, "decisionTaskScheduledEventAttributes":{ "shape":"DecisionTaskScheduledEventAttributes", - "documentation":"

If the event is of type DecisionTaskScheduled then this member is set and provides detailed information about the event. It is not set for other event types.

" + "documentation":"

If the event is of type DecisionTaskScheduled then this member is set and provides detailed information about the event. It isn't set for other event types.

" }, "decisionTaskStartedEventAttributes":{ "shape":"DecisionTaskStartedEventAttributes", - "documentation":"

If the event is of type DecisionTaskStarted then this member is set and provides detailed information about the event. It is not set for other event types.

" + "documentation":"

If the event is of type DecisionTaskStarted then this member is set and provides detailed information about the event. It isn't set for other event types.

" }, "decisionTaskCompletedEventAttributes":{ "shape":"DecisionTaskCompletedEventAttributes", - "documentation":"

If the event is of type DecisionTaskCompleted then this member is set and provides detailed information about the event. It is not set for other event types.

" + "documentation":"

If the event is of type DecisionTaskCompleted then this member is set and provides detailed information about the event. It isn't set for other event types.

" }, "decisionTaskTimedOutEventAttributes":{ "shape":"DecisionTaskTimedOutEventAttributes", - "documentation":"

If the event is of type DecisionTaskTimedOut then this member is set and provides detailed information about the event. It is not set for other event types.

" + "documentation":"

If the event is of type DecisionTaskTimedOut then this member is set and provides detailed information about the event. It isn't set for other event types.

" }, "activityTaskScheduledEventAttributes":{ "shape":"ActivityTaskScheduledEventAttributes", - "documentation":"

If the event is of type ActivityTaskScheduled then this member is set and provides detailed information about the event. It is not set for other event types.

" + "documentation":"

If the event is of type ActivityTaskScheduled then this member is set and provides detailed information about the event. It isn't set for other event types.

" }, "activityTaskStartedEventAttributes":{ "shape":"ActivityTaskStartedEventAttributes", - "documentation":"

If the event is of type ActivityTaskStarted then this member is set and provides detailed information about the event. It is not set for other event types.

" + "documentation":"

If the event is of type ActivityTaskStarted then this member is set and provides detailed information about the event. It isn't set for other event types.

" }, "activityTaskCompletedEventAttributes":{ "shape":"ActivityTaskCompletedEventAttributes", - "documentation":"

If the event is of type ActivityTaskCompleted then this member is set and provides detailed information about the event. It is not set for other event types.

" + "documentation":"

If the event is of type ActivityTaskCompleted then this member is set and provides detailed information about the event. It isn't set for other event types.

" }, "activityTaskFailedEventAttributes":{ "shape":"ActivityTaskFailedEventAttributes", - "documentation":"

If the event is of type ActivityTaskFailed then this member is set and provides detailed information about the event. It is not set for other event types.

" + "documentation":"

If the event is of type ActivityTaskFailed then this member is set and provides detailed information about the event. It isn't set for other event types.

" }, "activityTaskTimedOutEventAttributes":{ "shape":"ActivityTaskTimedOutEventAttributes", - "documentation":"

If the event is of type ActivityTaskTimedOut then this member is set and provides detailed information about the event. It is not set for other event types.

" + "documentation":"

If the event is of type ActivityTaskTimedOut then this member is set and provides detailed information about the event. It isn't set for other event types.

" }, "activityTaskCanceledEventAttributes":{ "shape":"ActivityTaskCanceledEventAttributes", - "documentation":"

If the event is of type ActivityTaskCanceled then this member is set and provides detailed information about the event. It is not set for other event types.

" + "documentation":"

If the event is of type ActivityTaskCanceled then this member is set and provides detailed information about the event. It isn't set for other event types.

" }, "activityTaskCancelRequestedEventAttributes":{ "shape":"ActivityTaskCancelRequestedEventAttributes", - "documentation":"

If the event is of type ActivityTaskcancelRequested then this member is set and provides detailed information about the event. It is not set for other event types.

" + "documentation":"

If the event is of type ActivityTaskcancelRequested then this member is set and provides detailed information about the event. It isn't set for other event types.

" }, "workflowExecutionSignaledEventAttributes":{ "shape":"WorkflowExecutionSignaledEventAttributes", - "documentation":"

If the event is of type WorkflowExecutionSignaled then this member is set and provides detailed information about the event. It is not set for other event types.

" + "documentation":"

If the event is of type WorkflowExecutionSignaled then this member is set and provides detailed information about the event. It isn't set for other event types.

" }, "markerRecordedEventAttributes":{ "shape":"MarkerRecordedEventAttributes", - "documentation":"

If the event is of type MarkerRecorded then this member is set and provides detailed information about the event. It is not set for other event types.

" + "documentation":"

If the event is of type MarkerRecorded then this member is set and provides detailed information about the event. It isn't set for other event types.

" }, "recordMarkerFailedEventAttributes":{ "shape":"RecordMarkerFailedEventAttributes", - "documentation":"

If the event is of type DecisionTaskFailed then this member is set and provides detailed information about the event. It is not set for other event types.

" + "documentation":"

If the event is of type DecisionTaskFailed then this member is set and provides detailed information about the event. It isn't set for other event types.

" }, "timerStartedEventAttributes":{ "shape":"TimerStartedEventAttributes", - "documentation":"

If the event is of type TimerStarted then this member is set and provides detailed information about the event. It is not set for other event types.

" + "documentation":"

If the event is of type TimerStarted then this member is set and provides detailed information about the event. It isn't set for other event types.

" }, "timerFiredEventAttributes":{ "shape":"TimerFiredEventAttributes", - "documentation":"

If the event is of type TimerFired then this member is set and provides detailed information about the event. It is not set for other event types.

" + "documentation":"

If the event is of type TimerFired then this member is set and provides detailed information about the event. It isn't set for other event types.

" }, "timerCanceledEventAttributes":{ "shape":"TimerCanceledEventAttributes", - "documentation":"

If the event is of type TimerCanceled then this member is set and provides detailed information about the event. It is not set for other event types.

" + "documentation":"

If the event is of type TimerCanceled then this member is set and provides detailed information about the event. It isn't set for other event types.

" }, "startChildWorkflowExecutionInitiatedEventAttributes":{ "shape":"StartChildWorkflowExecutionInitiatedEventAttributes", - "documentation":"

If the event is of type StartChildWorkflowExecutionInitiated then this member is set and provides detailed information about the event. It is not set for other event types.

" + "documentation":"

If the event is of type StartChildWorkflowExecutionInitiated then this member is set and provides detailed information about the event. It isn't set for other event types.

" }, "childWorkflowExecutionStartedEventAttributes":{ "shape":"ChildWorkflowExecutionStartedEventAttributes", - "documentation":"

If the event is of type ChildWorkflowExecutionStarted then this member is set and provides detailed information about the event. It is not set for other event types.

" + "documentation":"

If the event is of type ChildWorkflowExecutionStarted then this member is set and provides detailed information about the event. It isn't set for other event types.

" }, "childWorkflowExecutionCompletedEventAttributes":{ "shape":"ChildWorkflowExecutionCompletedEventAttributes", - "documentation":"

If the event is of type ChildWorkflowExecutionCompleted then this member is set and provides detailed information about the event. It is not set for other event types.

" + "documentation":"

If the event is of type ChildWorkflowExecutionCompleted then this member is set and provides detailed information about the event. It isn't set for other event types.

" }, "childWorkflowExecutionFailedEventAttributes":{ "shape":"ChildWorkflowExecutionFailedEventAttributes", - "documentation":"

If the event is of type ChildWorkflowExecutionFailed then this member is set and provides detailed information about the event. It is not set for other event types.

" + "documentation":"

If the event is of type ChildWorkflowExecutionFailed then this member is set and provides detailed information about the event. It isn't set for other event types.

" }, "childWorkflowExecutionTimedOutEventAttributes":{ "shape":"ChildWorkflowExecutionTimedOutEventAttributes", - "documentation":"

If the event is of type ChildWorkflowExecutionTimedOut then this member is set and provides detailed information about the event. It is not set for other event types.

" + "documentation":"

If the event is of type ChildWorkflowExecutionTimedOut then this member is set and provides detailed information about the event. It isn't set for other event types.

" }, "childWorkflowExecutionCanceledEventAttributes":{ "shape":"ChildWorkflowExecutionCanceledEventAttributes", - "documentation":"

If the event is of type ChildWorkflowExecutionCanceled then this member is set and provides detailed information about the event. It is not set for other event types.

" + "documentation":"

If the event is of type ChildWorkflowExecutionCanceled then this member is set and provides detailed information about the event. It isn't set for other event types.

" }, "childWorkflowExecutionTerminatedEventAttributes":{ "shape":"ChildWorkflowExecutionTerminatedEventAttributes", - "documentation":"

If the event is of type ChildWorkflowExecutionTerminated then this member is set and provides detailed information about the event. It is not set for other event types.

" + "documentation":"

If the event is of type ChildWorkflowExecutionTerminated then this member is set and provides detailed information about the event. It isn't set for other event types.

" }, "signalExternalWorkflowExecutionInitiatedEventAttributes":{ "shape":"SignalExternalWorkflowExecutionInitiatedEventAttributes", - "documentation":"

If the event is of type SignalExternalWorkflowExecutionInitiated then this member is set and provides detailed information about the event. It is not set for other event types.

" + "documentation":"

If the event is of type SignalExternalWorkflowExecutionInitiated then this member is set and provides detailed information about the event. It isn't set for other event types.

" }, "externalWorkflowExecutionSignaledEventAttributes":{ "shape":"ExternalWorkflowExecutionSignaledEventAttributes", - "documentation":"

If the event is of type ExternalWorkflowExecutionSignaled then this member is set and provides detailed information about the event. It is not set for other event types.

" + "documentation":"

If the event is of type ExternalWorkflowExecutionSignaled then this member is set and provides detailed information about the event. It isn't set for other event types.

" }, "signalExternalWorkflowExecutionFailedEventAttributes":{ "shape":"SignalExternalWorkflowExecutionFailedEventAttributes", - "documentation":"

If the event is of type SignalExternalWorkflowExecutionFailed then this member is set and provides detailed information about the event. It is not set for other event types.

" + "documentation":"

If the event is of type SignalExternalWorkflowExecutionFailed then this member is set and provides detailed information about the event. It isn't set for other event types.

" }, "externalWorkflowExecutionCancelRequestedEventAttributes":{ "shape":"ExternalWorkflowExecutionCancelRequestedEventAttributes", - "documentation":"

If the event is of type ExternalWorkflowExecutionCancelRequested then this member is set and provides detailed information about the event. It is not set for other event types.

" + "documentation":"

If the event is of type ExternalWorkflowExecutionCancelRequested then this member is set and provides detailed information about the event. It isn't set for other event types.

" }, "requestCancelExternalWorkflowExecutionInitiatedEventAttributes":{ "shape":"RequestCancelExternalWorkflowExecutionInitiatedEventAttributes", - "documentation":"

If the event is of type RequestCancelExternalWorkflowExecutionInitiated then this member is set and provides detailed information about the event. It is not set for other event types.

" + "documentation":"

If the event is of type RequestCancelExternalWorkflowExecutionInitiated then this member is set and provides detailed information about the event. It isn't set for other event types.

" }, "requestCancelExternalWorkflowExecutionFailedEventAttributes":{ "shape":"RequestCancelExternalWorkflowExecutionFailedEventAttributes", - "documentation":"

If the event is of type RequestCancelExternalWorkflowExecutionFailed then this member is set and provides detailed information about the event. It is not set for other event types.

" + "documentation":"

If the event is of type RequestCancelExternalWorkflowExecutionFailed then this member is set and provides detailed information about the event. It isn't set for other event types.

" }, "scheduleActivityTaskFailedEventAttributes":{ "shape":"ScheduleActivityTaskFailedEventAttributes", - "documentation":"

If the event is of type ScheduleActivityTaskFailed then this member is set and provides detailed information about the event. It is not set for other event types.

" + "documentation":"

If the event is of type ScheduleActivityTaskFailed then this member is set and provides detailed information about the event. It isn't set for other event types.

" }, "requestCancelActivityTaskFailedEventAttributes":{ "shape":"RequestCancelActivityTaskFailedEventAttributes", - "documentation":"

If the event is of type RequestCancelActivityTaskFailed then this member is set and provides detailed information about the event. It is not set for other event types.

" + "documentation":"

If the event is of type RequestCancelActivityTaskFailed then this member is set and provides detailed information about the event. It isn't set for other event types.

" }, "startTimerFailedEventAttributes":{ "shape":"StartTimerFailedEventAttributes", - "documentation":"

If the event is of type StartTimerFailed then this member is set and provides detailed information about the event. It is not set for other event types.

" + "documentation":"

If the event is of type StartTimerFailed then this member is set and provides detailed information about the event. It isn't set for other event types.

" }, "cancelTimerFailedEventAttributes":{ "shape":"CancelTimerFailedEventAttributes", - "documentation":"

If the event is of type CancelTimerFailed then this member is set and provides detailed information about the event. It is not set for other event types.

" + "documentation":"

If the event is of type CancelTimerFailed then this member is set and provides detailed information about the event. It isn't set for other event types.

" }, "startChildWorkflowExecutionFailedEventAttributes":{ "shape":"StartChildWorkflowExecutionFailedEventAttributes", - "documentation":"

If the event is of type StartChildWorkflowExecutionFailed then this member is set and provides detailed information about the event. It is not set for other event types.

" + "documentation":"

If the event is of type StartChildWorkflowExecutionFailed then this member is set and provides detailed information about the event. It isn't set for other event types.

" + }, + "lambdaFunctionScheduledEventAttributes":{ + "shape":"LambdaFunctionScheduledEventAttributes", + "documentation":"

Provides the details of the LambdaFunctionScheduled event. It isn't set for other event types.

" + }, + "lambdaFunctionStartedEventAttributes":{ + "shape":"LambdaFunctionStartedEventAttributes", + "documentation":"

Provides the details of the LambdaFunctionStarted event. It isn't set for other event types.

" + }, + "lambdaFunctionCompletedEventAttributes":{ + "shape":"LambdaFunctionCompletedEventAttributes", + "documentation":"

Provides the details of the LambdaFunctionCompleted event. It isn't set for other event types.

" + }, + "lambdaFunctionFailedEventAttributes":{ + "shape":"LambdaFunctionFailedEventAttributes", + "documentation":"

Provides the details of the LambdaFunctionFailed event. It isn't set for other event types.

" }, - "lambdaFunctionScheduledEventAttributes":{"shape":"LambdaFunctionScheduledEventAttributes"}, - "lambdaFunctionStartedEventAttributes":{"shape":"LambdaFunctionStartedEventAttributes"}, - "lambdaFunctionCompletedEventAttributes":{"shape":"LambdaFunctionCompletedEventAttributes"}, - "lambdaFunctionFailedEventAttributes":{"shape":"LambdaFunctionFailedEventAttributes"}, - "lambdaFunctionTimedOutEventAttributes":{"shape":"LambdaFunctionTimedOutEventAttributes"}, - "scheduleLambdaFunctionFailedEventAttributes":{"shape":"ScheduleLambdaFunctionFailedEventAttributes"}, - "startLambdaFunctionFailedEventAttributes":{"shape":"StartLambdaFunctionFailedEventAttributes"} + "lambdaFunctionTimedOutEventAttributes":{ + "shape":"LambdaFunctionTimedOutEventAttributes", + "documentation":"

Provides the details of the LambdaFunctionTimedOut event. It isn't set for other event types.

" + }, + "scheduleLambdaFunctionFailedEventAttributes":{ + "shape":"ScheduleLambdaFunctionFailedEventAttributes", + "documentation":"

Provides the details of the ScheduleLambdaFunctionFailed event. It isn't set for other event types.

" + }, + "startLambdaFunctionFailedEventAttributes":{ + "shape":"StartLambdaFunctionFailedEventAttributes", + "documentation":"

Provides the details of the StartLambdaFunctionFailed event. It isn't set for other event types.

" + } }, - "documentation":"

Event within a workflow execution. A history event can be one of these types:

" + "documentation":"

Event within a workflow execution. A history event can be one of these types:

" }, "HistoryEventList":{ "type":"list", @@ -2562,18 +2245,18 @@ "members":{ "scheduledEventId":{ "shape":"EventId", - "documentation":"

The ID of the LambdaFunctionScheduled event that was recorded when this AWS Lambda function was scheduled. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.

" + "documentation":"

The ID of the LambdaFunctionScheduled event that was recorded when this Lambda task was scheduled. To help diagnose issues, use this information to trace back the chain of events leading up to this event.

" }, "startedEventId":{ "shape":"EventId", - "documentation":"

The ID of the LambdaFunctionStarted event recorded in the history.

" + "documentation":"

The ID of the LambdaFunctionStarted event recorded when this Lambda task started. To help diagnose issues, use this information to trace back the chain of events leading up to this event.

" }, "result":{ "shape":"Data", - "documentation":"

The result of the function execution (if any).

" + "documentation":"

The results of the Lambda task.

" } }, - "documentation":"

Provides details for the LambdaFunctionCompleted event.

" + "documentation":"

Provides the details of the LambdaFunctionCompleted event. It isn't set for other event types.

" }, "LambdaFunctionFailedEventAttributes":{ "type":"structure", @@ -2584,22 +2267,22 @@ "members":{ "scheduledEventId":{ "shape":"EventId", - "documentation":"

The ID of the LambdaFunctionScheduled event that was recorded when this AWS Lambda function was scheduled. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.

" + "documentation":"

The ID of the LambdaFunctionScheduled event that was recorded when this Lambda task was scheduled. To help diagnose issues, use this information to trace back the chain of events leading up to this event.

" }, "startedEventId":{ "shape":"EventId", - "documentation":"

The ID of the LambdaFunctionStarted event recorded in the history.

" + "documentation":"

The ID of the LambdaFunctionStarted event recorded when this Lambda task started. To help diagnose issues, use this information to trace back the chain of events leading up to this event.

" }, "reason":{ "shape":"FailureReason", - "documentation":"

The reason provided for the failure (if any).

" + "documentation":"

The reason provided for the failure.

" }, "details":{ "shape":"Data", - "documentation":"

The details of the failure (if any).

" + "documentation":"

The details of the failure.

" } }, - "documentation":"

Provides details for the LambdaFunctionFailed event.

" + "documentation":"

Provides the details of the LambdaFunctionFailed event. It isn't set for other event types.

" }, "LambdaFunctionScheduledEventAttributes":{ "type":"structure", @@ -2611,26 +2294,30 @@ "members":{ "id":{ "shape":"FunctionId", - "documentation":"

The unique Amazon SWF ID for the AWS Lambda task.

" + "documentation":"

The unique ID of the Lambda task.

" }, "name":{ "shape":"FunctionName", - "documentation":"

The name of the scheduled AWS Lambda function.

" + "documentation":"

The name of the Lambda function.

" + }, + "control":{ + "shape":"Data", + "documentation":"

Data attached to the event that the decider can use in subsequent workflow tasks. This data isn't sent to the Lambda task.

" }, "input":{ "shape":"FunctionInput", - "documentation":"

Input provided to the AWS Lambda function.

" + "documentation":"

The input provided to the Lambda task.

" }, "startToCloseTimeout":{ "shape":"DurationInSecondsOptional", - "documentation":"

The maximum time, in seconds, that the AWS Lambda function can take to execute from start to close before it is marked as failed.

" + "documentation":"

The maximum amount of time a worker can take to process the Lambda task.

" }, "decisionTaskCompletedEventId":{ "shape":"EventId", - "documentation":"

The ID of the DecisionTaskCompleted event for the decision that resulted in the scheduling of this AWS Lambda function. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.

" + "documentation":"

The ID of the DecisionTaskCompleted event corresponding to the decision that resulted in scheduling this Lambda task. To help diagnose issues, use this information to trace back the chain of events leading up to this event.

" } }, - "documentation":"

Provides details for the LambdaFunctionScheduled event.

" + "documentation":"

Provides the details of the LambdaFunctionScheduled event. It isn't set for other event types.

" }, "LambdaFunctionStartedEventAttributes":{ "type":"structure", @@ -2638,10 +2325,10 @@ "members":{ "scheduledEventId":{ "shape":"EventId", - "documentation":"

The ID of the LambdaFunctionScheduled event that was recorded when this AWS Lambda function was scheduled. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.

" + "documentation":"

The ID of the LambdaFunctionScheduled event that was recorded when this Lambda task was scheduled. To help diagnose issues, use this information to trace back the chain of events leading up to this event.

" } }, - "documentation":"

Provides details for the LambdaFunctionStarted event.

" + "documentation":"

Provides the details of the LambdaFunctionStarted event. It isn't set for other event types.

" }, "LambdaFunctionTimedOutEventAttributes":{ "type":"structure", @@ -2652,18 +2339,18 @@ "members":{ "scheduledEventId":{ "shape":"EventId", - "documentation":"

The ID of the LambdaFunctionScheduled event that was recorded when this AWS Lambda function was scheduled. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.

" + "documentation":"

The ID of the LambdaFunctionScheduled event that was recorded when this Lambda task was scheduled. To help diagnose issues, use this information to trace back the chain of events leading up to this event.

" }, "startedEventId":{ "shape":"EventId", - "documentation":"

The ID of the LambdaFunctionStarted event recorded in the history.

" + "documentation":"

The ID of the LambdaFunctionStarted event that was recorded when this Lambda task started. To help diagnose issues, use this information to trace back the chain of events leading up to this event.

" }, "timeoutType":{ "shape":"LambdaFunctionTimeoutType", "documentation":"

The type of the timeout that caused this event.

" } }, - "documentation":"

Provides details for the LambdaFunctionTimedOut event.

" + "documentation":"

Provides the details of the LambdaFunctionTimedOut event. It isn't set for other event types.

" }, "LambdaFunctionTimeoutType":{ "type":"string", @@ -2677,8 +2364,8 @@ "documentation":"

A description that may help with diagnosing the cause of the fault.

" } }, - "exception":true, - "documentation":"

Returned by any operation if a system imposed limitation has been reached. To address this fault you should either clean up unused resources or increase the limit by contacting AWS.

" + "documentation":"

Returned by any operation if a system-imposed limitation has been reached. To address this fault, you should either clean up unused resources or increase the limit by contacting AWS.

", + "exception":true }, "LimitedData":{ "type":"string", @@ -2709,7 +2396,7 @@ }, "maximumPageSize":{ "shape":"PageSize", - "documentation":"

The maximum number of results that will be returned per call. nextPageToken can be used to obtain futher pages of results. The default is 1000, which is the maximum allowed page size. You can, however, specify a page size smaller than the maximum.

This is an upper limit only; the actual number of results returned per call may be fewer than the specified maximum.

" + "documentation":"

The maximum number of results that are returned per call. nextPageToken can be used to obtain further pages of results. The default is 1000, which is the maximum allowed page size. You can, however, specify a page size smaller than the maximum.

This is an upper limit only; the actual number of results returned per call may be fewer than the specified maximum.

" }, "reverseOrder":{ "shape":"ReverseOrder", @@ -2727,27 +2414,27 @@ }, "startTimeFilter":{ "shape":"ExecutionTimeFilter", - "documentation":"

If specified, the workflow executions are included in the returned results based on whether their start times are within the range specified by this filter. Also, if this parameter is specified, the returned results are ordered by their start times.

startTimeFilter and closeTimeFilter are mutually exclusive. You must specify one of these in a request but not both." + "documentation":"

If specified, the workflow executions are included in the returned results based on whether their start times are within the range specified by this filter. Also, if this parameter is specified, the returned results are ordered by their start times.

startTimeFilter and closeTimeFilter are mutually exclusive. You must specify one of these in a request but not both.

" }, "closeTimeFilter":{ "shape":"ExecutionTimeFilter", - "documentation":"

If specified, the workflow executions are included in the returned results based on whether their close times are within the range specified by this filter. Also, if this parameter is specified, the returned results are ordered by their close times.

startTimeFilter and closeTimeFilter are mutually exclusive. You must specify one of these in a request but not both." + "documentation":"

If specified, the workflow executions are included in the returned results based on whether their close times are within the range specified by this filter. Also, if this parameter is specified, the returned results are ordered by their close times.

startTimeFilter and closeTimeFilter are mutually exclusive. You must specify one of these in a request but not both.

" }, "executionFilter":{ "shape":"WorkflowExecutionFilter", - "documentation":"

If specified, only workflow executions matching the workflow ID specified in the filter are returned.

closeStatusFilter, executionFilter, typeFilter and tagFilter are mutually exclusive. You can specify at most one of these in a request." + "documentation":"

If specified, only workflow executions matching the workflow ID specified in the filter are returned.

closeStatusFilter, executionFilter, typeFilter and tagFilter are mutually exclusive. You can specify at most one of these in a request.

" }, "closeStatusFilter":{ "shape":"CloseStatusFilter", - "documentation":"

If specified, only workflow executions that match this close status are listed. For example, if TERMINATED is specified, then only TERMINATED workflow executions are listed.

closeStatusFilter, executionFilter, typeFilter and tagFilter are mutually exclusive. You can specify at most one of these in a request." + "documentation":"

If specified, only workflow executions that match this close status are listed. For example, if TERMINATED is specified, then only TERMINATED workflow executions are listed.

closeStatusFilter, executionFilter, typeFilter and tagFilter are mutually exclusive. You can specify at most one of these in a request.

" }, "typeFilter":{ "shape":"WorkflowTypeFilter", - "documentation":"

If specified, only executions of the type specified in the filter are returned.

closeStatusFilter, executionFilter, typeFilter and tagFilter are mutually exclusive. You can specify at most one of these in a request." + "documentation":"

If specified, only executions of the type specified in the filter are returned.

closeStatusFilter, executionFilter, typeFilter and tagFilter are mutually exclusive. You can specify at most one of these in a request.

" }, "tagFilter":{ "shape":"TagFilter", - "documentation":"

If specified, only executions that have the matching tag are listed.

closeStatusFilter, executionFilter, typeFilter and tagFilter are mutually exclusive. You can specify at most one of these in a request." + "documentation":"

If specified, only executions that have the matching tag are listed.

closeStatusFilter, executionFilter, typeFilter and tagFilter are mutually exclusive. You can specify at most one of these in a request.

" }, "nextPageToken":{ "shape":"PageToken", @@ -2755,7 +2442,7 @@ }, "maximumPageSize":{ "shape":"PageSize", - "documentation":"

The maximum number of results that will be returned per call. nextPageToken can be used to obtain futher pages of results. The default is 1000, which is the maximum allowed page size. You can, however, specify a page size smaller than the maximum.

This is an upper limit only; the actual number of results returned per call may be fewer than the specified maximum.

" + "documentation":"

The maximum number of results that are returned per call. nextPageToken can be used to obtain further pages of results. The default is 1000, which is the maximum allowed page size. You can, however, specify a page size smaller than the maximum.

This is an upper limit only; the actual number of results returned per call may be fewer than the specified maximum.

" }, "reverseOrder":{ "shape":"ReverseOrder", @@ -2777,7 +2464,7 @@ }, "maximumPageSize":{ "shape":"PageSize", - "documentation":"

The maximum number of results that will be returned per call. nextPageToken can be used to obtain futher pages of results. The default is 1000, which is the maximum allowed page size. You can, however, specify a page size smaller than the maximum.

This is an upper limit only; the actual number of results returned per call may be fewer than the specified maximum.

" + "documentation":"

The maximum number of results that are returned per call. nextPageToken can be used to obtain further pages of results. The default is 1000, which is the maximum allowed page size. You can, however, specify a page size smaller than the maximum.

This is an upper limit only; the actual number of results returned per call may be fewer than the specified maximum.

" }, "reverseOrder":{ "shape":"ReverseOrder", @@ -2802,11 +2489,11 @@ }, "typeFilter":{ "shape":"WorkflowTypeFilter", - "documentation":"

If specified, only executions of the type specified in the filter are returned.

executionFilter, typeFilter and tagFilter are mutually exclusive. You can specify at most one of these in a request." + "documentation":"

If specified, only executions of the type specified in the filter are returned.

executionFilter, typeFilter and tagFilter are mutually exclusive. You can specify at most one of these in a request.

" }, "tagFilter":{ "shape":"TagFilter", - "documentation":"

If specified, only executions that have the matching tag are listed.

executionFilter, typeFilter and tagFilter are mutually exclusive. You can specify at most one of these in a request." + "documentation":"

If specified, only executions that have the matching tag are listed.

executionFilter, typeFilter and tagFilter are mutually exclusive. You can specify at most one of these in a request.

" }, "nextPageToken":{ "shape":"PageToken", @@ -2814,7 +2501,7 @@ }, "maximumPageSize":{ "shape":"PageSize", - "documentation":"

The maximum number of results that will be returned per call. nextPageToken can be used to obtain futher pages of results. The default is 1000, which is the maximum allowed page size. You can, however, specify a page size smaller than the maximum.

This is an upper limit only; the actual number of results returned per call may be fewer than the specified maximum.

" + "documentation":"

The maximum number of results that are returned per call. nextPageToken can be used to obtain further pages of results. The default is 1000, which is the maximum allowed page size. You can, however, specify a page size smaller than the maximum.

This is an upper limit only; the actual number of results returned per call may be fewer than the specified maximum.

" }, "reverseOrder":{ "shape":"ReverseOrder", @@ -2822,7 +2509,7 @@ }, "executionFilter":{ "shape":"WorkflowExecutionFilter", - "documentation":"

If specified, only workflow executions matching the workflow ID specified in the filter are returned.

executionFilter, typeFilter and tagFilter are mutually exclusive. You can specify at most one of these in a request." + "documentation":"

If specified, only workflow executions matching the workflow ID specified in the filter are returned.

executionFilter, typeFilter and tagFilter are mutually exclusive. You can specify at most one of these in a request.

" } } }, @@ -2851,7 +2538,7 @@ }, "maximumPageSize":{ "shape":"PageSize", - "documentation":"

The maximum number of results that will be returned per call. nextPageToken can be used to obtain futher pages of results. The default is 1000, which is the maximum allowed page size. You can, however, specify a page size smaller than the maximum.

This is an upper limit only; the actual number of results returned per call may be fewer than the specified maximum.

" + "documentation":"

The maximum number of results that are returned per call. nextPageToken can be used to obtain further pages of results. The default is 1000, which is the maximum allowed page size. You can, however, specify a page size smaller than the maximum.

This is an upper limit only; the actual number of results returned per call may be fewer than the specified maximum.

" }, "reverseOrder":{ "shape":"ReverseOrder", @@ -2861,8 +2548,8 @@ }, "MarkerName":{ "type":"string", - "min":1, - "max":256 + "max":256, + "min":1 }, "MarkerRecordedEventAttributes":{ "type":"structure", @@ -2877,24 +2564,24 @@ }, "details":{ "shape":"Data", - "documentation":"

Details of the marker (if any).

" + "documentation":"

The details of the marker.

" }, "decisionTaskCompletedEventId":{ "shape":"EventId", "documentation":"

The ID of the DecisionTaskCompleted event corresponding to the decision task that resulted in the RecordMarker decision that requested this marker. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.

" } }, - "documentation":"

Provides details of the MarkerRecorded event.

" + "documentation":"

Provides the details of the MarkerRecorded event.

" }, "Name":{ "type":"string", - "min":1, - "max":256 + "max":256, + "min":1 }, "OpenDecisionTasksCount":{ "type":"integer", - "min":0, - "max":1 + "max":1, + "min":0 }, "OperationNotPermittedFault":{ "type":"structure", @@ -2904,13 +2591,13 @@ "documentation":"

A description that may help with diagnosing the cause of the fault.

" } }, - "exception":true, - "documentation":"

Returned when the caller does not have sufficient permissions to invoke the action.

" + "documentation":"

Returned when the caller doesn't have sufficient permissions to invoke the action.

", + "exception":true }, "PageSize":{ "type":"integer", - "min":0, - "max":1000 + "max":1000, + "min":0 }, "PageToken":{ "type":"string", @@ -2944,7 +2631,7 @@ }, "taskList":{ "shape":"TaskList", - "documentation":"

Specifies the task list to poll for activity tasks.

The specified string must not start or end with whitespace. It must not contain a : (colon), / (slash), | (vertical bar), or any control characters (\\u0000-\\u001f | \\u007f - \\u009f). Also, it must not contain the literal string quotarnquot.

" + "documentation":"

Specifies the task list to poll for activity tasks.

The specified string must not start or end with whitespace. It must not contain a : (colon), / (slash), | (vertical bar), or any control characters (\\u0000-\\u001f | \\u007f-\\u009f). Also, it must not contain the literal string arn.

" }, "identity":{ "shape":"Identity", @@ -2965,7 +2652,7 @@ }, "taskList":{ "shape":"TaskList", - "documentation":"

Specifies the task list to poll for decision tasks.

The specified string must not start or end with whitespace. It must not contain a : (colon), / (slash), | (vertical bar), or any control characters (\\u0000-\\u001f | \\u007f - \\u009f). Also, it must not contain the literal string quotarnquot.

" + "documentation":"

Specifies the task list to poll for decision tasks.

The specified string must not start or end with whitespace. It must not contain a : (colon), / (slash), | (vertical bar), or any control characters (\\u0000-\\u001f | \\u007f-\\u009f). Also, it must not contain the literal string arn.

" }, "identity":{ "shape":"Identity", @@ -2973,11 +2660,11 @@ }, "nextPageToken":{ "shape":"PageToken", - "documentation":"

If a NextPageToken was returned by a previous call, there are more results available. To retrieve the next page of results, make the call again using the returned token in nextPageToken. Keep all other arguments unchanged.

The configured maximumPageSize determines how many results can be returned in a single call.

The nextPageToken returned by this action cannot be used with GetWorkflowExecutionHistory to get the next page. You must call PollForDecisionTask again (with the nextPageToken) to retrieve the next page of history records. Calling PollForDecisionTask with a nextPageToken will not return a new decision task.." + "documentation":"

If a NextPageToken was returned by a previous call, there are more results available. To retrieve the next page of results, make the call again using the returned token in nextPageToken. Keep all other arguments unchanged.

The configured maximumPageSize determines how many results can be returned in a single call.

The nextPageToken returned by this action cannot be used with GetWorkflowExecutionHistory to get the next page. You must call PollForDecisionTask again (with the nextPageToken) to retrieve the next page of history records. Calling PollForDecisionTask with a nextPageToken doesn't return a new decision task.
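As an illustrative aside (not part of the service model), the paging behavior described above might be exercised from the generated Java v2 client roughly as follows; the domain and task-list names are placeholders, and the loop deliberately reuses PollForDecisionTask with the returned nextPageToken rather than GetWorkflowExecutionHistory:

```java
import java.util.ArrayList;
import java.util.List;

import software.amazon.awssdk.services.swf.SwfClient;
import software.amazon.awssdk.services.swf.model.HistoryEvent;
import software.amazon.awssdk.services.swf.model.PollForDecisionTaskRequest;
import software.amazon.awssdk.services.swf.model.PollForDecisionTaskResponse;
import software.amazon.awssdk.services.swf.model.TaskList;

public class DecisionHistoryPager {
    public static void main(String[] args) {
        SwfClient swf = SwfClient.create();
        List<HistoryEvent> events = new ArrayList<>();
        String pageToken = null;
        String taskToken = null;
        do {
            // Repeating the poll with nextPageToken returns further history pages
            // for the same decision task; it does not hand out a new task.
            PollForDecisionTaskResponse page = swf.pollForDecisionTask(
                    PollForDecisionTaskRequest.builder()
                            .domain("example-domain")                                       // placeholder
                            .taskList(TaskList.builder().name("example-task-list").build()) // placeholder
                            .maximumPageSize(1000)
                            .nextPageToken(pageToken)
                            .build());
            if (taskToken == null) {
                taskToken = page.taskToken();   // token from the first page identifies the task
            }
            events.addAll(page.events());
            pageToken = page.nextPageToken();
        } while (pageToken != null);
        System.out.println("Collected " + events.size() + " history events for task " + taskToken);
    }
}
```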

" }, "maximumPageSize":{ "shape":"PageSize", - "documentation":"

The maximum number of results that will be returned per call. nextPageToken can be used to obtain futher pages of results. The default is 1000, which is the maximum allowed page size. You can, however, specify a page size smaller than the maximum.

This is an upper limit only; the actual number of results returned per call may be fewer than the specified maximum.

" + "documentation":"

The maximum number of results that are returned per call. nextPageToken can be used to obtain further pages of results. The default is 1000, which is the maximum allowed page size. You can, however, specify a page size smaller than the maximum.

This is an upper limit only; the actual number of results returned per call may be fewer than the specified maximum.

" }, "reverseOrder":{ "shape":"ReverseOrder", @@ -2991,7 +2678,7 @@ "members":{ "taskToken":{ "shape":"TaskToken", - "documentation":"

The taskToken of the ActivityTask.

taskToken is generated by the service and should be treated as an opaque value. If the task is passed to another process, its taskToken must also be passed. This enables it to provide its progress and respond with results. " + "documentation":"

The taskToken of the ActivityTask.

taskToken is generated by the service and should be treated as an opaque value. If the task is passed to another process, its taskToken must also be passed. This enables it to provide its progress and respond with results.
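As an illustrative aside (not part of the model), a worker might pass the opaque taskToken back on each heartbeat roughly like this, assuming the generated Java v2 client; the progress string is a placeholder:

```java
import software.amazon.awssdk.services.swf.SwfClient;
import software.amazon.awssdk.services.swf.model.RecordActivityTaskHeartbeatRequest;
import software.amazon.awssdk.services.swf.model.RecordActivityTaskHeartbeatResponse;

public class HeartbeatReporter {
    // Passes the taskToken through unchanged and returns whether a cancel was requested.
    static boolean reportProgress(SwfClient swf, String taskToken, String progress) {
        RecordActivityTaskHeartbeatResponse response = swf.recordActivityTaskHeartbeat(
                RecordActivityTaskHeartbeatRequest.builder()
                        .taskToken(taskToken)   // opaque value from PollForActivityTask
                        .details(progress)      // free-form progress details
                        .build());
        return response.cancelRequested();      // true when the task should stop early
    }
}
```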

" }, "details":{ "shape":"LimitedData", @@ -3005,14 +2692,14 @@ "members":{ "markerName":{ "shape":"MarkerName", - "documentation":"

Required. The name of the marker.

" + "documentation":"

The name of the marker.

" }, "details":{ "shape":"Data", - "documentation":"

Optional. details of the marker.

" + "documentation":"

The details of the marker.

" } }, - "documentation":"

Provides details of the RecordMarker decision.

Access Control

You can use IAM policies to control this decision's access to Amazon SWF resources as follows:

If the caller does not have sufficient permissions to invoke the action, or the parameter values fall outside the specified constraints, the action fails. The associated event attribute's cause parameter will be set to OPERATION_NOT_PERMITTED. For details and example IAM policies, see Using IAM to Manage Access to Amazon SWF Workflows.

" + "documentation":"

Provides the details of the RecordMarker decision.

Access Control

You can use IAM policies to control this decision's access to Amazon SWF resources as follows:

If the caller doesn't have sufficient permissions to invoke the action, or the parameter values fall outside the specified constraints, the action fails. The associated event attribute's cause parameter is set to OPERATION_NOT_PERMITTED. For details and example IAM policies, see Using IAM to Manage Access to Amazon SWF Workflows in the Amazon SWF Developer Guide.
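As an illustrative aside (not part of the model), a decider might return a RecordMarker decision through the generated Java v2 client roughly as sketched below; the marker name and details are placeholders:

```java
import software.amazon.awssdk.services.swf.SwfClient;
import software.amazon.awssdk.services.swf.model.Decision;
import software.amazon.awssdk.services.swf.model.DecisionType;
import software.amazon.awssdk.services.swf.model.RecordMarkerDecisionAttributes;
import software.amazon.awssdk.services.swf.model.RespondDecisionTaskCompletedRequest;

public class RecordMarkerDecider {
    // Completes a decision task with a single RecordMarker decision.
    static void recordCheckpoint(SwfClient swf, String decisionTaskToken) {
        Decision marker = Decision.builder()
                .decisionType(DecisionType.RECORD_MARKER)
                .recordMarkerDecisionAttributes(RecordMarkerDecisionAttributes.builder()
                        .markerName("checkpoint")       // required marker name (placeholder)
                        .details("{\"step\":42}")       // optional details (placeholder)
                        .build())
                .build();
        swf.respondDecisionTaskCompleted(RespondDecisionTaskCompletedRequest.builder()
                .taskToken(decisionTaskToken)
                .decisions(marker)
                .build());
    }
}
```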

" }, "RecordMarkerFailedCause":{ "type":"string", @@ -3032,14 +2719,14 @@ }, "cause":{ "shape":"RecordMarkerFailedCause", - "documentation":"

The cause of the failure. This information is generated by the system and can be useful for diagnostic purposes.

If cause is set to OPERATION_NOT_PERMITTED, the decision failed because it lacked sufficient permissions. For details and example IAM policies, see Using IAM to Manage Access to Amazon SWF Workflows." + "documentation":"

The cause of the failure. This information is generated by the system and can be useful for diagnostic purposes.

If cause is set to OPERATION_NOT_PERMITTED, the decision failed because it lacked sufficient permissions. For details and example IAM policies, see Using IAM to Manage Access to Amazon SWF Workflows in the Amazon SWF Developer Guide.

" }, "decisionTaskCompletedEventId":{ "shape":"EventId", "documentation":"

The ID of the DecisionTaskCompleted event corresponding to the decision task that resulted in the RecordMarkerFailed decision for this cancellation request. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.

" } }, - "documentation":"

Provides details of the RecordMarkerFailed event.

" + "documentation":"

Provides the details of the RecordMarkerFailed event.

" }, "RegisterActivityTypeInput":{ "type":"structure", @@ -3055,11 +2742,11 @@ }, "name":{ "shape":"Name", - "documentation":"

The name of the activity type within the domain.

The specified string must not start or end with whitespace. It must not contain a : (colon), / (slash), | (vertical bar), or any control characters (\\u0000-\\u001f | \\u007f - \\u009f). Also, it must not contain the literal string quotarnquot.

" + "documentation":"

The name of the activity type within the domain.

The specified string must not start or end with whitespace. It must not contain a : (colon), / (slash), | (vertical bar), or any control characters (\\u0000-\\u001f | \\u007f-\\u009f). Also, it must not contain the literal string arn.

" }, "version":{ "shape":"Version", - "documentation":"

The version of the activity type.

The activity type consists of the name and version, the combination of which must be unique within the domain.

The specified string must not start or end with whitespace. It must not contain a : (colon), / (slash), | (vertical bar), or any control characters (\\u0000-\\u001f | \\u007f - \\u009f). Also, it must not contain the literal string quotarnquot.

" + "documentation":"

The version of the activity type.

The activity type consists of the name and version, the combination of which must be unique within the domain.

The specified string must not start or end with whitespace. It must not contain a : (colon), / (slash), | (vertical bar), or any control characters (\\u0000-\\u001f | \\u007f-\\u009f). Also, it must not contain the literal string arn.

" }, "description":{ "shape":"Description", @@ -3067,27 +2754,27 @@ }, "defaultTaskStartToCloseTimeout":{ "shape":"DurationInSecondsOptional", - "documentation":"

If set, specifies the default maximum duration that a worker can take to process tasks of this activity type. This default can be overridden when scheduling an activity task using the ScheduleActivityTask decision.

The duration is specified in seconds; an integer greater than or equal to 0. The value \"NONE\" can be used to specify unlimited duration.

" + "documentation":"

If set, specifies the default maximum duration that a worker can take to process tasks of this activity type. This default can be overridden when scheduling an activity task using the ScheduleActivityTask Decision.

The duration is specified in seconds, an integer greater than or equal to 0. You can use NONE to specify unlimited duration.

" }, "defaultTaskHeartbeatTimeout":{ "shape":"DurationInSecondsOptional", - "documentation":"

If set, specifies the default maximum time before which a worker processing a task of this type must report progress by calling RecordActivityTaskHeartbeat. If the timeout is exceeded, the activity task is automatically timed out. This default can be overridden when scheduling an activity task using the ScheduleActivityTask decision. If the activity worker subsequently attempts to record a heartbeat or returns a result, the activity worker receives an UnknownResource fault. In this case, Amazon SWF no longer considers the activity task to be valid; the activity worker should clean up the activity task.

The duration is specified in seconds; an integer greater than or equal to 0. The value \"NONE\" can be used to specify unlimited duration.

" + "documentation":"

If set, specifies the default maximum time before which a worker processing a task of this type must report progress by calling RecordActivityTaskHeartbeat. If the timeout is exceeded, the activity task is automatically timed out. This default can be overridden when scheduling an activity task using the ScheduleActivityTask Decision. If the activity worker subsequently attempts to record a heartbeat or returns a result, the activity worker receives an UnknownResource fault. In this case, Amazon SWF no longer considers the activity task to be valid; the activity worker should clean up the activity task.

The duration is specified in seconds, an integer greater than or equal to 0. You can use NONE to specify unlimited duration.

" }, "defaultTaskList":{ "shape":"TaskList", - "documentation":"

If set, specifies the default task list to use for scheduling tasks of this activity type. This default task list is used if a task list is not provided when a task is scheduled through the ScheduleActivityTask decision.

" + "documentation":"

If set, specifies the default task list to use for scheduling tasks of this activity type. This default task list is used if a task list isn't provided when a task is scheduled through the ScheduleActivityTask Decision.

" }, "defaultTaskPriority":{ "shape":"TaskPriority", - "documentation":"

The default task priority to assign to the activity type. If not assigned, then \"0\" will be used. Valid values are integers that range from Java's Integer.MIN_VALUE (-2147483648) to Integer.MAX_VALUE (2147483647). Higher numbers indicate higher priority.

For more information about setting task priority, see Setting Task Priority in the Amazon Simple Workflow Developer Guide.

" + "documentation":"

The default task priority to assign to the activity type. If not assigned, then 0 is used. Valid values are integers that range from Java's Integer.MIN_VALUE (-2147483648) to Integer.MAX_VALUE (2147483647). Higher numbers indicate higher priority.

For more information about setting task priority, see Setting Task Priority in the Amazon SWF Developer Guide.

" }, "defaultTaskScheduleToStartTimeout":{ "shape":"DurationInSecondsOptional", - "documentation":"

If set, specifies the default maximum duration that a task of this activity type can wait before being assigned to a worker. This default can be overridden when scheduling an activity task using the ScheduleActivityTask decision.

The duration is specified in seconds; an integer greater than or equal to 0. The value \"NONE\" can be used to specify unlimited duration.

" + "documentation":"

If set, specifies the default maximum duration that a task of this activity type can wait before being assigned to a worker. This default can be overridden when scheduling an activity task using the ScheduleActivityTask Decision.

The duration is specified in seconds, an integer greater than or equal to 0. You can use NONE to specify unlimited duration.

" }, "defaultTaskScheduleToCloseTimeout":{ "shape":"DurationInSecondsOptional", - "documentation":"

If set, specifies the default maximum duration for a task of this activity type. This default can be overridden when scheduling an activity task using the ScheduleActivityTask decision.

The duration is specified in seconds; an integer greater than or equal to 0. The value \"NONE\" can be used to specify unlimited duration.

" + "documentation":"

If set, specifies the default maximum duration for a task of this activity type. This default can be overridden when scheduling an activity task using the ScheduleActivityTask Decision.

The duration is specified in seconds, an integer greater than or equal to 0. You can use NONE to specify unlimited duration.
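As an illustrative aside (not part of the model), the defaults above might be supplied when registering an activity type through the generated Java v2 client; every name and timeout value below is a placeholder, and NONE is used where an unlimited duration is wanted:

```java
import software.amazon.awssdk.services.swf.SwfClient;
import software.amazon.awssdk.services.swf.model.RegisterActivityTypeRequest;
import software.amazon.awssdk.services.swf.model.TaskList;

public class RegisterActivityTypeExample {
    public static void main(String[] args) {
        SwfClient swf = SwfClient.create();
        swf.registerActivityType(RegisterActivityTypeRequest.builder()
                .domain("example-domain")                       // placeholder domain
                .name("ProcessPayment")                         // placeholder activity name
                .version("1.0")
                .defaultTaskList(TaskList.builder().name("payments").build())
                .defaultTaskPriority("0")
                .defaultTaskStartToCloseTimeout("300")          // seconds
                .defaultTaskHeartbeatTimeout("60")              // seconds
                .defaultTaskScheduleToStartTimeout("NONE")      // NONE = unlimited
                .defaultTaskScheduleToCloseTimeout("NONE")
                .build());
    }
}
```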

" } } }, @@ -3100,7 +2787,7 @@ "members":{ "name":{ "shape":"DomainName", - "documentation":"

Name of the domain to register. The name must be unique in the region that the domain is registered in.

The specified string must not start or end with whitespace. It must not contain a : (colon), / (slash), | (vertical bar), or any control characters (\\u0000-\\u001f | \\u007f - \\u009f). Also, it must not contain the literal string quotarnquot.

" + "documentation":"

Name of the domain to register. The name must be unique in the region that the domain is registered in.

The specified string must not start or end with whitespace. It must not contain a : (colon), / (slash), | (vertical bar), or any control characters (\\u0000-\\u001f | \\u007f-\\u009f). Also, it must not contain the literal string arn.

" }, "description":{ "shape":"Description", @@ -3108,7 +2795,7 @@ }, "workflowExecutionRetentionPeriodInDays":{ "shape":"DurationInDays", - "documentation":"

The duration (in days) that records and histories of workflow executions on the domain should be kept by the service. After the retention period, the workflow execution is not available in the results of visibility calls.

If you pass the value NONE or 0 (zero), then the workflow execution history will not be retained. As soon as the workflow execution completes, the execution record and its history are deleted.

The maximum workflow execution retention period is 90 days. For more information about Amazon SWF service limits, see: Amazon SWF Service Limits in the Amazon SWF Developer Guide.

" + "documentation":"

The duration (in days) that records and histories of workflow executions on the domain should be kept by the service. After the retention period, the workflow execution isn't available in the results of visibility calls.

If you pass the value NONE or 0 (zero), then the workflow execution history isn't retained. As soon as the workflow execution completes, the execution record and its history are deleted.

The maximum workflow execution retention period is 90 days. For more information about Amazon SWF service limits, see: Amazon SWF Service Limits in the Amazon SWF Developer Guide.
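As an illustrative aside (not part of the model), a domain with the maximum retention period might be registered through the generated Java v2 client roughly like this; the name and description are placeholders:

```java
import software.amazon.awssdk.services.swf.SwfClient;
import software.amazon.awssdk.services.swf.model.RegisterDomainRequest;

public class RegisterDomainExample {
    public static void main(String[] args) {
        SwfClient swf = SwfClient.create();
        swf.registerDomain(RegisterDomainRequest.builder()
                .name("example-domain")                              // placeholder name
                .description("Domain for order-processing workflows")
                .workflowExecutionRetentionPeriodInDays("90")        // max is 90 days; NONE or 0 keeps no history
                .build());
    }
}
```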

" } } }, @@ -3126,11 +2813,11 @@ }, "name":{ "shape":"Name", - "documentation":"

The name of the workflow type.

The specified string must not start or end with whitespace. It must not contain a : (colon), / (slash), | (vertical bar), or any control characters (\\u0000-\\u001f | \\u007f - \\u009f). Also, it must not contain the literal string quotarnquot.

" + "documentation":"

The name of the workflow type.

The specified string must not start or end with whitespace. It must not contain a : (colon), / (slash), | (vertical bar), or any control characters (\\u0000-\\u001f | \\u007f-\\u009f). Also, it must not contain the literal string arn.

" }, "version":{ "shape":"Version", - "documentation":"

The version of the workflow type.

The workflow type consists of the name and version, the combination of which must be unique within the domain. To get a list of all currently registered workflow types, use the ListWorkflowTypes action.

The specified string must not start or end with whitespace. It must not contain a : (colon), / (slash), | (vertical bar), or any control characters (\\u0000-\\u001f | \\u007f - \\u009f). Also, it must not contain the literal string quotarnquot.

" + "documentation":"

The version of the workflow type.

The workflow type consists of the name and version, the combination of which must be unique within the domain. To get a list of all currently registered workflow types, use the ListWorkflowTypes action.

The specified string must not start or end with whitespace. It must not contain a : (colon), / (slash), | (vertical bar), or any control characters (\\u0000-\\u001f | \\u007f-\\u009f). Also, it must not contain the literal string arn.

" }, "description":{ "shape":"Description", @@ -3138,27 +2825,27 @@ }, "defaultTaskStartToCloseTimeout":{ "shape":"DurationInSecondsOptional", - "documentation":"

If set, specifies the default maximum duration of decision tasks for this workflow type. This default can be overridden when starting a workflow execution using the StartWorkflowExecution action or the StartChildWorkflowExecution decision.

The duration is specified in seconds; an integer greater than or equal to 0. The value \"NONE\" can be used to specify unlimited duration.

" + "documentation":"

If set, specifies the default maximum duration of decision tasks for this workflow type. This default can be overridden when starting a workflow execution using the StartWorkflowExecution action or the StartChildWorkflowExecution Decision.

The duration is specified in seconds, an integer greater than or equal to 0. You can use NONE to specify unlimited duration.

" }, "defaultExecutionStartToCloseTimeout":{ "shape":"DurationInSecondsOptional", - "documentation":"

If set, specifies the default maximum duration for executions of this workflow type. You can override this default when starting an execution through the StartWorkflowExecution action or StartChildWorkflowExecution decision.

The duration is specified in seconds; an integer greater than or equal to 0. Unlike some of the other timeout parameters in Amazon SWF, you cannot specify a value of \"NONE\" for defaultExecutionStartToCloseTimeout; there is a one-year max limit on the time that a workflow execution can run. Exceeding this limit will always cause the workflow execution to time out.

" + "documentation":"

If set, specifies the default maximum duration for executions of this workflow type. You can override this default when starting an execution through the StartWorkflowExecution Action or StartChildWorkflowExecution Decision.

The duration is specified in seconds; an integer greater than or equal to 0. Unlike some of the other timeout parameters in Amazon SWF, you cannot specify a value of \"NONE\" for defaultExecutionStartToCloseTimeout; there is a one-year max limit on the time that a workflow execution can run. Exceeding this limit always causes the workflow execution to time out.

" }, "defaultTaskList":{ "shape":"TaskList", - "documentation":"

If set, specifies the default task list to use for scheduling decision tasks for executions of this workflow type. This default is used only if a task list is not provided when starting the execution through the StartWorkflowExecution action or StartChildWorkflowExecution decision.

" + "documentation":"

If set, specifies the default task list to use for scheduling decision tasks for executions of this workflow type. This default is used only if a task list isn't provided when starting the execution through the StartWorkflowExecution Action or StartChildWorkflowExecution Decision.

" }, "defaultTaskPriority":{ "shape":"TaskPriority", - "documentation":"

The default task priority to assign to the workflow type. If not assigned, then \"0\" will be used. Valid values are integers that range from Java's Integer.MIN_VALUE (-2147483648) to Integer.MAX_VALUE (2147483647). Higher numbers indicate higher priority.

For more information about setting task priority, see Setting Task Priority in the Amazon Simple Workflow Developer Guide.

" + "documentation":"

The default task priority to assign to the workflow type. If not assigned, then 0 is used. Valid values are integers that range from Java's Integer.MIN_VALUE (-2147483648) to Integer.MAX_VALUE (2147483647). Higher numbers indicate higher priority.

For more information about setting task priority, see Setting Task Priority in the Amazon SWF Developer Guide.

" }, "defaultChildPolicy":{ "shape":"ChildPolicy", - "documentation":"

If set, specifies the default policy to use for the child workflow executions when a workflow execution of this type is terminated, by calling the TerminateWorkflowExecution action explicitly or due to an expired timeout. This default can be overridden when starting a workflow execution using the StartWorkflowExecution action or the StartChildWorkflowExecution decision.

The supported child policies are:

" + "documentation":"

If set, specifies the default policy to use for the child workflow executions when a workflow execution of this type is terminated, by calling the TerminateWorkflowExecution action explicitly or due to an expired timeout. This default can be overridden when starting a workflow execution using the StartWorkflowExecution action or the StartChildWorkflowExecution Decision.

The supported child policies are:

" }, "defaultLambdaRole":{ "shape":"Arn", - "documentation":"

The ARN of the default IAM role to use when a workflow execution of this type invokes AWS Lambda functions.

This default can be overridden when starting a workflow execution using the StartWorkflowExecution action or the StartChildWorkflowExecution and ContinueAsNewWorkflowExecution decision.

" + "documentation":"

The default IAM role attached to this workflow type.

Executions of this workflow type need IAM roles to invoke Lambda functions. If you don't specify an IAM role when you start a workflow execution of this type, the default Lambda role is attached to the execution. For more information, see http://docs.aws.amazon.com/amazonswf/latest/developerguide/lambda-task.html in the Amazon SWF Developer Guide.
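As an illustrative aside (not part of the model), a default Lambda role might be attached at workflow-type registration through the generated Java v2 client; all names, timeouts, and the role ARN below are placeholders:

```java
import software.amazon.awssdk.services.swf.SwfClient;
import software.amazon.awssdk.services.swf.model.ChildPolicy;
import software.amazon.awssdk.services.swf.model.RegisterWorkflowTypeRequest;
import software.amazon.awssdk.services.swf.model.TaskList;

public class RegisterWorkflowTypeExample {
    public static void main(String[] args) {
        SwfClient swf = SwfClient.create();
        swf.registerWorkflowType(RegisterWorkflowTypeRequest.builder()
                .domain("example-domain")                             // placeholder domain
                .name("OrderWorkflow")                                // placeholder workflow name
                .version("1.0")
                .defaultTaskList(TaskList.builder().name("orders").build())
                .defaultExecutionStartToCloseTimeout("86400")         // one day; NONE is not allowed here
                .defaultTaskStartToCloseTimeout("60")
                .defaultChildPolicy(ChildPolicy.TERMINATE)
                .defaultLambdaRole("arn:aws:iam::123456789012:role/swf-lambda-role") // placeholder role ARN
                .build());
    }
}
```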

" } } }, @@ -3178,7 +2865,7 @@ "documentation":"

The activityId of the activity task to be canceled.

" } }, - "documentation":"

Provides details of the RequestCancelActivityTask decision.

Access Control

You can use IAM policies to control this decision's access to Amazon SWF resources as follows:

If the caller does not have sufficient permissions to invoke the action, or the parameter values fall outside the specified constraints, the action fails. The associated event attribute's cause parameter will be set to OPERATION_NOT_PERMITTED. For details and example IAM policies, see Using IAM to Manage Access to Amazon SWF Workflows.

" + "documentation":"

Provides the details of the RequestCancelActivityTask decision.

Access Control

You can use IAM policies to control this decision's access to Amazon SWF resources as follows:

If the caller doesn't have sufficient permissions to invoke the action, or the parameter values fall outside the specified constraints, the action fails. The associated event attribute's cause parameter is set to OPERATION_NOT_PERMITTED. For details and example IAM policies, see Using IAM to Manage Access to Amazon SWF Workflows in the Amazon SWF Developer Guide.

" }, "RequestCancelActivityTaskFailedCause":{ "type":"string", @@ -3201,14 +2888,14 @@ }, "cause":{ "shape":"RequestCancelActivityTaskFailedCause", - "documentation":"

The cause of the failure. This information is generated by the system and can be useful for diagnostic purposes.

If cause is set to OPERATION_NOT_PERMITTED, the decision failed because it lacked sufficient permissions. For details and example IAM policies, see Using IAM to Manage Access to Amazon SWF Workflows." + "documentation":"

The cause of the failure. This information is generated by the system and can be useful for diagnostic purposes.

If cause is set to OPERATION_NOT_PERMITTED, the decision failed because it lacked sufficient permissions. For details and example IAM policies, see Using IAM to Manage Access to Amazon SWF Workflows in the Amazon SWF Developer Guide.

" }, "decisionTaskCompletedEventId":{ "shape":"EventId", "documentation":"

The ID of the DecisionTaskCompleted event corresponding to the decision task that resulted in the RequestCancelActivityTask decision for this cancellation request. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.

" } }, - "documentation":"

Provides details of the RequestCancelActivityTaskFailed event.

" + "documentation":"

Provides the details of the RequestCancelActivityTaskFailed event.

" }, "RequestCancelExternalWorkflowExecutionDecisionAttributes":{ "type":"structure", @@ -3216,18 +2903,18 @@ "members":{ "workflowId":{ "shape":"WorkflowId", - "documentation":"

Required. The workflowId of the external workflow execution to cancel.

" + "documentation":"

The workflowId of the external workflow execution to cancel.

" }, "runId":{ - "shape":"RunIdOptional", + "shape":"WorkflowRunIdOptional", "documentation":"

The runId of the external workflow execution to cancel.

" }, "control":{ "shape":"Data", - "documentation":"

Optional. Data attached to the event that can be used by the decider in subsequent workflow tasks.

" + "documentation":"

The data attached to the event that can be used by the decider in subsequent workflow tasks.

" } }, - "documentation":"

Provides details of the RequestCancelExternalWorkflowExecution decision.

Access Control

You can use IAM policies to control this decision's access to Amazon SWF resources as follows:

If the caller does not have sufficient permissions to invoke the action, or the parameter values fall outside the specified constraints, the action fails. The associated event attribute's cause parameter will be set to OPERATION_NOT_PERMITTED. For details and example IAM policies, see Using IAM to Manage Access to Amazon SWF Workflows.

" + "documentation":"

Provides the details of the RequestCancelExternalWorkflowExecution decision.

Access Control

You can use IAM policies to control this decision's access to Amazon SWF resources as follows:

If the caller doesn't have sufficient permissions to invoke the action, or the parameter values fall outside the specified constraints, the action fails. The associated event attribute's cause parameter is set to OPERATION_NOT_PERMITTED. For details and example IAM policies, see Using IAM to Manage Access to Amazon SWF Workflows in the Amazon SWF Developer Guide.

" }, "RequestCancelExternalWorkflowExecutionFailedCause":{ "type":"string", @@ -3251,12 +2938,12 @@ "documentation":"

The workflowId of the external workflow to which the cancel request was to be delivered.

" }, "runId":{ - "shape":"RunIdOptional", + "shape":"WorkflowRunIdOptional", "documentation":"

The runId of the external workflow execution.

" }, "cause":{ "shape":"RequestCancelExternalWorkflowExecutionFailedCause", - "documentation":"

The cause of the failure. This information is generated by the system and can be useful for diagnostic purposes.

If cause is set to OPERATION_NOT_PERMITTED, the decision failed because it lacked sufficient permissions. For details and example IAM policies, see Using IAM to Manage Access to Amazon SWF Workflows." + "documentation":"

The cause of the failure. This information is generated by the system and can be useful for diagnostic purposes.

If cause is set to OPERATION_NOT_PERMITTED, the decision failed because it lacked sufficient permissions. For details and example IAM policies, see Using IAM to Manage Access to Amazon SWF Workflows in the Amazon SWF Developer Guide.

" }, "initiatedEventId":{ "shape":"EventId", @@ -3266,9 +2953,12 @@ "shape":"EventId", "documentation":"

The ID of the DecisionTaskCompleted event corresponding to the decision task that resulted in the RequestCancelExternalWorkflowExecution decision for this cancellation request. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.

" }, - "control":{"shape":"Data"} + "control":{ + "shape":"Data", + "documentation":"

The data attached to the event that the decider can use in subsequent workflow tasks. This data isn't sent to the workflow execution.

" + } }, - "documentation":"

Provides details of the RequestCancelExternalWorkflowExecutionFailed event.

" + "documentation":"

Provides the details of the RequestCancelExternalWorkflowExecutionFailed event.

" }, "RequestCancelExternalWorkflowExecutionInitiatedEventAttributes":{ "type":"structure", @@ -3282,7 +2972,7 @@ "documentation":"

The workflowId of the external workflow execution to be canceled.

" }, "runId":{ - "shape":"RunIdOptional", + "shape":"WorkflowRunIdOptional", "documentation":"

The runId of the external workflow execution to be canceled.

" }, "decisionTaskCompletedEventId":{ @@ -3291,10 +2981,10 @@ }, "control":{ "shape":"Data", - "documentation":"

Optional. Data attached to the event that can be used by the decider in subsequent workflow tasks.

" + "documentation":"

Data attached to the event that can be used by the decider in subsequent workflow tasks.

" } }, - "documentation":"

Provides details of the RequestCancelExternalWorkflowExecutionInitiated event.

" + "documentation":"

Provides the details of the RequestCancelExternalWorkflowExecutionInitiated event.

" }, "RequestCancelWorkflowExecutionInput":{ "type":"structure", @@ -3312,7 +3002,7 @@ "documentation":"

The workflowId of the workflow execution to cancel.

" }, "runId":{ - "shape":"RunIdOptional", + "shape":"WorkflowRunIdOptional", "documentation":"

The runId of the workflow execution to cancel.

" } } @@ -3323,11 +3013,11 @@ "members":{ "taskToken":{ "shape":"TaskToken", - "documentation":"

The taskToken of the ActivityTask.

taskToken is generated by the service and should be treated as an opaque value. If the task is passed to another process, its taskToken must also be passed. This enables it to provide its progress and respond with results." + "documentation":"

The taskToken of the ActivityTask.

taskToken is generated by the service and should be treated as an opaque value. If the task is passed to another process, its taskToken must also be passed. This enables it to provide its progress and respond with results.

" }, "details":{ "shape":"Data", - "documentation":"

Optional. Information about the cancellation.

" + "documentation":"

Information about the cancellation.

" } } }, @@ -3337,7 +3027,7 @@ "members":{ "taskToken":{ "shape":"TaskToken", - "documentation":"

The taskToken of the ActivityTask.

taskToken is generated by the service and should be treated as an opaque value. If the task is passed to another process, its taskToken must also be passed. This enables it to provide its progress and respond with results." + "documentation":"

The taskToken of the ActivityTask.

taskToken is generated by the service and should be treated as an opaque value. If the task is passed to another process, its taskToken must also be passed. This enables it to provide its progress and respond with results.

" }, "result":{ "shape":"Data", @@ -3351,7 +3041,7 @@ "members":{ "taskToken":{ "shape":"TaskToken", - "documentation":"

The taskToken of the ActivityTask.

taskToken is generated by the service and should be treated as an opaque value. If the task is passed to another process, its taskToken must also be passed. This enables it to provide its progress and respond with results." + "documentation":"

The taskToken of the ActivityTask.

taskToken is generated by the service and should be treated as an opaque value. If the task is passed to another process, its taskToken must also be passed. This enables it to provide its progress and respond with results.

" }, "reason":{ "shape":"FailureReason", @@ -3359,7 +3049,7 @@ }, "details":{ "shape":"Data", - "documentation":"

Optional. Detailed information about the failure.

" + "documentation":"

Detailed information about the failure.

" } } }, @@ -3369,38 +3059,30 @@ "members":{ "taskToken":{ "shape":"TaskToken", - "documentation":"

The taskToken from the DecisionTask.

taskToken is generated by the service and should be treated as an opaque value. If the task is passed to another process, its taskToken must also be passed. This enables it to provide its progress and respond with results." + "documentation":"

The taskToken from the DecisionTask.

taskToken is generated by the service and should be treated as an opaque value. If the task is passed to another process, its taskToken must also be passed. This enables it to provide its progress and respond with results.

" }, "decisions":{ "shape":"DecisionList", - "documentation":"

The list of decisions (possibly empty) made by the decider while processing this decision task. See the docs for the decision structure for details.

" + "documentation":"

The list of decisions (possibly empty) made by the decider while processing this decision task. See the docs for the Decision structure for details.

" }, "executionContext":{ "shape":"Data", "documentation":"

User defined context to add to workflow execution.

" } - } + }, + "documentation":"

Input data for a TaskCompleted response to a decision task.

" }, "ReverseOrder":{"type":"boolean"}, "Run":{ "type":"structure", "members":{ "runId":{ - "shape":"RunId", + "shape":"WorkflowRunId", "documentation":"

The runId of a workflow execution. This ID is generated by the service and can be used to uniquely identify the workflow execution within a domain.

" } }, "documentation":"

Specifies the runId of a workflow execution.

" }, - "RunId":{ - "type":"string", - "min":1, - "max":64 - }, - "RunIdOptional":{ - "type":"string", - "max":64 - }, "ScheduleActivityTaskDecisionAttributes":{ "type":"structure", "required":[ @@ -3410,15 +3092,15 @@ "members":{ "activityType":{ "shape":"ActivityType", - "documentation":"

Required. The type of the activity task to schedule.

" + "documentation":"

The type of the activity task to schedule.

" }, "activityId":{ "shape":"ActivityId", - "documentation":"

Required. The activityId of the activity task.

The specified string must not start or end with whitespace. It must not contain a : (colon), / (slash), | (vertical bar), or any control characters (\\u0000-\\u001f | \\u007f - \\u009f). Also, it must not contain the literal string \"arn\".

" + "documentation":"

The activityId of the activity task.

The specified string must not start or end with whitespace. It must not contain a : (colon), / (slash), | (vertical bar), or any control characters (\\u0000-\\u001f | \\u007f-\\u009f). Also, it must not contain the literal string arn.

" }, "control":{ "shape":"Data", - "documentation":"

Optional. Data attached to the event that can be used by the decider in subsequent workflow tasks. This data is not sent to the activity.

" + "documentation":"

Data attached to the event that can be used by the decider in subsequent workflow tasks. This data isn't sent to the activity.

" }, "input":{ "shape":"Data", @@ -3426,30 +3108,30 @@ }, "scheduleToCloseTimeout":{ "shape":"DurationInSecondsOptional", - "documentation":"

The maximum duration for this activity task.

The duration is specified in seconds; an integer greater than or equal to 0. The value \"NONE\" can be used to specify unlimited duration.

A schedule-to-close timeout for this activity task must be specified either as a default for the activity type or through this field. If neither this field is set nor a default schedule-to-close timeout was specified at registration time then a fault will be returned." + "documentation":"

The maximum duration for this activity task.

The duration is specified in seconds, an integer greater than or equal to 0. You can use NONE to specify unlimited duration.

A schedule-to-close timeout for this activity task must be specified either as a default for the activity type or through this field. If neither this field is set nor a default schedule-to-close timeout was specified at registration time then a fault is returned.

" }, "taskList":{ "shape":"TaskList", - "documentation":"

If set, specifies the name of the task list in which to schedule the activity task. If not specified, the defaultTaskList registered with the activity type will be used.

A task list for this activity task must be specified either as a default for the activity type or through this field. If neither this field is set nor a default task list was specified at registration time then a fault will be returned.

The specified string must not start or end with whitespace. It must not contain a : (colon), / (slash), | (vertical bar), or any control characters (\\u0000-\\u001f | \\u007f - \\u009f). Also, it must not contain the literal string \"arn\".

" + "documentation":"

If set, specifies the name of the task list in which to schedule the activity task. If not specified, the defaultTaskList registered with the activity type is used.

A task list for this activity task must be specified either as a default for the activity type or through this field. If neither this field is set nor a default task list was specified at registration time then a fault is returned.

The specified string must not start or end with whitespace. It must not contain a : (colon), / (slash), | (vertical bar), or any control characters (\\u0000-\\u001f | \\u007f-\\u009f). Also, it must not contain the literal string arn.

" }, "taskPriority":{ "shape":"TaskPriority", - "documentation":"

Optional. If set, specifies the priority with which the activity task is to be assigned to a worker. This overrides the defaultTaskPriority specified when registering the activity type using RegisterActivityType. Valid values are integers that range from Java's Integer.MIN_VALUE (-2147483648) to Integer.MAX_VALUE (2147483647). Higher numbers indicate higher priority.

For more information about setting task priority, see Setting Task Priority in the Amazon Simple Workflow Developer Guide.

" + "documentation":"

If set, specifies the priority with which the activity task is to be assigned to a worker. This overrides the defaultTaskPriority specified when registering the activity type using RegisterActivityType. Valid values are integers that range from Java's Integer.MIN_VALUE (-2147483648) to Integer.MAX_VALUE (2147483647). Higher numbers indicate higher priority.

For more information about setting task priority, see Setting Task Priority in the Amazon SWF Developer Guide.

" }, "scheduleToStartTimeout":{ "shape":"DurationInSecondsOptional", - "documentation":"

Optional. If set, specifies the maximum duration the activity task can wait to be assigned to a worker. This overrides the default schedule-to-start timeout specified when registering the activity type using RegisterActivityType.

The duration is specified in seconds; an integer greater than or equal to 0. The value \"NONE\" can be used to specify unlimited duration.

A schedule-to-start timeout for this activity task must be specified either as a default for the activity type or through this field. If neither this field is set nor a default schedule-to-start timeout was specified at registration time then a fault will be returned." + "documentation":"

If set, specifies the maximum duration the activity task can wait to be assigned to a worker. This overrides the default schedule-to-start timeout specified when registering the activity type using RegisterActivityType.

The duration is specified in seconds, an integer greater than or equal to 0. You can use NONE to specify unlimited duration.

A schedule-to-start timeout for this activity task must be specified either as a default for the activity type or through this field. If neither this field is set nor a default schedule-to-start timeout was specified at registration time then a fault is returned.

" }, "startToCloseTimeout":{ "shape":"DurationInSecondsOptional", - "documentation":"

If set, specifies the maximum duration a worker may take to process this activity task. This overrides the default start-to-close timeout specified when registering the activity type using RegisterActivityType.

The duration is specified in seconds; an integer greater than or equal to 0. The value \"NONE\" can be used to specify unlimited duration.

A start-to-close timeout for this activity task must be specified either as a default for the activity type or through this field. If neither this field is set nor a default start-to-close timeout was specified at registration time then a fault will be returned." + "documentation":"

If set, specifies the maximum duration a worker may take to process this activity task. This overrides the default start-to-close timeout specified when registering the activity type using RegisterActivityType.

The duration is specified in seconds, an integer greater than or equal to 0. You can use NONE to specify unlimited duration.

A start-to-close timeout for this activity task must be specified either as a default for the activity type or through this field. If neither this field is set nor a default start-to-close timeout was specified at registration time then a fault is returned.

" }, "heartbeatTimeout":{ "shape":"DurationInSecondsOptional", - "documentation":"

If set, specifies the maximum time before which a worker processing a task of this type must report progress by calling RecordActivityTaskHeartbeat. If the timeout is exceeded, the activity task is automatically timed out. If the worker subsequently attempts to record a heartbeat or returns a result, it will be ignored. This overrides the default heartbeat timeout specified when registering the activity type using RegisterActivityType.

The duration is specified in seconds; an integer greater than or equal to 0. The value \"NONE\" can be used to specify unlimited duration.

" + "documentation":"

If set, specifies the maximum time before which a worker processing a task of this type must report progress by calling RecordActivityTaskHeartbeat. If the timeout is exceeded, the activity task is automatically timed out. If the worker subsequently attempts to record a heartbeat or returns a result, it is ignored. This overrides the default heartbeat timeout specified when registering the activity type using RegisterActivityType.

The duration is specified in seconds, an integer greater than or equal to 0. You can use NONE to specify unlimited duration.

" } }, - "documentation":"

Provides details of the ScheduleActivityTask decision.

Access Control

You can use IAM policies to control this decision's access to Amazon SWF resources as follows:

If the caller does not have sufficient permissions to invoke the action, or the parameter values fall outside the specified constraints, the action fails. The associated event attribute's cause parameter will be set to OPERATION_NOT_PERMITTED. For details and example IAM policies, see Using IAM to Manage Access to Amazon SWF Workflows.

" + "documentation":"

Provides the details of the ScheduleActivityTask decision.

Access Control

You can use IAM policies to control this decision's access to Amazon SWF resources as follows:

If the caller doesn't have sufficient permissions to invoke the action, or the parameter values fall outside the specified constraints, the action fails. The associated event attribute's cause parameter is set to OPERATION_NOT_PERMITTED. For details and example IAM policies, see Using IAM to Manage Access to Amazon SWF Workflows in the Amazon SWF Developer Guide.

" }, "ScheduleActivityTaskFailedCause":{ "type":"string", @@ -3486,14 +3168,14 @@ }, "cause":{ "shape":"ScheduleActivityTaskFailedCause", - "documentation":"

The cause of the failure. This information is generated by the system and can be useful for diagnostic purposes.

If cause is set to OPERATION_NOT_PERMITTED, the decision failed because it lacked sufficient permissions. For details and example IAM policies, see Using IAM to Manage Access to Amazon SWF Workflows." + "documentation":"

The cause of the failure. This information is generated by the system and can be useful for diagnostic purposes.

If cause is set to OPERATION_NOT_PERMITTED, the decision failed because it lacked sufficient permissions. For details and example IAM policies, see Using IAM to Manage Access to Amazon SWF Workflows in the Amazon SWF Developer Guide.

" }, "decisionTaskCompletedEventId":{ "shape":"EventId", "documentation":"

The ID of the DecisionTaskCompleted event corresponding to the decision that resulted in the scheduling of this activity task. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.

" } }, - "documentation":"

Provides details of the ScheduleActivityTaskFailed event.

" + "documentation":"

Provides the details of the ScheduleActivityTaskFailed event.

" }, "ScheduleLambdaFunctionDecisionAttributes":{ "type":"structure", @@ -3504,22 +3186,26 @@ "members":{ "id":{ "shape":"FunctionId", - "documentation":"

Required. The SWF id of the AWS Lambda task.

The specified string must not start or end with whitespace. It must not contain a : (colon), / (slash), | (vertical bar), or any control characters (\\u0000-\\u001f | \\u007f - \\u009f). Also, it must not contain the literal string \"arn\".

" + "documentation":"

A string that identifies the Lambda function execution in the event history.

" }, "name":{ "shape":"FunctionName", - "documentation":"

Required. The name of the AWS Lambda function to invoke.

" + "documentation":"

The name, or ARN, of the Lambda function to schedule.

" + }, + "control":{ + "shape":"Data", + "documentation":"

The data attached to the event that the decider can use in subsequent workflow tasks. This data isn't sent to the Lambda task.

" }, "input":{ "shape":"FunctionInput", - "documentation":"

The input provided to the AWS Lambda function.

" + "documentation":"

The optional input data to be supplied to the Lambda function.

" }, "startToCloseTimeout":{ "shape":"DurationInSecondsOptional", - "documentation":"

If set, specifies the maximum duration the function may take to execute.

" + "documentation":"

The timeout value, in seconds, after which the Lambda function is considered to be failed once it has started. This can be any integer from 1-300 (1s-5m). If no value is supplied, then a default value of 300s is assumed.

" } }, - "documentation":"

Provides details of the ScheduleLambdaFunction decision.

Access Control

You can use IAM policies to control this decision's access to Amazon SWF resources as follows:

If the caller does not have sufficient permissions to invoke the action, or the parameter values fall outside the specified constraints, the action fails. The associated event attribute's cause parameter will be set to OPERATION_NOT_PERMITTED. For details and example IAM policies, see Using IAM to Manage Access to Amazon SWF Workflows.

" + "documentation":"

Decision attributes specified in scheduleLambdaFunctionDecisionAttributes within the list of decisions passed to RespondDecisionTaskCompleted.

" }, "ScheduleLambdaFunctionFailedCause":{ "type":"string", @@ -3541,22 +3227,22 @@ "members":{ "id":{ "shape":"FunctionId", - "documentation":"

The unique Amazon SWF ID of the AWS Lambda task.

" + "documentation":"

The ID provided in the ScheduleLambdaFunction decision that failed.

" }, "name":{ "shape":"FunctionName", - "documentation":"

The name of the scheduled AWS Lambda function.

" + "documentation":"

The name of the Lambda function.

" }, "cause":{ "shape":"ScheduleLambdaFunctionFailedCause", - "documentation":"

The cause of the failure. This information is generated by the system and can be useful for diagnostic purposes.

If cause is set to OPERATION_NOT_PERMITTED, the decision failed because it lacked sufficient permissions. For details and example IAM policies, see Using IAM to Manage Access to Amazon SWF Workflows." + "documentation":"

The cause of the failure. To help diagnose issues, use this information to trace back the chain of events leading up to this event.

If cause is set to OPERATION_NOT_PERMITTED, the decision failed because it lacked sufficient permissions. For details and example IAM policies, see Using IAM to Manage Access to Amazon SWF Workflows in the Amazon SWF Developer Guide.

" }, "decisionTaskCompletedEventId":{ "shape":"EventId", - "documentation":"

The ID of the DecisionTaskCompleted event corresponding to the decision that resulted in the scheduling of this AWS Lambda function. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.

" + "documentation":"

The ID of the DecisionTaskCompleted event corresponding to the decision that resulted in scheduling this Lambda task. To help diagnose issues, use this information to trace back the chain of events leading up to this event.

" } }, - "documentation":"

Provides details for the ScheduleLambdaFunctionFailed event.

" + "documentation":"

Provides the details of the ScheduleLambdaFunctionFailed event. It isn't set for other event types.

" }, "SignalExternalWorkflowExecutionDecisionAttributes":{ "type":"structure", @@ -3567,26 +3253,26 @@ "members":{ "workflowId":{ "shape":"WorkflowId", - "documentation":"

Required. The workflowId of the workflow execution to be signaled.

" + "documentation":"

The workflowId of the workflow execution to be signaled.

" }, "runId":{ - "shape":"RunIdOptional", + "shape":"WorkflowRunIdOptional", "documentation":"

The runId of the workflow execution to be signaled.

" }, "signalName":{ "shape":"SignalName", - "documentation":"

Required. The name of the signal.The target workflow execution will use the signal name and input to process the signal.

" + "documentation":"

The name of the signal. The target workflow execution uses the signal name and input to process the signal.

" }, "input":{ "shape":"Data", - "documentation":"

Optional. Input data to be provided with the signal. The target workflow execution will use the signal name and input data to process the signal.

" + "documentation":"

The input data to be provided with the signal. The target workflow execution uses the signal name and input data to process the signal.

" }, "control":{ "shape":"Data", - "documentation":"

Optional. Data attached to the event that can be used by the decider in subsequent decision tasks.

" + "documentation":"

The data attached to the event that can be used by the decider in subsequent decision tasks.

" } }, - "documentation":"

Provides details of the SignalExternalWorkflowExecution decision.

Access Control

You can use IAM policies to control this decision's access to Amazon SWF resources as follows:

If the caller does not have sufficient permissions to invoke the action, or the parameter values fall outside the specified constraints, the action fails. The associated event attribute's cause parameter will be set to OPERATION_NOT_PERMITTED. For details and example IAM policies, see Using IAM to Manage Access to Amazon SWF Workflows.

" + "documentation":"

Provides the details of the SignalExternalWorkflowExecution decision.

Access Control

You can use IAM policies to control this decision's access to Amazon SWF resources as follows:

If the caller doesn't have sufficient permissions to invoke the action, or the parameter values fall outside the specified constraints, the action fails. The associated event attribute's cause parameter is set to OPERATION_NOT_PERMITTED. For details and example IAM policies, see Using IAM to Manage Access to Amazon SWF Workflows in the Amazon SWF Developer Guide.

" }, "SignalExternalWorkflowExecutionFailedCause":{ "type":"string", @@ -3610,12 +3296,12 @@ "documentation":"

The workflowId of the external workflow execution that the signal was being delivered to.

" }, "runId":{ - "shape":"RunIdOptional", + "shape":"WorkflowRunIdOptional", "documentation":"

The runId of the external workflow execution that the signal was being delivered to.

" }, "cause":{ "shape":"SignalExternalWorkflowExecutionFailedCause", - "documentation":"

The cause of the failure. This information is generated by the system and can be useful for diagnostic purposes.

If cause is set to OPERATION_NOT_PERMITTED, the decision failed because it lacked sufficient permissions. For details and example IAM policies, see Using IAM to Manage Access to Amazon SWF Workflows." + "documentation":"

The cause of the failure. This information is generated by the system and can be useful for diagnostic purposes.

If cause is set to OPERATION_NOT_PERMITTED, the decision failed because it lacked sufficient permissions. For details and example IAM policies, see Using IAM to Manage Access to Amazon SWF Workflows in the Amazon SWF Developer Guide.

" }, "initiatedEventId":{ "shape":"EventId", @@ -3625,9 +3311,12 @@ "shape":"EventId", "documentation":"

The ID of the DecisionTaskCompleted event corresponding to the decision task that resulted in the SignalExternalWorkflowExecution decision for this signal. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.

" }, - "control":{"shape":"Data"} + "control":{ + "shape":"Data", + "documentation":"

The data attached to the event that the decider can use in subsequent workflow tasks. This data isn't sent to the workflow execution.

" + } }, - "documentation":"

Provides details of the SignalExternalWorkflowExecutionFailed event.

" + "documentation":"

Provides the details of the SignalExternalWorkflowExecutionFailed event.

" }, "SignalExternalWorkflowExecutionInitiatedEventAttributes":{ "type":"structure", @@ -3642,7 +3331,7 @@ "documentation":"

The workflowId of the external workflow execution.

" }, "runId":{ - "shape":"RunIdOptional", + "shape":"WorkflowRunIdOptional", "documentation":"

The runId of the external workflow execution to send the signal to.

" }, "signalName":{ @@ -3651,7 +3340,7 @@ }, "input":{ "shape":"Data", - "documentation":"

Input provided to the signal (if any).

" + "documentation":"

The input provided to the signal.

" }, "decisionTaskCompletedEventId":{ "shape":"EventId", @@ -3659,15 +3348,15 @@ }, "control":{ "shape":"Data", - "documentation":"

Optional. data attached to the event that can be used by the decider in subsequent decision tasks.

" + "documentation":"

Data attached to the event that can be used by the decider in subsequent decision tasks.

" } }, - "documentation":"

Provides details of the SignalExternalWorkflowExecutionInitiated event.

" + "documentation":"

Provides the details of the SignalExternalWorkflowExecutionInitiated event.

" }, "SignalName":{ "type":"string", - "min":1, - "max":256 + "max":256, + "min":1 }, "SignalWorkflowExecutionInput":{ "type":"structure", @@ -3686,7 +3375,7 @@ "documentation":"

The workflowId of the workflow execution to signal.

" }, "runId":{ - "shape":"RunIdOptional", + "shape":"WorkflowRunIdOptional", "documentation":"

The runId of the workflow execution to signal.

" }, "signalName":{ @@ -3708,15 +3397,15 @@ "members":{ "workflowType":{ "shape":"WorkflowType", - "documentation":"

Required. The type of the workflow execution to be started.

" + "documentation":"

The type of the workflow execution to be started.

" }, "workflowId":{ "shape":"WorkflowId", - "documentation":"

Required. The workflowId of the workflow execution.

The specified string must not start or end with whitespace. It must not contain a : (colon), / (slash), | (vertical bar), or any control characters (\\u0000-\\u001f | \\u007f - \\u009f). Also, it must not contain the literal string \"arn\".

" + "documentation":"

The workflowId of the workflow execution.

The specified string must not start or end with whitespace. It must not contain a : (colon), / (slash), | (vertical bar), or any control characters (\\u0000-\\u001f | \\u007f-\\u009f). Also, it must not contain the literal string arn.

" }, "control":{ "shape":"Data", - "documentation":"

Optional. Data attached to the event that can be used by the decider in subsequent workflow tasks. This data is not sent to the child workflow execution.

" + "documentation":"

The data attached to the event that can be used by the decider in subsequent workflow tasks. This data isn't sent to the child workflow execution.

" }, "input":{ "shape":"Data", @@ -3724,23 +3413,23 @@ }, "executionStartToCloseTimeout":{ "shape":"DurationInSecondsOptional", - "documentation":"

The total duration for this workflow execution. This overrides the defaultExecutionStartToCloseTimeout specified when registering the workflow type.

The duration is specified in seconds; an integer greater than or equal to 0. The value \"NONE\" can be used to specify unlimited duration.

An execution start-to-close timeout for this workflow execution must be specified either as a default for the workflow type or through this parameter. If neither this parameter is set nor a default execution start-to-close timeout was specified at registration time then a fault will be returned." + "documentation":"

The total duration for this workflow execution. This overrides the defaultExecutionStartToCloseTimeout specified when registering the workflow type.

The duration is specified in seconds, an integer greater than or equal to 0. You can use NONE to specify unlimited duration.

An execution start-to-close timeout for this workflow execution must be specified either as a default for the workflow type or through this parameter. If neither this parameter is set nor a default execution start-to-close timeout was specified at registration time then a fault is returned.

" }, "taskList":{ "shape":"TaskList", - "documentation":"

The name of the task list to be used for decision tasks of the child workflow execution.

A task list for this workflow execution must be specified either as a default for the workflow type or through this parameter. If neither this parameter is set nor a default task list was specified at registration time then a fault will be returned.

The specified string must not start or end with whitespace. It must not contain a : (colon), / (slash), | (vertical bar), or any control characters (\\u0000-\\u001f | \\u007f - \\u009f). Also, it must not contain the literal string \"arn\".

" + "documentation":"

The name of the task list to be used for decision tasks of the child workflow execution.

A task list for this workflow execution must be specified either as a default for the workflow type or through this parameter. If neither this parameter is set nor a default task list was specified at registration time then a fault is returned.

The specified string must not start or end with whitespace. It must not contain a : (colon), / (slash), | (vertical bar), or any control characters (\\u0000-\\u001f | \\u007f-\\u009f). Also, it must not contain the literal string arn.

" }, "taskPriority":{ "shape":"TaskPriority", - "documentation":"

Optional. A task priority that, if set, specifies the priority for a decision task of this workflow execution. This overrides the defaultTaskPriority specified when registering the workflow type. Valid values are integers that range from Java's Integer.MIN_VALUE (-2147483648) to Integer.MAX_VALUE (2147483647). Higher numbers indicate higher priority.

For more information about setting task priority, see Setting Task Priority in the Amazon Simple Workflow Developer Guide.

" + "documentation":"

A task priority that, if set, specifies the priority for a decision task of this workflow execution. This overrides the defaultTaskPriority specified when registering the workflow type. Valid values are integers that range from Java's Integer.MIN_VALUE (-2147483648) to Integer.MAX_VALUE (2147483647). Higher numbers indicate higher priority.

For more information about setting task priority, see Setting Task Priority in the Amazon SWF Developer Guide.

" }, "taskStartToCloseTimeout":{ "shape":"DurationInSecondsOptional", - "documentation":"

Specifies the maximum duration of decision tasks for this workflow execution. This parameter overrides the defaultTaskStartToCloseTimout specified when registering the workflow type using RegisterWorkflowType.

The duration is specified in seconds; an integer greater than or equal to 0. The value \"NONE\" can be used to specify unlimited duration.

A task start-to-close timeout for this workflow execution must be specified either as a default for the workflow type or through this parameter. If neither this parameter is set nor a default task start-to-close timeout was specified at registration time then a fault will be returned." + "documentation":"

Specifies the maximum duration of decision tasks for this workflow execution. This parameter overrides the defaultTaskStartToCloseTimout specified when registering the workflow type using RegisterWorkflowType.

The duration is specified in seconds, an integer greater than or equal to 0. You can use NONE to specify unlimited duration.

A task start-to-close timeout for this workflow execution must be specified either as a default for the workflow type or through this parameter. If neither this parameter is set nor a default task start-to-close timeout was specified at registration time then a fault is returned.

" }, "childPolicy":{ "shape":"ChildPolicy", - "documentation":"

Optional. If set, specifies the policy to use for the child workflow executions if the workflow execution being started is terminated by calling the TerminateWorkflowExecution action explicitly or due to an expired timeout. This policy overrides the default child policy specified when registering the workflow type using RegisterWorkflowType.

The supported child policies are:

A child policy for this workflow execution must be specified either as a default for the workflow type or through this parameter. If neither this parameter is set nor a default child policy was specified at registration time then a fault will be returned." + "documentation":"

If set, specifies the policy to use for the child workflow executions if the workflow execution being started is terminated by calling the TerminateWorkflowExecution action explicitly or due to an expired timeout. This policy overrides the default child policy specified when registering the workflow type using RegisterWorkflowType.

The supported child policies are:

A child policy for this workflow execution must be specified either as a default for the workflow type or through this parameter. If neither this parameter is set nor a default child policy was specified at registration time then a fault is returned.

" }, "tagList":{ "shape":"TagList", @@ -3748,10 +3437,10 @@ }, "lambdaRole":{ "shape":"Arn", - "documentation":"

The ARN of an IAM role that authorizes Amazon SWF to invoke AWS Lambda functions.

In order for this workflow execution to invoke AWS Lambda functions, an appropriate IAM role must be specified either as a default for the workflow type or through this field." + "documentation":"

The IAM role attached to the child workflow execution.

" } }, - "documentation":"

Provides details of the StartChildWorkflowExecution decision.

Access Control

You can use IAM policies to control this decision's access to Amazon SWF resources as follows:

If the caller does not have sufficient permissions to invoke the action, or the parameter values fall outside the specified constraints, the action fails. The associated event attribute's cause parameter will be set to OPERATION_NOT_PERMITTED. For details and example IAM policies, see Using IAM to Manage Access to Amazon SWF Workflows.

" + "documentation":"

Provides the details of the StartChildWorkflowExecution decision.

Access Control

You can use IAM policies to control this decision's access to Amazon SWF resources as follows:

If the caller doesn't have sufficient permissions to invoke the action, or the parameter values fall outside the specified constraints, the action fails. The associated event attribute's cause parameter is set to OPERATION_NOT_PERMITTED. For details and example IAM policies, see Using IAM to Manage Access to Amazon SWF Workflows in the Amazon SWF Developer Guide.

" }, "StartChildWorkflowExecutionFailedCause":{ "type":"string", @@ -3781,11 +3470,11 @@ "members":{ "workflowType":{ "shape":"WorkflowType", - "documentation":"

The workflow type provided in the StartChildWorkflowExecution decision that failed.

" + "documentation":"

The workflow type provided in the StartChildWorkflowExecution Decision that failed.

" }, "cause":{ "shape":"StartChildWorkflowExecutionFailedCause", - "documentation":"

The cause of the failure. This information is generated by the system and can be useful for diagnostic purposes.

If cause is set to OPERATION_NOT_PERMITTED, the decision failed because it lacked sufficient permissions. For details and example IAM policies, see Using IAM to Manage Access to Amazon SWF Workflows." + "documentation":"

The cause of the failure. This information is generated by the system and can be useful for diagnostic purposes.

When cause is set to OPERATION_NOT_PERMITTED, the decision fails because it lacks sufficient permissions. For details and example IAM policies, see Using IAM to Manage Access to Amazon SWF Workflows in the Amazon SWF Developer Guide.

" }, "workflowId":{ "shape":"WorkflowId", @@ -3793,15 +3482,18 @@ }, "initiatedEventId":{ "shape":"EventId", - "documentation":"

The ID of the StartChildWorkflowExecutionInitiated event corresponding to the StartChildWorkflowExecution decision to start this child workflow execution. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.

" + "documentation":"

When the cause is WORKFLOW_ALREADY_RUNNING, initiatedEventId is the ID of the StartChildWorkflowExecutionInitiated event that corresponds to the StartChildWorkflowExecution Decision to start the workflow execution. You can use this information to diagnose problems by tracing back the chain of events leading up to this event.

When the cause isn't WORKFLOW_ALREADY_RUNNING, initiatedEventId is set to 0 because the StartChildWorkflowExecutionInitiated event doesn't exist.

" }, "decisionTaskCompletedEventId":{ "shape":"EventId", - "documentation":"

The ID of the DecisionTaskCompleted event corresponding to the decision task that resulted in the StartChildWorkflowExecution decision to request this child workflow execution. This information can be useful for diagnosing problems by tracing back the cause of events.

" + "documentation":"

The ID of the DecisionTaskCompleted event corresponding to the decision task that resulted in the StartChildWorkflowExecution Decision to request this child workflow execution. This information can be useful for diagnosing problems by tracing back the chain of events.

" }, - "control":{"shape":"Data"} + "control":{ + "shape":"Data", + "documentation":"

The data attached to the event that the decider can use in subsequent workflow tasks. This data isn't sent to the child workflow execution.

" + } }, - "documentation":"

Provides details of the StartChildWorkflowExecutionFailed event.

" + "documentation":"

Provides the details of the StartChildWorkflowExecutionFailed event.

" }, "StartChildWorkflowExecutionInitiatedEventAttributes":{ "type":"structure", @@ -3823,15 +3515,15 @@ }, "control":{ "shape":"Data", - "documentation":"

Optional. Data attached to the event that can be used by the decider in subsequent decision tasks. This data is not sent to the activity.

" + "documentation":"

Data attached to the event that can be used by the decider in subsequent decision tasks. This data isn't sent to the child workflow execution.

" }, "input":{ "shape":"Data", - "documentation":"

The inputs provided to the child workflow execution (if any).

" + "documentation":"

The inputs provided to the child workflow execution.

" }, "executionStartToCloseTimeout":{ "shape":"DurationInSecondsOptional", - "documentation":"

The maximum duration for the child workflow execution. If the workflow execution is not closed within this duration, it will be timed out and force terminated.

The duration is specified in seconds; an integer greater than or equal to 0. The value \"NONE\" can be used to specify unlimited duration.

" + "documentation":"

The maximum duration for the child workflow execution. If the workflow execution isn't closed within this duration, it is timed out and force-terminated.

The duration is specified in seconds, an integer greater than or equal to 0. You can use NONE to specify unlimited duration.

" }, "taskList":{ "shape":"TaskList", @@ -3839,19 +3531,19 @@ }, "taskPriority":{ "shape":"TaskPriority", - "documentation":"

Optional. The priority assigned for the decision tasks for this workflow execution. Valid values are integers that range from Java's Integer.MIN_VALUE (-2147483648) to Integer.MAX_VALUE (2147483647). Higher numbers indicate higher priority.

For more information about setting task priority, see Setting Task Priority in the Amazon Simple Workflow Developer Guide.

" + "documentation":"

The priority assigned for the decision tasks for this workflow execution. Valid values are integers that range from Java's Integer.MIN_VALUE (-2147483648) to Integer.MAX_VALUE (2147483647). Higher numbers indicate higher priority.

For more information about setting task priority, see Setting Task Priority in the Amazon SWF Developer Guide.

" }, "decisionTaskCompletedEventId":{ "shape":"EventId", - "documentation":"

The ID of the DecisionTaskCompleted event corresponding to the decision task that resulted in the StartChildWorkflowExecution decision to request this child workflow execution. This information can be useful for diagnosing problems by tracing back the cause of events.

" + "documentation":"

The ID of the DecisionTaskCompleted event corresponding to the decision task that resulted in the StartChildWorkflowExecution Decision to request this child workflow execution. This information can be useful for diagnosing problems by tracing back the cause of events.

" }, "childPolicy":{ "shape":"ChildPolicy", - "documentation":"

The policy to use for the child workflow executions if this execution gets terminated by explicitly calling the TerminateWorkflowExecution action or due to an expired timeout.

The supported child policies are:

" + "documentation":"

The policy to use for the child workflow executions if this execution gets terminated by explicitly calling the TerminateWorkflowExecution action or due to an expired timeout.

The supported child policies are:

" }, "taskStartToCloseTimeout":{ "shape":"DurationInSecondsOptional", - "documentation":"

The maximum duration allowed for the decision tasks for this workflow execution.

The duration is specified in seconds; an integer greater than or equal to 0. The value \"NONE\" can be used to specify unlimited duration.

" + "documentation":"

The maximum duration allowed for the decision tasks for this workflow execution.

The duration is specified in seconds, an integer greater than or equal to 0. You can use NONE to specify unlimited duration.

" }, "tagList":{ "shape":"TagList", @@ -3859,10 +3551,10 @@ }, "lambdaRole":{ "shape":"Arn", - "documentation":"

The IAM role attached to this workflow execution to use when invoking AWS Lambda functions.

" + "documentation":"

The IAM role to attach to the child workflow execution.

" } }, - "documentation":"

Provides details of the StartChildWorkflowExecutionInitiated event.

" + "documentation":"

Provides the details of the StartChildWorkflowExecutionInitiated event.

" }, "StartLambdaFunctionFailedCause":{ "type":"string", @@ -3873,18 +3565,18 @@ "members":{ "scheduledEventId":{ "shape":"EventId", - "documentation":"

The ID of the LambdaFunctionScheduled event that was recorded when this AWS Lambda function was scheduled. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.

" + "documentation":"

The ID of the LambdaFunctionScheduled event that was recorded when this Lambda task was scheduled. To help diagnose issues, use this information to trace back the chain of events leading up to this event.

" }, "cause":{ "shape":"StartLambdaFunctionFailedCause", - "documentation":"

The cause of the failure. This information is generated by the system and can be useful for diagnostic purposes.

If cause is set to OPERATION_NOT_PERMITTED, the decision failed because it lacked sufficient permissions. For details and example IAM policies, see Using IAM to Manage Access to Amazon SWF Workflows." + "documentation":"

The cause of the failure. To help diagnose issues, use this information to trace back the chain of events leading up to this event.

If cause is set to OPERATION_NOT_PERMITTED, the decision failed because the IAM role attached to the execution lacked sufficient permissions. For details and example IAM policies, see Lambda Tasks in the Amazon SWF Developer Guide.

" }, "message":{ "shape":"CauseMessage", - "documentation":"

The error message (if any).

" + "documentation":"

A description that can help diagnose the cause of the fault.

" } }, - "documentation":"

Provides details for the StartLambdaFunctionFailed event.

" + "documentation":"

Provides the details of the StartLambdaFunctionFailed event. It isn't set for other event types.

" }, "StartTimerDecisionAttributes":{ "type":"structure", @@ -3895,18 +3587,18 @@ "members":{ "timerId":{ "shape":"TimerId", - "documentation":"

Required. The unique ID of the timer.

The specified string must not start or end with whitespace. It must not contain a : (colon), / (slash), | (vertical bar), or any control characters (\\u0000-\\u001f | \\u007f - \\u009f). Also, it must not contain the literal string \"arn\".

" + "documentation":"

The unique ID of the timer.

The specified string must not start or end with whitespace. It must not contain a : (colon), / (slash), | (vertical bar), or any control characters (\\u0000-\\u001f | \\u007f-\\u009f). Also, it must not contain the literal string arn.

" }, "control":{ "shape":"Data", - "documentation":"

Optional. Data attached to the event that can be used by the decider in subsequent workflow tasks.

" + "documentation":"

The data attached to the event that can be used by the decider in subsequent workflow tasks.

" }, "startToFireTimeout":{ "shape":"DurationInSeconds", - "documentation":"

Required. The duration to wait before firing the timer.

The duration is specified in seconds; an integer greater than or equal to 0.

" + "documentation":"

The duration to wait before firing the timer.

The duration is specified in seconds, an integer greater than or equal to 0.

" } }, - "documentation":"

Provides details of the StartTimer decision.

Access Control

You can use IAM policies to control this decision's access to Amazon SWF resources as follows:

If the caller does not have sufficient permissions to invoke the action, or the parameter values fall outside the specified constraints, the action fails. The associated event attribute's cause parameter will be set to OPERATION_NOT_PERMITTED. For details and example IAM policies, see Using IAM to Manage Access to Amazon SWF Workflows.

" + "documentation":"

Provides the details of the StartTimer decision.

Access Control

You can use IAM policies to control this decision's access to Amazon SWF resources as follows:

If the caller doesn't have sufficient permissions to invoke the action, or the parameter values fall outside the specified constraints, the action fails. The associated event attribute's cause parameter is set to OPERATION_NOT_PERMITTED. For details and example IAM policies, see Using IAM to Manage Access to Amazon SWF Workflows in the Amazon SWF Developer Guide.

" }, "StartTimerFailedCause":{ "type":"string", @@ -3931,14 +3623,14 @@ }, "cause":{ "shape":"StartTimerFailedCause", - "documentation":"

The cause of the failure. This information is generated by the system and can be useful for diagnostic purposes.

If cause is set to OPERATION_NOT_PERMITTED, the decision failed because it lacked sufficient permissions. For details and example IAM policies, see Using IAM to Manage Access to Amazon SWF Workflows." + "documentation":"

The cause of the failure. This information is generated by the system and can be useful for diagnostic purposes.

If cause is set to OPERATION_NOT_PERMITTED, the decision failed because it lacked sufficient permissions. For details and example IAM policies, see Using IAM to Manage Access to Amazon SWF Workflows in the Amazon SWF Developer Guide.

" }, "decisionTaskCompletedEventId":{ "shape":"EventId", "documentation":"

The ID of the DecisionTaskCompleted event corresponding to the decision task that resulted in the StartTimer decision for this activity task. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.

" } }, - "documentation":"

Provides details of the StartTimerFailed event.

" + "documentation":"

Provides the details of the StartTimerFailed event.

" }, "StartWorkflowExecutionInput":{ "type":"structure", @@ -3954,7 +3646,7 @@ }, "workflowId":{ "shape":"WorkflowId", - "documentation":"

The user defined identifier associated with the workflow execution. You can use this to associate a custom identifier with the workflow execution. You may specify the same identifier if a workflow execution is logically a restart of a previous execution. You cannot have two open workflow executions with the same workflowId at the same time.

The specified string must not start or end with whitespace. It must not contain a : (colon), / (slash), | (vertical bar), or any control characters (\\u0000-\\u001f | \\u007f - \\u009f). Also, it must not contain the literal string \"arn\".

" + "documentation":"

The user defined identifier associated with the workflow execution. You can use this to associate a custom identifier with the workflow execution. You may specify the same identifier if a workflow execution is logically a restart of a previous execution. You cannot have two open workflow executions with the same workflowId at the same time.

The specified string must not start or end with whitespace. It must not contain a : (colon), / (slash), | (vertical bar), or any control characters (\\u0000-\\u001f | \\u007f-\\u009f). Also, it must not contain the literal string arn.

" }, "workflowType":{ "shape":"WorkflowType", @@ -3962,11 +3654,11 @@ }, "taskList":{ "shape":"TaskList", - "documentation":"

The task list to use for the decision tasks generated for this workflow execution. This overrides the defaultTaskList specified when registering the workflow type.

A task list for this workflow execution must be specified either as a default for the workflow type or through this parameter. If neither this parameter is set nor a default task list was specified at registration time then a fault will be returned.

The specified string must not start or end with whitespace. It must not contain a : (colon), / (slash), | (vertical bar), or any control characters (\\u0000-\\u001f | \\u007f - \\u009f). Also, it must not contain the literal string \"arn\".

" + "documentation":"

The task list to use for the decision tasks generated for this workflow execution. This overrides the defaultTaskList specified when registering the workflow type.

A task list for this workflow execution must be specified either as a default for the workflow type or through this parameter. If neither this parameter is set nor a default task list was specified at registration time then a fault is returned.

The specified string must not start or end with whitespace. It must not contain a : (colon), / (slash), | (vertical bar), or any control characters (\\u0000-\\u001f | \\u007f-\\u009f). Also, it must not contain the literal string arn.

" }, "taskPriority":{ "shape":"TaskPriority", - "documentation":"

The task priority to use for this workflow execution. This will override any default priority that was assigned when the workflow type was registered. If not set, then the default task priority for the workflow type will be used. Valid values are integers that range from Java's Integer.MIN_VALUE (-2147483648) to Integer.MAX_VALUE (2147483647). Higher numbers indicate higher priority.

For more information about setting task priority, see Setting Task Priority in the Amazon Simple Workflow Developer Guide.

" + "documentation":"

The task priority to use for this workflow execution. This overrides any default priority that was assigned when the workflow type was registered. If not set, then the default task priority for the workflow type is used. Valid values are integers that range from Java's Integer.MIN_VALUE (-2147483648) to Integer.MAX_VALUE (2147483647). Higher numbers indicate higher priority.

For more information about setting task priority, see Setting Task Priority in the Amazon SWF Developer Guide.

" }, "input":{ "shape":"Data", @@ -3974,7 +3666,7 @@ }, "executionStartToCloseTimeout":{ "shape":"DurationInSecondsOptional", - "documentation":"

The total duration for this workflow execution. This overrides the defaultExecutionStartToCloseTimeout specified when registering the workflow type.

The duration is specified in seconds; an integer greater than or equal to 0. Exceeding this limit will cause the workflow execution to time out. Unlike some of the other timeout parameters in Amazon SWF, you cannot specify a value of \"NONE\" for this timeout; there is a one-year max limit on the time that a workflow execution can run.

An execution start-to-close timeout must be specified either through this parameter or as a default when the workflow type is registered. If neither this parameter nor a default execution start-to-close timeout is specified, a fault is returned." + "documentation":"

The total duration for this workflow execution. This overrides the defaultExecutionStartToCloseTimeout specified when registering the workflow type.

The duration is specified in seconds; an integer greater than or equal to 0. Exceeding this limit causes the workflow execution to time out. Unlike some of the other timeout parameters in Amazon SWF, you cannot specify a value of \"NONE\" for this timeout; there is a one-year max limit on the time that a workflow execution can run.

An execution start-to-close timeout must be specified either through this parameter or as a default when the workflow type is registered. If neither this parameter nor a default execution start-to-close timeout is specified, a fault is returned.

" }, "tagList":{ "shape":"TagList", @@ -3982,22 +3674,22 @@ }, "taskStartToCloseTimeout":{ "shape":"DurationInSecondsOptional", - "documentation":"

Specifies the maximum duration of decision tasks for this workflow execution. This parameter overrides the defaultTaskStartToCloseTimout specified when registering the workflow type using RegisterWorkflowType.

The duration is specified in seconds; an integer greater than or equal to 0. The value \"NONE\" can be used to specify unlimited duration.

A task start-to-close timeout for this workflow execution must be specified either as a default for the workflow type or through this parameter. If neither this parameter is set nor a default task start-to-close timeout was specified at registration time then a fault will be returned." + "documentation":"

Specifies the maximum duration of decision tasks for this workflow execution. This parameter overrides the defaultTaskStartToCloseTimeout specified when registering the workflow type using RegisterWorkflowType.

The duration is specified in seconds, an integer greater than or equal to 0. You can use NONE to specify unlimited duration.

A task start-to-close timeout for this workflow execution must be specified either as a default for the workflow type or through this parameter. If neither this parameter is set nor a default task start-to-close timeout was specified at registration time, a fault is returned.

" }, "childPolicy":{ "shape":"ChildPolicy", - "documentation":"

If set, specifies the policy to use for the child workflow executions of this workflow execution if it is terminated, by calling the TerminateWorkflowExecution action explicitly or due to an expired timeout. This policy overrides the default child policy specified when registering the workflow type using RegisterWorkflowType.

The supported child policies are:

A child policy for this workflow execution must be specified either as a default for the workflow type or through this parameter. If neither this parameter is set nor a default child policy was specified at registration time then a fault will be returned." + "documentation":"

If set, specifies the policy to use for the child workflow executions of this workflow execution if it is terminated, by calling the TerminateWorkflowExecution action explicitly or due to an expired timeout. This policy overrides the default child policy specified when registering the workflow type using RegisterWorkflowType.

The supported child policies are:

A child policy for this workflow execution must be specified either as a default for the workflow type or through this parameter. If neither this parameter is set nor a default child policy was specified at registration time, a fault is returned.

" }, "lambdaRole":{ "shape":"Arn", - "documentation":"

The ARN of an IAM role that authorizes Amazon SWF to invoke AWS Lambda functions.

In order for this workflow execution to invoke AWS Lambda functions, an appropriate IAM role must be specified either as a default for the workflow type or through this field." + "documentation":"

The IAM role to attach to this workflow execution.

Executions of this workflow type need IAM roles to invoke Lambda functions. If you don't attach an IAM role, any attempt to schedule a Lambda task fails. This results in a ScheduleLambdaFunctionFailed history event. For more information, see http://docs.aws.amazon.com/amazonswf/latest/developerguide/lambda-task.html in the Amazon SWF Developer Guide.
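For illustration only, and not part of the service model: the members above (taskList, taskPriority, the two timeouts, childPolicy, lambdaRole) are per-execution overrides of the defaults registered with RegisterWorkflowType. A minimal sketch of supplying them through the generated AWS SDK for Java v2 client follows; the domain, workflow type, task list name, role ARN, and timeout values are assumptions, not values from this model.

import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.swf.SwfClient;
import software.amazon.awssdk.services.swf.model.ChildPolicy;
import software.amazon.awssdk.services.swf.model.StartWorkflowExecutionRequest;
import software.amazon.awssdk.services.swf.model.TaskList;
import software.amazon.awssdk.services.swf.model.WorkflowType;

public class StartExecutionSketch {
    public static void main(String[] args) {
        // Region, domain, type, and ARN values are placeholders.
        try (SwfClient swf = SwfClient.builder().region(Region.US_EAST_1).build()) {
            String runId = swf.startWorkflowExecution(StartWorkflowExecutionRequest.builder()
                .domain("ExampleDomain")
                .workflowId("order-12345")
                .workflowType(WorkflowType.builder().name("ProcessOrder").version("1.0").build())
                // Per-execution overrides of the registered defaults:
                .taskList(TaskList.builder().name("orders-decider").build())
                .taskPriority("10")
                .executionStartToCloseTimeout("3600") // seconds; "NONE" is not allowed for this timeout
                .taskStartToCloseTimeout("NONE")      // unlimited decision-task duration
                .childPolicy(ChildPolicy.TERMINATE)
                .lambdaRole("arn:aws:iam::123456789012:role/swf-lambda-role") // placeholder ARN
                .input("{\"orderId\":12345}")
                .build()).runId();
            System.out.println("Started run: " + runId);
        }
    }
}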

" } } }, "Tag":{ "type":"string", - "min":1, - "max":256 + "max":256, + "min":0 }, "TagFilter":{ "type":"structure", @@ -4005,7 +3697,7 @@ "members":{ "tag":{ "shape":"Tag", - "documentation":"

Required. Specifies the tag that must be associated with the execution for it to meet the filter criteria.

" + "documentation":"

Specifies the tag that must be associated with the execution for it to meet the filter criteria.

" } }, "documentation":"

Used to filter the workflow executions in visibility APIs based on a tag.

" @@ -4026,14 +3718,11 @@ }, "documentation":"

Represents a task list.

" }, - "TaskPriority":{ - "type":"string", - "max":11 - }, + "TaskPriority":{"type":"string"}, "TaskToken":{ "type":"string", - "min":1, - "max":1024 + "max":1024, + "min":1 }, "TerminateReason":{ "type":"string", @@ -4055,20 +3744,20 @@ "documentation":"

The workflowId of the workflow execution to terminate.

" }, "runId":{ - "shape":"RunIdOptional", + "shape":"WorkflowRunIdOptional", "documentation":"

The runId of the workflow execution to terminate.

" }, "reason":{ "shape":"TerminateReason", - "documentation":"

Optional. A descriptive reason for terminating the workflow execution.

" + "documentation":"

A descriptive reason for terminating the workflow execution.

" }, "details":{ "shape":"Data", - "documentation":"

Optional. Details for terminating the workflow execution.

" + "documentation":"

Details for terminating the workflow execution.

" }, "childPolicy":{ "shape":"ChildPolicy", - "documentation":"

If set, specifies the policy to use for the child workflow executions of the workflow execution being terminated. This policy overrides the child policy specified for the workflow execution at registration time or when starting the execution.

The supported child policies are:

A child policy for this workflow execution must be specified either as a default for the workflow type or through this parameter. If neither this parameter is set nor a default child policy was specified at registration time then a fault will be returned." + "documentation":"

If set, specifies the policy to use for the child workflow executions of the workflow execution being terminated. This policy overrides the child policy specified for the workflow execution at registration time or when starting the execution.

The supported child policies are:

A child policy for this workflow execution must be specified either as a default for the workflow type or through this parameter. If neither this parameter is set nor a default child policy was specified at registration time, a fault is returned.
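For illustration only: a hedged sketch of calling TerminateWorkflowExecution through the generated AWS SDK for Java v2 client, passing the optional reason, details, and child-policy override described above. The domain, workflowId, and runId values are placeholders.

import software.amazon.awssdk.services.swf.SwfClient;
import software.amazon.awssdk.services.swf.model.ChildPolicy;
import software.amazon.awssdk.services.swf.model.TerminateWorkflowExecutionRequest;

public class TerminateExecutionSketch {
    public static void main(String[] args) {
        try (SwfClient swf = SwfClient.create()) {
            swf.terminateWorkflowExecution(TerminateWorkflowExecutionRequest.builder()
                .domain("ExampleDomain")                 // placeholder domain
                .workflowId("order-12345")               // placeholder workflowId
                .runId("example-run-id")                 // placeholder; omit to target the latest run
                .reason("Order cancelled by customer")
                .details("{\"cancelledBy\":\"support\"}")
                .childPolicy(ChildPolicy.REQUEST_CANCEL) // overrides the policy set at registration or start time
                .build());
        }
    }
}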

" } } }, @@ -4082,7 +3771,7 @@ "members":{ "timerId":{ "shape":"TimerId", - "documentation":"

The unique ID of the timer that was canceled.

" + "documentation":"

The unique ID of the timer that was canceled.

" }, "startedEventId":{ "shape":"EventId", @@ -4093,7 +3782,7 @@ "documentation":"

The ID of the DecisionTaskCompleted event corresponding to the decision task that resulted in the CancelTimer decision to cancel this timer. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.

" } }, - "documentation":"

Provides details of the TimerCanceled event.

" + "documentation":"

Provides the details of the TimerCanceled event.

" }, "TimerFiredEventAttributes":{ "type":"structure", @@ -4111,12 +3800,12 @@ "documentation":"

The ID of the TimerStarted event that was recorded when this timer was started. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.

" } }, - "documentation":"

Provides details of the TimerFired event.

" + "documentation":"

Provides the details of the TimerFired event.

" }, "TimerId":{ "type":"string", - "min":1, - "max":256 + "max":256, + "min":1 }, "TimerStartedEventAttributes":{ "type":"structure", @@ -4132,18 +3821,18 @@ }, "control":{ "shape":"Data", - "documentation":"

Optional. Data attached to the event that can be used by the decider in subsequent workflow tasks.

" + "documentation":"

Data attached to the event that can be used by the decider in subsequent workflow tasks.

" }, "startToFireTimeout":{ "shape":"DurationInSeconds", - "documentation":"

The duration of time after which the timer will fire.

The duration is specified in seconds; an integer greater than or equal to 0.

" + "documentation":"

The duration of time after which the timer fires.

The duration is specified in seconds, an integer greater than or equal to 0.

" }, "decisionTaskCompletedEventId":{ "shape":"EventId", "documentation":"

The ID of the DecisionTaskCompleted event corresponding to the decision task that resulted in the StartTimer decision for this activity task. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.

" } }, - "documentation":"

Provides details of the TimerStarted event.

" + "documentation":"

Provides the details of the TimerStarted event.

" }, "Timestamp":{"type":"timestamp"}, "Truncated":{"type":"boolean"}, @@ -4155,8 +3844,8 @@ "documentation":"

A description that may help with diagnosing the cause of the fault.

" } }, - "exception":true, - "documentation":"

Returned if the type already exists in the specified domain. You will get this fault even if the existing type is in deprecated status. You can specify another version if the intent is to create a new distinct version of the type.

" + "documentation":"

Returned if the type already exists in the specified domain. You get this fault even if the existing type is in deprecated status. You can specify another version if the intent is to create a new distinct version of the type.

", + "exception":true }, "TypeDeprecatedFault":{ "type":"structure", @@ -4166,8 +3855,8 @@ "documentation":"

A description that may help with diagnosing the cause of the fault.

" } }, - "exception":true, - "documentation":"

Returned when the specified activity or workflow type was already deprecated.

" + "documentation":"

Returned when the specified activity or workflow type was already deprecated.

", + "exception":true }, "UnknownResourceFault":{ "type":"structure", @@ -4177,13 +3866,13 @@ "documentation":"

A description that may help with diagnosing the cause of the fault.

" } }, - "exception":true, - "documentation":"

Returned when the named resource cannot be found with in the scope of this operation (region or domain). This could happen if the named resource was never created or is no longer available for this operation.

" + "documentation":"

Returned when the named resource cannot be found within the scope of this operation (region or domain). This could happen if the named resource was never created or is no longer available for this operation.

", + "exception":true }, "Version":{ "type":"string", - "min":1, - "max":64 + "max":64, + "min":1 }, "VersionOptional":{ "type":"string", @@ -4201,7 +3890,7 @@ "documentation":"

The user defined identifier associated with the workflow execution.

" }, "runId":{ - "shape":"RunId", + "shape":"WorkflowRunId", "documentation":"

A system-generated unique identifier for the workflow execution.

" } }, @@ -4215,8 +3904,8 @@ "documentation":"

A description that may help with diagnosing the cause of the fault.

" } }, - "exception":true, - "documentation":"

Returned by StartWorkflowExecution when an open execution with the same workflowId is already running in the specified domain.

" + "documentation":"

Returned by StartWorkflowExecution when an open execution with the same workflowId is already running in the specified domain.

", + "exception":true }, "WorkflowExecutionCancelRequestedCause":{ "type":"string", @@ -4238,7 +3927,7 @@ "documentation":"

If set, indicates that the request to cancel the workflow execution was automatically generated, and specifies the cause. This happens if the parent workflow execution times out or is terminated, and the child policy is set to cancel child executions.

" } }, - "documentation":"

Provides details of the WorkflowExecutionCancelRequested event.

" + "documentation":"

Provides the details of the WorkflowExecutionCancelRequested event.

" }, "WorkflowExecutionCanceledEventAttributes":{ "type":"structure", @@ -4246,14 +3935,14 @@ "members":{ "details":{ "shape":"Data", - "documentation":"

Details for the cancellation (if any).

" + "documentation":"

The details of the cancellation.

" }, "decisionTaskCompletedEventId":{ "shape":"EventId", "documentation":"

The ID of the DecisionTaskCompleted event corresponding to the decision task that resulted in the CancelWorkflowExecution decision for this cancellation request. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.

" } }, - "documentation":"

Provides details of the WorkflowExecutionCanceled event.

" + "documentation":"

Provides the details of the WorkflowExecutionCanceled event.

" }, "WorkflowExecutionCompletedEventAttributes":{ "type":"structure", @@ -4268,7 +3957,7 @@ "documentation":"

The ID of the DecisionTaskCompleted event corresponding to the decision task that resulted in the CompleteWorkflowExecution decision to complete this execution. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.

" } }, - "documentation":"

Provides details of the WorkflowExecutionCompleted event.

" + "documentation":"

Provides the details of the WorkflowExecutionCompleted event.

" }, "WorkflowExecutionConfiguration":{ "type":"structure", @@ -4281,11 +3970,11 @@ "members":{ "taskStartToCloseTimeout":{ "shape":"DurationInSeconds", - "documentation":"

The maximum duration allowed for decision tasks for this workflow execution.

The duration is specified in seconds; an integer greater than or equal to 0. The value \"NONE\" can be used to specify unlimited duration.

" + "documentation":"

The maximum duration allowed for decision tasks for this workflow execution.

The duration is specified in seconds, an integer greater than or equal to 0. You can use NONE to specify unlimited duration.

" }, "executionStartToCloseTimeout":{ "shape":"DurationInSeconds", - "documentation":"

The total duration for this workflow execution.

The duration is specified in seconds; an integer greater than or equal to 0. The value \"NONE\" can be used to specify unlimited duration.

" + "documentation":"

The total duration for this workflow execution.

The duration is specified in seconds, an integer greater than or equal to 0. You can use NONE to specify unlimited duration.

" }, "taskList":{ "shape":"TaskList", @@ -4293,15 +3982,15 @@ }, "taskPriority":{ "shape":"TaskPriority", - "documentation":"

The priority assigned to decision tasks for this workflow execution. Valid values are integers that range from Java's Integer.MIN_VALUE (-2147483648) to Integer.MAX_VALUE (2147483647). Higher numbers indicate higher priority.

For more information about setting task priority, see Setting Task Priority in the Amazon Simple Workflow Developer Guide.

" + "documentation":"

The priority assigned to decision tasks for this workflow execution. Valid values are integers that range from Java's Integer.MIN_VALUE (-2147483648) to Integer.MAX_VALUE (2147483647). Higher numbers indicate higher priority.

For more information about setting task priority, see Setting Task Priority in the Amazon SWF Developer Guide.

" }, "childPolicy":{ "shape":"ChildPolicy", - "documentation":"

The policy to use for the child workflow executions if this workflow execution is terminated, by calling the TerminateWorkflowExecution action explicitly or due to an expired timeout.

The supported child policies are:

" + "documentation":"

The policy to use for the child workflow executions if this workflow execution is terminated, by calling the TerminateWorkflowExecution action explicitly or due to an expired timeout.

The supported child policies are:

" }, "lambdaRole":{ "shape":"Arn", - "documentation":"

The IAM role used by this workflow execution when invoking AWS Lambda functions.

" + "documentation":"

The IAM role attached to the child workflow execution.

" } }, "documentation":"

The configuration settings for a workflow execution including timeout values, tasklist etc. These configuration settings are determined from the defaults specified when registering the workflow type and those specified when starting the workflow execution.

" @@ -4325,34 +4014,43 @@ "documentation":"

The ID of the DecisionTaskCompleted event corresponding to the decision task that resulted in the ContinueAsNewWorkflowExecution decision that started this execution. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.

" }, "newExecutionRunId":{ - "shape":"RunId", + "shape":"WorkflowRunId", "documentation":"

The runId of the new workflow execution.

" }, "executionStartToCloseTimeout":{ "shape":"DurationInSecondsOptional", - "documentation":"

The total duration allowed for the new workflow execution.

The duration is specified in seconds; an integer greater than or equal to 0. The value \"NONE\" can be used to specify unlimited duration.

" + "documentation":"

The total duration allowed for the new workflow execution.

The duration is specified in seconds, an integer greater than or equal to 0. You can use NONE to specify unlimited duration.

" + }, + "taskList":{ + "shape":"TaskList", + "documentation":"

The task list to use for the decisions of the new (continued) workflow execution.

" + }, + "taskPriority":{ + "shape":"TaskPriority", + "documentation":"

The priority of the task to use for the decisions of the new (continued) workflow execution.

" }, - "taskList":{"shape":"TaskList"}, - "taskPriority":{"shape":"TaskPriority"}, "taskStartToCloseTimeout":{ "shape":"DurationInSecondsOptional", - "documentation":"

The maximum duration of decision tasks for the new workflow execution.

The duration is specified in seconds; an integer greater than or equal to 0. The value \"NONE\" can be used to specify unlimited duration.

" + "documentation":"

The maximum duration of decision tasks for the new workflow execution.

The duration is specified in seconds, an integer greater than or equal to 0. You can use NONE to specify unlimited duration.

" }, "childPolicy":{ "shape":"ChildPolicy", - "documentation":"

The policy to use for the child workflow executions of the new execution if it is terminated by calling the TerminateWorkflowExecution action explicitly or due to an expired timeout.

The supported child policies are:

" + "documentation":"

The policy to use for the child workflow executions of the new execution if it is terminated by calling the TerminateWorkflowExecution action explicitly or due to an expired timeout.

The supported child policies are:

" }, "tagList":{ "shape":"TagList", "documentation":"

The list of tags associated with the new workflow execution.

" }, - "workflowType":{"shape":"WorkflowType"}, + "workflowType":{ + "shape":"WorkflowType", + "documentation":"

The workflow type of this execution.

" + }, "lambdaRole":{ "shape":"Arn", - "documentation":"

The IAM role attached to this workflow execution to use when invoking AWS Lambda functions.

" + "documentation":"

The IAM role to attach to the new (continued) workflow execution.

" } }, - "documentation":"

Provides details of the WorkflowExecutionContinuedAsNew event.

" + "documentation":"

Provides the details of the WorkflowExecutionContinuedAsNew event.

" }, "WorkflowExecutionCount":{ "type":"structure", @@ -4367,7 +4065,7 @@ "documentation":"

If set to true, indicates that the actual count was more than the maximum supported by this API and the count returned is the truncated value.

" } }, - "documentation":"

Contains the count of workflow executions returned from CountOpenWorkflowExecutions or CountClosedWorkflowExecutions

" + "documentation":"

Contains the count of workflow executions returned from CountOpenWorkflowExecutions or CountClosedWorkflowExecutions.

" }, "WorkflowExecutionDetail":{ "type":"structure", @@ -4406,18 +4104,18 @@ "members":{ "reason":{ "shape":"FailureReason", - "documentation":"

The descriptive reason provided for the failure (if any).

" + "documentation":"

The descriptive reason provided for the failure.

" }, "details":{ "shape":"Data", - "documentation":"

The details of the failure (if any).

" + "documentation":"

The details of the failure.

" }, "decisionTaskCompletedEventId":{ "shape":"EventId", "documentation":"

The ID of the DecisionTaskCompleted event corresponding to the decision task that resulted in the FailWorkflowExecution decision to fail this execution. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.

" } }, - "documentation":"

Provides details of the WorkflowExecutionFailed event.

" + "documentation":"

Provides the details of the WorkflowExecutionFailed event.

" }, "WorkflowExecutionFilter":{ "type":"structure", @@ -4461,7 +4159,7 @@ }, "closeStatus":{ "shape":"CloseStatus", - "documentation":"

If the execution status is closed then this specifies how the execution was closed:

" + "documentation":"

If the execution status is closed then this specifies how the execution was closed:

" }, "parent":{ "shape":"WorkflowExecution", @@ -4476,7 +4174,7 @@ "documentation":"

Set to true if a cancellation is requested for this workflow execution.

" } }, - "documentation":"

Contains information about a workflow execution.

" + "documentation":"

Contains information about a workflow execution.

" }, "WorkflowExecutionInfoList":{ "type":"list", @@ -4508,7 +4206,7 @@ "members":{ "openActivityTasks":{ "shape":"Count", - "documentation":"

The count of activity tasks whose status is OPEN.

" + "documentation":"

The count of activity tasks whose status is OPEN.

" }, "openDecisionTasks":{ "shape":"OpenDecisionTasksCount", @@ -4520,11 +4218,11 @@ }, "openChildWorkflowExecutions":{ "shape":"Count", - "documentation":"

The count of child workflow executions whose status is OPEN.

" + "documentation":"

The count of child workflow executions whose status is OPEN.

" }, "openLambdaFunctions":{ "shape":"Count", - "documentation":"

The count of AWS Lambda functions that are currently executing.

" + "documentation":"

The count of Lambda tasks whose status is OPEN.

" } }, "documentation":"

Contains the counts of open tasks, child workflow executions and timers for a workflow execution.

" @@ -4539,7 +4237,7 @@ }, "input":{ "shape":"Data", - "documentation":"

Inputs provided with the signal (if any). The decider can use the signal name and inputs to determine how to process the signal.

" + "documentation":"

The inputs provided with the signal. The decider can use the signal name and inputs to determine how to process the signal.

" }, "externalWorkflowExecution":{ "shape":"WorkflowExecution", @@ -4550,7 +4248,7 @@ "documentation":"

The ID of the SignalExternalWorkflowExecutionInitiated event corresponding to the SignalExternalWorkflow decision to signal this workflow execution.The source event with this ID can be found in the history of the source workflow execution. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event. This field is set only if the signal was initiated by another workflow execution.

" } }, - "documentation":"

Provides details of the WorkflowExecutionSignaled event.

" + "documentation":"

Provides the details of the WorkflowExecutionSignaled event.

" }, "WorkflowExecutionStartedEventAttributes":{ "type":"structure", @@ -4562,24 +4260,28 @@ "members":{ "input":{ "shape":"Data", - "documentation":"

The input provided to the workflow execution (if any).

" + "documentation":"

The input provided to the workflow execution.

" }, "executionStartToCloseTimeout":{ "shape":"DurationInSecondsOptional", - "documentation":"

The maximum duration for this workflow execution.

The duration is specified in seconds; an integer greater than or equal to 0. The value \"NONE\" can be used to specify unlimited duration.

" + "documentation":"

The maximum duration for this workflow execution.

The duration is specified in seconds, an integer greater than or equal to 0. You can use NONE to specify unlimited duration.

" }, "taskStartToCloseTimeout":{ "shape":"DurationInSecondsOptional", - "documentation":"

The maximum duration of decision tasks for this workflow type.

The duration is specified in seconds; an integer greater than or equal to 0. The value \"NONE\" can be used to specify unlimited duration.

" + "documentation":"

The maximum duration of decision tasks for this workflow type.

The duration is specified in seconds, an integer greater than or equal to 0. You can use NONE to specify unlimited duration.

" }, "childPolicy":{ "shape":"ChildPolicy", - "documentation":"

The policy to use for the child workflow executions if this workflow execution is terminated, by calling the TerminateWorkflowExecution action explicitly or due to an expired timeout.

The supported child policies are:

" + "documentation":"

The policy to use for the child workflow executions if this workflow execution is terminated, by calling the TerminateWorkflowExecution action explicitly or due to an expired timeout.

The supported child policies are:

" }, "taskList":{ "shape":"TaskList", "documentation":"

The name of the task list for scheduling the decision tasks for this workflow execution.

" }, + "taskPriority":{ + "shape":"TaskPriority", + "documentation":"

The priority of the decision tasks in the workflow execution.

" + }, "workflowType":{ "shape":"WorkflowType", "documentation":"

The workflow type of this execution.

" @@ -4588,22 +4290,21 @@ "shape":"TagList", "documentation":"

The list of tags associated with this workflow execution. An execution can have up to 5 tags.

" }, - "taskPriority":{"shape":"TaskPriority"}, "continuedExecutionRunId":{ - "shape":"RunIdOptional", + "shape":"WorkflowRunIdOptional", "documentation":"

If this workflow execution was started due to a ContinueAsNewWorkflowExecution decision, then it contains the runId of the previous workflow execution that was closed and continued as this execution.

" }, "parentWorkflowExecution":{ "shape":"WorkflowExecution", - "documentation":"

The source workflow execution that started this workflow execution. The member is not set if the workflow execution was not started by a workflow.

" + "documentation":"

The source workflow execution that started this workflow execution. The member isn't set if the workflow execution was not started by a workflow.

" }, "parentInitiatedEventId":{ "shape":"EventId", - "documentation":"

The ID of the StartChildWorkflowExecutionInitiated event corresponding to the StartChildWorkflowExecution decision to start this workflow execution. The source event with this ID can be found in the history of the source workflow execution. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.

" + "documentation":"

The ID of the StartChildWorkflowExecutionInitiated event corresponding to the StartChildWorkflowExecution Decision to start this workflow execution. The source event with this ID can be found in the history of the source workflow execution. This information can be useful for diagnosing problems by tracing back the chain of events leading up to this event.

" }, "lambdaRole":{ "shape":"Arn", - "documentation":"

The IAM role attached to this workflow execution to use when invoking AWS Lambda functions.

" + "documentation":"

The IAM role attached to the workflow execution.

" } }, "documentation":"

Provides details of WorkflowExecutionStarted event.

" @@ -4622,22 +4323,22 @@ "members":{ "reason":{ "shape":"TerminateReason", - "documentation":"

The reason provided for the termination (if any).

" + "documentation":"

The reason provided for the termination.

" }, "details":{ "shape":"Data", - "documentation":"

The details provided for the termination (if any).

" + "documentation":"

The details provided for the termination.

" }, "childPolicy":{ "shape":"ChildPolicy", - "documentation":"

The policy used for the child workflow executions of this workflow execution.

The supported child policies are:

" + "documentation":"

The policy used for the child workflow executions of this workflow execution.

The supported child policies are:

" }, "cause":{ "shape":"WorkflowExecutionTerminatedCause", "documentation":"

If set, indicates that the workflow execution was automatically terminated, and specifies the cause. This happens if the parent workflow execution times out or is terminated and the child policy is set to terminate child executions.

" } }, - "documentation":"

Provides details of the WorkflowExecutionTerminated event.

" + "documentation":"

Provides the details of the WorkflowExecutionTerminated event.

" }, "WorkflowExecutionTimedOutEventAttributes":{ "type":"structure", @@ -4652,10 +4353,10 @@ }, "childPolicy":{ "shape":"ChildPolicy", - "documentation":"

The policy used for the child workflow executions of this workflow execution.

The supported child policies are:

" + "documentation":"

The policy used for the child workflow executions of this workflow execution.

The supported child policies are:

" } }, - "documentation":"

Provides details of the WorkflowExecutionTimedOut event.

" + "documentation":"

Provides the details of the WorkflowExecutionTimedOut event.

" }, "WorkflowExecutionTimeoutType":{ "type":"string", @@ -4663,8 +4364,17 @@ }, "WorkflowId":{ "type":"string", - "min":1, - "max":256 + "max":256, + "min":1 + }, + "WorkflowRunId":{ + "type":"string", + "max":64, + "min":1 + }, + "WorkflowRunIdOptional":{ + "type":"string", + "max":64 }, "WorkflowType":{ "type":"structure", @@ -4675,11 +4385,11 @@ "members":{ "name":{ "shape":"Name", - "documentation":"

Required. The name of the workflow type.

The combination of workflow type name and version must be unique with in a domain." + "documentation":"

The name of the workflow type.

The combination of workflow type name and version must be unique within a domain.

" }, "version":{ "shape":"Version", - "documentation":"

Required. The version of the workflow type.

The combination of workflow type name and version must be unique with in a domain." + "documentation":"

The version of the workflow type.

The combination of workflow type name and version must be unique within a domain.

" } }, "documentation":"

Represents a workflow type.

" @@ -4689,27 +4399,27 @@ "members":{ "defaultTaskStartToCloseTimeout":{ "shape":"DurationInSecondsOptional", - "documentation":"

Optional. The default maximum duration, specified when registering the workflow type, that a decision task for executions of this workflow type might take before returning completion or failure. If the task does not close in the specified time then the task is automatically timed out and rescheduled. If the decider eventually reports a completion or failure, it is ignored. This default can be overridden when starting a workflow execution using the StartWorkflowExecution action or the StartChildWorkflowExecution decision.

The duration is specified in seconds; an integer greater than or equal to 0. The value \"NONE\" can be used to specify unlimited duration.

" + "documentation":"

The default maximum duration, specified when registering the workflow type, that a decision task for executions of this workflow type might take before returning completion or failure. If the task doesn't close in the specified time, the task is automatically timed out and rescheduled. If the decider eventually reports a completion or failure, it is ignored. This default can be overridden when starting a workflow execution using the StartWorkflowExecution action or the StartChildWorkflowExecution Decision.

The duration is specified in seconds, an integer greater than or equal to 0. You can use NONE to specify unlimited duration.

" }, "defaultExecutionStartToCloseTimeout":{ "shape":"DurationInSecondsOptional", - "documentation":"

Optional. The default maximum duration, specified when registering the workflow type, for executions of this workflow type. This default can be overridden when starting a workflow execution using the StartWorkflowExecution action or the StartChildWorkflowExecution decision.

The duration is specified in seconds; an integer greater than or equal to 0. The value \"NONE\" can be used to specify unlimited duration.

" + "documentation":"

The default maximum duration, specified when registering the workflow type, for executions of this workflow type. This default can be overridden when starting a workflow execution using the StartWorkflowExecution action or the StartChildWorkflowExecution Decision.

The duration is specified in seconds, an integer greater than or equal to 0. You can use NONE to specify unlimited duration.

" }, "defaultTaskList":{ "shape":"TaskList", - "documentation":"

Optional. The default task list, specified when registering the workflow type, for decisions tasks scheduled for workflow executions of this type. This default can be overridden when starting a workflow execution using the StartWorkflowExecution action or the StartChildWorkflowExecution decision.

" + "documentation":"

The default task list, specified when registering the workflow type, for decision tasks scheduled for workflow executions of this type. This default can be overridden when starting a workflow execution using the StartWorkflowExecution action or the StartChildWorkflowExecution Decision.

" }, "defaultTaskPriority":{ "shape":"TaskPriority", - "documentation":"

Optional. The default task priority, specified when registering the workflow type, for all decision tasks of this workflow type. This default can be overridden when starting a workflow execution using the StartWorkflowExecution action or the StartChildWorkflowExecution decision.

Valid values are integers that range from Java's Integer.MIN_VALUE (-2147483648) to Integer.MAX_VALUE (2147483647). Higher numbers indicate higher priority.

For more information about setting task priority, see Setting Task Priority in the Amazon Simple Workflow Developer Guide.

" + "documentation":"

The default task priority, specified when registering the workflow type, for all decision tasks of this workflow type. This default can be overridden when starting a workflow execution using the StartWorkflowExecution action or the StartChildWorkflowExecution decision.

Valid values are integers that range from Java's Integer.MIN_VALUE (-2147483648) to Integer.MAX_VALUE (2147483647). Higher numbers indicate higher priority.

For more information about setting task priority, see Setting Task Priority in the Amazon SWF Developer Guide.

" }, "defaultChildPolicy":{ "shape":"ChildPolicy", - "documentation":"

Optional. The default policy to use for the child workflow executions when a workflow execution of this type is terminated, by calling the TerminateWorkflowExecution action explicitly or due to an expired timeout. This default can be overridden when starting a workflow execution using the StartWorkflowExecution action or the StartChildWorkflowExecution decision.

The supported child policies are:

" + "documentation":"

The default policy to use for the child workflow executions when a workflow execution of this type is terminated, by calling the TerminateWorkflowExecution action explicitly or due to an expired timeout. This default can be overridden when starting a workflow execution using the StartWorkflowExecution action or the StartChildWorkflowExecution Decision.

The supported child policies are:

" }, "defaultLambdaRole":{ "shape":"Arn", - "documentation":"

The default IAM role to use when a workflow execution invokes a AWS Lambda function.

" + "documentation":"

The default IAM role attached to this workflow type.

Executions of this workflow type need IAM roles to invoke Lambda functions. If you don't specify an IAM role when starting this workflow type, the default Lambda role is attached to the execution. For more information, see http://docs.aws.amazon.com/amazonswf/latest/developerguide/lambda-task.html in the Amazon SWF Developer Guide.
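As a non-authoritative illustration of where these defaults come from: a sketch of registering a workflow type with the generated AWS SDK for Java v2 client, setting the default* values that this structure later reports. All names, timeout values, and the role ARN are assumptions.

import software.amazon.awssdk.services.swf.SwfClient;
import software.amazon.awssdk.services.swf.model.ChildPolicy;
import software.amazon.awssdk.services.swf.model.RegisterWorkflowTypeRequest;
import software.amazon.awssdk.services.swf.model.TaskList;

public class RegisterTypeSketch {
    public static void main(String[] args) {
        try (SwfClient swf = SwfClient.create()) {
            swf.registerWorkflowType(RegisterWorkflowTypeRequest.builder()
                .domain("ExampleDomain") // placeholder
                .name("ProcessOrder")
                .version("1.0")
                .defaultTaskList(TaskList.builder().name("orders-decider").build())
                .defaultTaskPriority("0")
                .defaultTaskStartToCloseTimeout("30")        // seconds, or "NONE" for unlimited
                .defaultExecutionStartToCloseTimeout("3600") // seconds; "NONE" is not allowed here
                .defaultChildPolicy(ChildPolicy.TERMINATE)
                .defaultLambdaRole("arn:aws:iam::123456789012:role/swf-lambda-role") // placeholder ARN
                .build());
        }
    }
}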

" } }, "documentation":"

The configuration settings of a workflow type.

" @@ -4723,11 +4433,11 @@ "members":{ "typeInfo":{ "shape":"WorkflowTypeInfo", - "documentation":"

General information about the workflow type.

The status of the workflow type (returned in the WorkflowTypeInfo structure) can be one of the following.

" + "documentation":"

General information about the workflow type.

The status of the workflow type (returned in the WorkflowTypeInfo structure) can be one of the following.

" }, "configuration":{ "shape":"WorkflowTypeConfiguration", - "documentation":"

Configuration settings of the workflow type registered through RegisterWorkflowType

" + "documentation":"

Configuration settings of the workflow type registered through RegisterWorkflowType.

" } }, "documentation":"

Contains details about a workflow type.

" @@ -4738,7 +4448,7 @@ "members":{ "name":{ "shape":"Name", - "documentation":"

Required. Name of the workflow type.

" + "documentation":"

Name of the workflow type.

" }, "version":{ "shape":"VersionOptional", @@ -4798,6 +4508,5 @@ "documentation":"

Contains a paginated list of information structures about workflow types.

" } }, - "examples":{ - } + "documentation":"Amazon Simple Workflow Service

The Amazon Simple Workflow Service (Amazon SWF) makes it easy to build applications that use Amazon's cloud to coordinate work across distributed components. In Amazon SWF, a task represents a logical unit of work that is performed by a component of your workflow. Coordinating tasks in a workflow involves managing intertask dependencies, scheduling, and concurrency in accordance with the logical flow of the application.

Amazon SWF gives you full control over implementing tasks and coordinating them without worrying about underlying complexities such as tracking their progress and maintaining their state.

This documentation serves as reference only. For a broader overview of the Amazon SWF programming model, see the Amazon SWF Developer Guide.

" } diff --git a/services/sqs/src/main/resources/codegen-resources/service-2.json b/services/sqs/src/main/resources/codegen-resources/service-2.json index faf50605d525..35e6ad0bd12b 100644 --- a/services/sqs/src/main/resources/codegen-resources/service-2.json +++ b/services/sqs/src/main/resources/codegen-resources/service-2.json @@ -21,7 +21,7 @@ "errors":[ {"shape":"OverLimit"} ], - "documentation":"

Adds a permission to a queue for a specific principal. This allows sharing access to the queue.

When you create a queue, you have full control access rights for the queue. Only you, the owner of the queue, can grant or deny permissions to the queue. For more information about these permissions, see Shared Queues in the Amazon SQS Developer Guide.

AddPermission writes an Amazon-SQS-generated policy. If you want to write your own policy, use SetQueueAttributes to upload your policy. For more information about writing your own policy, see Using The Access Policy Language in the Amazon SQS Developer Guide.

Some actions take lists of parameters. These lists are specified using the param.n notation. Values of n are integers starting from 1. For example, a parameter list with two elements looks like this:

&Attribute.1=this

&Attribute.2=that

" + "documentation":"

Adds a permission to a queue for a specific principal. This allows sharing access to the queue.

When you create a queue, you have full control access rights for the queue. Only you, the owner of the queue, can grant or deny permissions to the queue. For more information about these permissions, see Shared Queues in the Amazon Simple Queue Service Developer Guide.

AddPermission writes an Amazon-SQS-generated policy. If you want to write your own policy, use SetQueueAttributes to upload your policy. For more information about writing your own policy, see Using The Access Policy Language in the Amazon Simple Queue Service Developer Guide.

Some actions take lists of parameters. These lists are specified using the param.n notation. Values of n are integers starting from 1. For example, a parameter list with two elements looks like this:

&Attribute.1=this

&Attribute.2=that
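For illustration only: the param.n lists above show the wire-level Query form; with a generated client you pass plain collections instead. A minimal sketch assuming the AWS SDK for Java v2 client; the queue URL, label, and account ID are placeholders.

import software.amazon.awssdk.services.sqs.SqsClient;
import software.amazon.awssdk.services.sqs.model.AddPermissionRequest;

public class AddPermissionSketch {
    public static void main(String[] args) {
        try (SqsClient sqs = SqsClient.create()) {
            sqs.addPermission(AddPermissionRequest.builder()
                .queueUrl("https://sqs.us-east-1.amazonaws.com/123456789012/MyQueue") // placeholder URL
                .label("SharedSendAccess")
                .awsAccountIds("111122223333") // the principal's account; placeholder
                .actions("SendMessage")        // also grants SendMessageBatch
                .build());
        }
    }
}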

" }, "ChangeMessageVisibility":{ "name":"ChangeMessageVisibility", @@ -34,7 +34,7 @@ {"shape":"MessageNotInflight"}, {"shape":"ReceiptHandleIsInvalid"} ], - "documentation":"

Changes the visibility timeout of a specified message in a queue to a new value. The maximum allowed timeout value is 12 hours. Thus, you can't extend the timeout of a message in an existing queue to more than a total visibility timeout of 12 hours. For more information, see Visibility Timeout in the Amazon SQS Developer Guide.

For example, you have a message and with the default visibility timeout of 5 minutes. After 3 minutes, you call ChangeMessageVisiblity with a timeout of 10 minutes. At that time, the timeout for the message is extended by 10 minutes beyond the time of the ChangeMessageVisibility action. This results in a total visibility timeout of 13 minutes. You can continue to call the ChangeMessageVisibility to extend the visibility timeout to a maximum of 12 hours. If you try to extend the visibility timeout beyond 12 hours, your request is rejected.

A message is considered to be in flight after it's received from a queue by a consumer, but not yet deleted from the queue.

For standard queues, there can be a maximum of 120,000 inflight messages per queue. If you reach this limit, Amazon SQS returns the OverLimit error message. To avoid reaching the limit, you should delete messages from the queue after they're processed. You can also increase the number of queues you use to process your messages.

For FIFO queues, there can be a maximum of 20,000 inflight messages per queue. If you reach this limit, Amazon SQS returns no error messages.

If you attempt to set the VisibilityTimeout to a value greater than the maximum time left, Amazon SQS returns an error. Amazon SQS doesn't automatically recalculate and increase the timeout to the maximum remaining time.

Unlike with a queue, when you change the visibility timeout for a specific message the timeout value is applied immediately but isn't saved in memory for that message. If you don't delete a message after it is received, the visibility timeout for the message reverts to the original timeout value (not to the value you set using the ChangeMessageVisibility action) the next time the message is received.

" + "documentation":"

Changes the visibility timeout of a specified message in a queue to a new value. The maximum allowed timeout value is 12 hours. Thus, you can't extend the timeout of a message in an existing queue to more than a total visibility timeout of 12 hours. For more information, see Visibility Timeout in the Amazon Simple Queue Service Developer Guide.

For example, you have a message with a visibility timeout of 5 minutes. After 3 minutes, you call ChangeMessageVisibility with a timeout of 10 minutes. At that time, the timeout for the message is extended by 10 minutes beyond the time of the ChangeMessageVisibility action. This results in a total visibility timeout of 13 minutes. You can continue to call ChangeMessageVisibility to extend the visibility timeout to a maximum of 12 hours. If you try to extend the visibility timeout beyond 12 hours, your request is rejected.

A message is considered to be in flight after it's received from a queue by a consumer, but not yet deleted from the queue.

For standard queues, there can be a maximum of 120,000 inflight messages per queue. If you reach this limit, Amazon SQS returns the OverLimit error message. To avoid reaching the limit, you should delete messages from the queue after they're processed. You can also increase the number of queues you use to process your messages.

For FIFO queues, there can be a maximum of 20,000 inflight messages per queue. If you reach this limit, Amazon SQS returns no error messages.

If you attempt to set the VisibilityTimeout to a value greater than the maximum time left, Amazon SQS returns an error. Amazon SQS doesn't automatically recalculate and increase the timeout to the maximum remaining time.

Unlike with a queue, when you change the visibility timeout for a specific message the timeout value is applied immediately but isn't saved in memory for that message. If you don't delete a message after it is received, the visibility timeout for the message reverts to the original timeout value (not to the value you set using the ChangeMessageVisibility action) the next time the message is received.
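To make the 5-minute-plus-10-minute example above concrete, here is a hedged sketch using the AWS SDK for Java v2 client; the receipt handle must come from a prior ReceiveMessage call, and the queue URL is a placeholder.

import software.amazon.awssdk.services.sqs.SqsClient;
import software.amazon.awssdk.services.sqs.model.ChangeMessageVisibilityRequest;

public class ExtendVisibilitySketch {
    // Gives the in-flight message another 10 minutes from now, replacing its remaining timeout.
    static void extendByTenMinutes(SqsClient sqs, String queueUrl, String receiptHandle) {
        sqs.changeMessageVisibility(ChangeMessageVisibilityRequest.builder()
            .queueUrl(queueUrl)
            .receiptHandle(receiptHandle)
            .visibilityTimeout(600) // seconds; total in-flight time still cannot exceed 12 hours
            .build());
    }
}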

" }, "ChangeMessageVisibilityBatch":{ "name":"ChangeMessageVisibilityBatch", @@ -70,7 +70,7 @@ {"shape":"QueueDeletedRecently"}, {"shape":"QueueNameExists"} ], - "documentation":"

Creates a new standard or FIFO queue. You can pass one or more attributes in the request. Keep the following caveats in mind:

To successfully create a new queue, you must provide a queue name that adheres to the limits related to queues and is unique within the scope of your queues.

To get the queue URL, use the GetQueueUrl action. GetQueueUrl requires only the QueueName parameter. be aware of existing queue names:

Some actions take lists of parameters. These lists are specified using the param.n notation. Values of n are integers starting from 1. For example, a parameter list with two elements looks like this:

&Attribute.1=this

&Attribute.2=that

" + "documentation":"

Creates a new standard or FIFO queue. You can pass one or more attributes in the request. Keep the following caveats in mind:

To successfully create a new queue, you must provide a queue name that adheres to the limits related to queues and is unique within the scope of your queues.

To get the queue URL, use the GetQueueUrl action. GetQueueUrl requires only the QueueName parameter. Be aware of existing queue names:

Some actions take lists of parameters. These lists are specified using the param.n notation. Values of n are integers starting from 1. For example, a parameter list with two elements looks like this:

&Attribute.1=this

&Attribute.2=that
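For illustration only: a sketch of creating a FIFO queue and passing request attributes through the generated AWS SDK for Java v2 client rather than the raw Attribute.n form shown above. The queue name and attribute choices are assumptions.

import java.util.Map;
import software.amazon.awssdk.services.sqs.SqsClient;
import software.amazon.awssdk.services.sqs.model.CreateQueueRequest;
import software.amazon.awssdk.services.sqs.model.QueueAttributeName;

public class CreateQueueSketch {
    public static void main(String[] args) {
        try (SqsClient sqs = SqsClient.create()) {
            String queueUrl = sqs.createQueue(CreateQueueRequest.builder()
                .queueName("orders.fifo") // FIFO queue names must end in .fifo
                .attributes(Map.of(
                    QueueAttributeName.FIFO_QUEUE, "true",
                    QueueAttributeName.CONTENT_BASED_DEDUPLICATION, "true",
                    QueueAttributeName.VISIBILITY_TIMEOUT, "120"))
                .build()).queueUrl();
            System.out.println("Created: " + queueUrl);
        }
    }
}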

" }, "DeleteMessage":{ "name":"DeleteMessage", @@ -111,7 +111,7 @@ "requestUri":"/" }, "input":{"shape":"DeleteQueueRequest"}, - "documentation":"

Deletes the queue specified by the QueueUrl, even if the queue is empty. If the specified queue doesn't exist, Amazon SQS returns a successful response.

Be careful with the DeleteQueue action: When you delete a queue, any messages in the queue are no longer available.

When you delete a queue, the deletion process takes up to 60 seconds. Requests you send involving that queue during the 60 seconds might succeed. For example, a SendMessage request might succeed, but after 60 seconds the queue and the message you sent no longer exist.

When you delete a queue, you must wait at least 60 seconds before creating a queue with the same name.

" + "documentation":"

Deletes the queue specified by the QueueUrl, regardless of the queue's contents. If the specified queue doesn't exist, Amazon SQS returns a successful response.

Be careful with the DeleteQueue action: When you delete a queue, any messages in the queue are no longer available.

When you delete a queue, the deletion process takes up to 60 seconds. Requests you send involving that queue during the 60 seconds might succeed. For example, a SendMessage request might succeed, but after 60 seconds the queue and the message you sent no longer exist.

When you delete a queue, you must wait at least 60 seconds before creating a queue with the same name.

" }, "GetQueueAttributes":{ "name":"GetQueueAttributes", @@ -143,7 +143,7 @@ "errors":[ {"shape":"QueueDoesNotExist"} ], - "documentation":"

Returns the URL of an existing queue. This action provides a simple way to retrieve the URL of an Amazon SQS queue.

To access a queue that belongs to another AWS account, use the QueueOwnerAWSAccountId parameter to specify the account ID of the queue's owner. The queue's owner must grant you permission to access the queue. For more information about shared queue access, see AddPermission or see Shared Queues in the Amazon SQS Developer Guide.

" + "documentation":"

Returns the URL of an existing queue. This action provides a simple way to retrieve the URL of an Amazon SQS queue.

To access a queue that belongs to another AWS account, use the QueueOwnerAWSAccountId parameter to specify the account ID of the queue's owner. The queue's owner must grant you permission to access the queue. For more information about shared queue access, see AddPermission or see Shared Queues in the Amazon Simple Queue Service Developer Guide.

" }, "ListDeadLetterSourceQueues":{ "name":"ListDeadLetterSourceQueues", @@ -159,7 +159,20 @@ "errors":[ {"shape":"QueueDoesNotExist"} ], - "documentation":"

Returns a list of your queues that have the RedrivePolicy queue attribute configured with a dead letter queue.

For more information about using dead letter queues, see Using Amazon SQS Dead Letter Queues in the Amazon SQS Developer Guide.

" + "documentation":"

Returns a list of your queues that have the RedrivePolicy queue attribute configured with a dead-letter queue.

For more information about using dead-letter queues, see Using Amazon SQS Dead-Letter Queues in the Amazon Simple Queue Service Developer Guide.

" + }, + "ListQueueTags":{ + "name":"ListQueueTags", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"ListQueueTagsRequest"}, + "output":{ + "shape":"ListQueueTagsResult", + "resultWrapper":"ListQueueTagsResult" + }, + "documentation":"

List all cost allocation tags added to the specified Amazon SQS queue. For an overview, see Tagging Amazon SQS Queues in the Amazon Simple Queue Service Developer Guide.

When you use queue tags, keep the following guidelines in mind:

For a full list of tag restrictions, see Limits Related to Queues in the Amazon Simple Queue Service Developer Guide.

" }, "ListQueues":{ "name":"ListQueues", @@ -201,7 +214,7 @@ "errors":[ {"shape":"OverLimit"} ], - "documentation":"

Retrieves one or more messages (up to 10), from the specified queue. Using the WaitTimeSeconds parameter enables long-poll support. For more information, see Amazon SQS Long Polling in the Amazon SQS Developer Guide.

Short poll is the default behavior where a weighted random set of machines is sampled on a ReceiveMessage call. Thus, only the messages on the sampled machines are returned. If the number of messages in the queue is small (fewer than 1,000), you most likely get fewer messages than you requested per ReceiveMessage call. If the number of messages in the queue is extremely small, you might not receive any messages in a particular ReceiveMessage response. If this happens, repeat the request.

For each message returned, the response includes the following:

The receipt handle is the identifier you must provide when deleting the message. For more information, see Queue and Message Identifiers in the Amazon SQS Developer Guide.

You can provide the VisibilityTimeout parameter in your request. The parameter is applied to the messages that Amazon SQS returns in the response. If you don't include the parameter, the overall visibility timeout for the queue is used for the returned messages. For more information, see Visibility Timeout in the Amazon SQS Developer Guide.

A message that isn't deleted or a message whose visibility isn't extended before the visibility timeout expires counts as a failed receive. Depending on the configuration of the queue, the message might be sent to the dead letter queue.

In the future, new attributes might be added. If you write code that calls this action, we recommend that you structure your code so that it can handle new attributes gracefully.

" + "documentation":"

Retrieves one or more messages (up to 10), from the specified queue. Using the WaitTimeSeconds parameter enables long-poll support. For more information, see Amazon SQS Long Polling in the Amazon Simple Queue Service Developer Guide.

Short poll is the default behavior where a weighted random set of machines is sampled on a ReceiveMessage call. Thus, only the messages on the sampled machines are returned. If the number of messages in the queue is small (fewer than 1,000), you most likely get fewer messages than you requested per ReceiveMessage call. If the number of messages in the queue is extremely small, you might not receive any messages in a particular ReceiveMessage response. If this happens, repeat the request.

For each message returned, the response includes the following:

The receipt handle is the identifier you must provide when deleting the message. For more information, see Queue and Message Identifiers in the Amazon Simple Queue Service Developer Guide.

You can provide the VisibilityTimeout parameter in your request. The parameter is applied to the messages that Amazon SQS returns in the response. If you don't include the parameter, the overall visibility timeout for the queue is used for the returned messages. For more information, see Visibility Timeout in the Amazon Simple Queue Service Developer Guide.

A message that isn't deleted or a message whose visibility isn't extended before the visibility timeout expires counts as a failed receive. Depending on the configuration of the queue, the message might be sent to the dead-letter queue.

In the future, new attributes might be added. If you write code that calls this action, we recommend that you structure your code so that it can handle new attributes gracefully.
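A minimal long-polling sketch with the AWS SDK for Java v2 client, tying together the WaitTimeSeconds, receipt-handle, and delete-after-processing points above; the queue URL is a placeholder.

import software.amazon.awssdk.services.sqs.SqsClient;
import software.amazon.awssdk.services.sqs.model.DeleteMessageRequest;
import software.amazon.awssdk.services.sqs.model.Message;
import software.amazon.awssdk.services.sqs.model.ReceiveMessageRequest;

public class ReceiveSketch {
    public static void main(String[] args) {
        String queueUrl = "https://sqs.us-east-1.amazonaws.com/123456789012/MyQueue"; // placeholder
        try (SqsClient sqs = SqsClient.create()) {
            // WaitTimeSeconds > 0 enables long polling; at most 10 messages are returned per call.
            for (Message m : sqs.receiveMessage(ReceiveMessageRequest.builder()
                    .queueUrl(queueUrl)
                    .maxNumberOfMessages(10)
                    .waitTimeSeconds(20)
                    .build()).messages()) {
                System.out.println(m.messageId() + ": " + m.body());
                // Delete before the visibility timeout expires so the receive doesn't count as failed.
                sqs.deleteMessage(DeleteMessageRequest.builder()
                    .queueUrl(queueUrl)
                    .receiptHandle(m.receiptHandle())
                    .build());
            }
        }
    }
}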

" }, "RemovePermission":{ "name":"RemovePermission", @@ -261,6 +274,24 @@ {"shape":"InvalidAttributeName"} ], "documentation":"

Sets the value of one or more queue attributes. When you change a queue's attributes, the change can take up to 60 seconds for most of the attributes to propagate throughout the Amazon SQS system. Changes made to the MessageRetentionPeriod attribute can take up to 15 minutes.

In the future, new attributes might be added. If you write code that calls this action, we recommend that you structure your code so that it can handle new attributes gracefully.

" + }, + "TagQueue":{ + "name":"TagQueue", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"TagQueueRequest"}, + "documentation":"

Add cost allocation tags to the specified Amazon SQS queue. For an overview, see Tagging Amazon SQS Queues in the Amazon Simple Queue Service Developer Guide.

When you use queue tags, keep the following guidelines in mind:

For a full list of tag restrictions, see Limits Related to Queues in the Amazon Simple Queue Service Developer Guide.

" + }, + "UntagQueue":{ + "name":"UntagQueue", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"UntagQueueRequest"}, + "documentation":"

Remove cost allocation tags from the specified Amazon SQS queue. For an overview, see Tagging Amazon SQS Queues in the Amazon Simple Queue Service Developer Guide.

When you use queue tags, keep the following guidelines in mind:

For a full list of tag restrictions, see Limits Related to Queues in the Amazon Simple Queue Service Developer Guide.
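To round out the three tagging actions added in this revision (TagQueue, ListQueueTags, UntagQueue), a hedged sketch with the AWS SDK for Java v2 client; the queue URL and tag keys/values are placeholders.

import java.util.Map;
import software.amazon.awssdk.services.sqs.SqsClient;
import software.amazon.awssdk.services.sqs.model.ListQueueTagsRequest;
import software.amazon.awssdk.services.sqs.model.TagQueueRequest;
import software.amazon.awssdk.services.sqs.model.UntagQueueRequest;

public class QueueTagSketch {
    public static void main(String[] args) {
        String queueUrl = "https://sqs.us-east-1.amazonaws.com/123456789012/MyQueue"; // placeholder
        try (SqsClient sqs = SqsClient.create()) {
            // Add cost allocation tags.
            sqs.tagQueue(TagQueueRequest.builder()
                .queueUrl(queueUrl)
                .tags(Map.of("team", "billing", "env", "prod"))
                .build());
            // List the tags currently on the queue.
            Map<String, String> tags = sqs.listQueueTags(
                ListQueueTagsRequest.builder().queueUrl(queueUrl).build()).tags();
            System.out.println("Current tags: " + tags);
            // Remove one tag by key.
            sqs.untagQueue(UntagQueueRequest.builder()
                .queueUrl(queueUrl)
                .tagKeys("env")
                .build());
        }
    }
}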

" } }, "shapes":{ @@ -299,11 +330,11 @@ }, "AWSAccountIds":{ "shape":"AWSAccountIdList", - "documentation":"

The AWS account number of the principal who is given permission. The principal must have an AWS account, but does not need to be signed up for Amazon SQS. For information about locating the AWS account identification, see Your AWS Identifiers in the Amazon SQS Developer Guide.

" + "documentation":"

The AWS account number of the principal who is given permission. The principal must have an AWS account, but does not need to be signed up for Amazon SQS. For information about locating the AWS account identification, see Your AWS Identifiers in the Amazon Simple Queue Service Developer Guide.

" }, "Actions":{ "shape":"ActionNameList", - "documentation":"

The action the client wants to allow for the specified principal. The following values are valid:

For more information about these actions, see Understanding Permissions in the Amazon SQS Developer Guide.

Specifying SendMessage, DeleteMessage, or ChangeMessageVisibility for ActionName.n also grants permissions for the corresponding batch versions of those actions: SendMessageBatch, DeleteMessageBatch, and ChangeMessageVisibilityBatch.

" + "documentation":"

The action the client wants to allow for the specified principal. The following values are valid:

For more information about these actions, see Understanding Permissions in the Amazon Simple Queue Service Developer Guide.

Specifying SendMessage, DeleteMessage, or ChangeMessageVisibility for ActionName.n also grants permissions for the corresponding batch versions of those actions: SendMessageBatch, DeleteMessageBatch, and ChangeMessageVisibilityBatch.
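A short sketch of an AddPermission call that grants SendMessage (and therefore SendMessageBatch, per the note above) to another account; the queue URL, label, and account ID are placeholders.

import software.amazon.awssdk.services.sqs.SqsClient;
import software.amazon.awssdk.services.sqs.model.AddPermissionRequest;

public class AddPermissionExample {
    public static void main(String[] args) {
        String queueUrl = "https://sqs.us-east-1.amazonaws.com/123456789012/MyQueue"; // placeholder

        try (SqsClient sqs = SqsClient.create()) {
            sqs.addPermission(AddPermissionRequest.builder()
                    .queueUrl(queueUrl)
                    .label("CrossAccountSend")      // unique identifier for this permission statement
                    .awsAccountIds("111122223333")  // illustrative principal account
                    .actions("SendMessage")         // also grants SendMessageBatch
                    .build());
        }
    }
}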

" } }, "documentation":"

" @@ -501,7 +532,7 @@ }, "Attributes":{ "shape":"QueueAttributeMap", - "documentation":"

A map of attributes with their corresponding values.

The following lists the names, descriptions, and values of the special request parameters that the CreateQueue action uses:

The following attributes apply only to server-side-encryption:

The following attributes apply only to FIFO (first-in-first-out) queues:

Any other valid special request parameters (such as the following) are ignored:

", + "documentation":"

A map of attributes with their corresponding values.

The following lists the names, descriptions, and values of the special request parameters that the CreateQueue action uses:

The following attributes apply only to server-side-encryption:

The following attributes apply only to FIFO (first-in-first-out) queues:

Any other valid special request parameters (such as the following) are ignored:

", "locationName":"Attribute" } }, @@ -649,7 +680,7 @@ }, "AttributeNames":{ "shape":"AttributeNameList", - "documentation":"

A list of attributes for which to retrieve information.

In the future, new attributes might be added. If you write code that calls this action, we recommend that you structure your code so that it can handle new attributes gracefully.

The following attributes are supported:

The following attributes apply only to server-side-encryption:

The following attributes apply only to FIFO (first-in-first-out) queues:

" + "documentation":"

A list of attributes for which to retrieve information.

In the future, new attributes might be added. If you write code that calls this action, we recommend that you structure your code so that it can handle new attributes gracefully.

The following attributes are supported:

The following attributes apply only to server-side-encryption:

The following attributes apply only to FIFO (first-in-first-out) queues:

" } }, "documentation":"

" @@ -688,7 +719,7 @@ "documentation":"

The URL of the queue.

" } }, - "documentation":"

For more information, see Responses in the Amazon SQS Developer Guide.

" + "documentation":"

For more information, see Responses in the Amazon Simple Queue Service Developer Guide.

" }, "Integer":{"type":"integer"}, "InvalidAttributeName":{ @@ -730,7 +761,7 @@ "members":{ "QueueUrl":{ "shape":"String", - "documentation":"

The URL of a dead letter queue.

Queue URLs are case-sensitive.

" + "documentation":"

The URL of a dead-letter queue.

Queue URLs are case-sensitive.

" } }, "documentation":"

" @@ -741,11 +772,31 @@ "members":{ "queueUrls":{ "shape":"QueueUrlList", - "documentation":"

A list of source queue URLs that have the RedrivePolicy queue attribute configured with a dead letter queue.

" + "documentation":"

A list of source queue URLs that have the RedrivePolicy queue attribute configured with a dead-letter queue.

" } }, "documentation":"

A list of your dead letter source queues.

" }, + "ListQueueTagsRequest":{ + "type":"structure", + "required":["QueueUrl"], + "members":{ + "QueueUrl":{ + "shape":"String", + "documentation":"

The URL of the queue.

" + } + } + }, + "ListQueueTagsResult":{ + "type":"structure", + "members":{ + "Tags":{ + "shape":"TagMap", + "documentation":"

The list of all tags added to the specified queue.

", + "locationName":"Tag" + } + } + }, "ListQueuesRequest":{ "type":"structure", "members":{ @@ -796,7 +847,7 @@ }, "MessageAttributes":{ "shape":"MessageBodyAttributeMap", - "documentation":"

Each message attribute consists of a Name, Type, and Value. For more information, see Message Attribute Items and Validation in the Amazon SQS Developer Guide.

", + "documentation":"

Each message attribute consists of a Name, Type, and Value. For more information, see Message Attribute Items and Validation in the Amazon Simple Queue Service Developer Guide.

", "locationName":"MessageAttribute" } }, @@ -837,7 +888,7 @@ }, "DataType":{ "shape":"String", - "documentation":"

Amazon SQS supports the following logical data types: String, Number, and Binary. For the Number data type, you must use StringValue.

You can also append custom labels. For more information, see Message Attribute Data Types and Validation in the Amazon SQS Developer Guide.

" + "documentation":"

Amazon SQS supports the following logical data types: String, Number, and Binary. For the Number data type, you must use StringValue.

You can also append custom labels. For more information, see Message Attribute Data Types and Validation in the Amazon Simple Queue Service Developer Guide.

" } }, "documentation":"

The user-specified message attribute value. For string data types, the Value attribute has the same restrictions on the content as the message body. For more information, see SendMessage.

Name, type, value and the message body must not be empty or null. All parts of the message attribute, including Name, Type, and Value, are part of the message size restriction (256 KB or 262,144 bytes).

" @@ -1047,7 +1098,7 @@ }, "WaitTimeSeconds":{ "shape":"Integer", - "documentation":"

The duration (in seconds) for which the call waits for a message to arrive in the queue before returning. If a message is available, the call returns sooner than WaitTimeSeconds.

" + "documentation":"

The duration (in seconds) for which the call waits for a message to arrive in the queue before returning. If a message is available, the call returns sooner than WaitTimeSeconds. If no messages are available and the wait time expires, the call returns successfully with an empty list of messages.

" }, "ReceiveRequestAttemptId":{ "shape":"String", @@ -1123,12 +1174,12 @@ }, "MessageAttributes":{ "shape":"MessageBodyAttributeMap", - "documentation":"

Each message attribute consists of a Name, Type, and Value. For more information, see Message Attribute Items and Validation in the Amazon SQS Developer Guide.

", + "documentation":"

Each message attribute consists of a Name, Type, and Value. For more information, see Message Attribute Items and Validation in the Amazon Simple Queue Service Developer Guide.

", "locationName":"MessageAttribute" }, "MessageDeduplicationId":{ "shape":"String", - "documentation":"

This parameter applies only to FIFO (first-in-first-out) queues.

The token used for deduplication of messages within a 5-minute minimum deduplication interval. If a message with a particular MessageDeduplicationId is sent successfully, subsequent messages with the same MessageDeduplicationId are accepted successfully but aren't delivered. For more information, see Exactly-Once Processing in the Amazon SQS Developer Guide.

The MessageDeduplicationId is available to the recipient of the message (this can be useful for troubleshooting delivery issues).

If a message is sent successfully but the acknowledgement is lost and the message is resent with the same MessageDeduplicationId after the deduplication interval, Amazon SQS can't detect duplicate messages.

The length of MessageDeduplicationId is 128 characters. MessageDeduplicationId can contain alphanumeric characters (a-z, A-Z, 0-9) and punctuation (!\"#$%&'()*+,-./:;<=>?@[\\]^_`{|}~).

For best practices of using MessageDeduplicationId, see Using the MessageDeduplicationId Property in the Amazon Simple Queue Service Developer Guide.

" + "documentation":"

This parameter applies only to FIFO (first-in-first-out) queues.

The token used for deduplication of messages within a 5-minute minimum deduplication interval. If a message with a particular MessageDeduplicationId is sent successfully, subsequent messages with the same MessageDeduplicationId are accepted successfully but aren't delivered. For more information, see Exactly-Once Processing in the Amazon Simple Queue Service Developer Guide.

The MessageDeduplicationId is available to the recipient of the message (this can be useful for troubleshooting delivery issues).

If a message is sent successfully but the acknowledgement is lost and the message is resent with the same MessageDeduplicationId after the deduplication interval, Amazon SQS can't detect duplicate messages.

The length of MessageDeduplicationId is 128 characters. MessageDeduplicationId can contain alphanumeric characters (a-z, A-Z, 0-9) and punctuation (!\"#$%&'()*+,-./:;<=>?@[\\]^_`{|}~).

For best practices of using MessageDeduplicationId, see Using the MessageDeduplicationId Property in the Amazon Simple Queue Service Developer Guide.
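To make the deduplication behavior concrete, here is a hedged sketch of sending to a FIFO queue with an explicit MessageDeduplicationId; the queue URL, group ID, and message body are illustrative.

import software.amazon.awssdk.services.sqs.SqsClient;
import software.amazon.awssdk.services.sqs.model.SendMessageRequest;

public class FifoSendExample {
    public static void main(String[] args) {
        String fifoQueueUrl = "https://sqs.us-east-1.amazonaws.com/123456789012/orders.fifo"; // placeholder

        try (SqsClient sqs = SqsClient.create()) {
            sqs.sendMessage(SendMessageRequest.builder()
                    .queueUrl(fifoQueueUrl)
                    .messageBody("{\"orderId\": 42}")
                    .messageGroupId("orders")               // required for FIFO queues
                    // Retries of the same logical message reuse this ID; duplicates sent within
                    // the 5-minute deduplication interval are accepted but not delivered.
                    .messageDeduplicationId("order-42")
                    .build());
        }
    }
}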

" }, "MessageGroupId":{ "shape":"String", @@ -1223,12 +1274,12 @@ }, "MessageAttributes":{ "shape":"MessageBodyAttributeMap", - "documentation":"

Each message attribute consists of a Name, Type, and Value. For more information, see Message Attribute Items and Validation in the Amazon SQS Developer Guide.

", + "documentation":"

Each message attribute consists of a Name, Type, and Value. For more information, see Message Attribute Items and Validation in the Amazon Simple Queue Service Developer Guide.

", "locationName":"MessageAttribute" }, "MessageDeduplicationId":{ "shape":"String", - "documentation":"

This parameter applies only to FIFO (first-in-first-out) queues.

The token used for deduplication of sent messages. If a message with a particular MessageDeduplicationId is sent successfully, any messages sent with the same MessageDeduplicationId are accepted successfully but aren't delivered during the 5-minute deduplication interval. For more information, see Exactly-Once Processing in the Amazon SQS Developer Guide.

The MessageDeduplicationId is available to the recipient of the message (this can be useful for troubleshooting delivery issues).

If a message is sent successfully but the acknowledgement is lost and the message is resent with the same MessageDeduplicationId after the deduplication interval, Amazon SQS can't detect duplicate messages.

The length of MessageDeduplicationId is 128 characters. MessageDeduplicationId can contain alphanumeric characters (a-z, A-Z, 0-9) and punctuation (!\"#$%&'()*+,-./:;<=>?@[\\]^_`{|}~).

For best practices of using MessageDeduplicationId, see Using the MessageDeduplicationId Property in the Amazon Simple Queue Service Developer Guide.

" + "documentation":"

This parameter applies only to FIFO (first-in-first-out) queues.

The token used for deduplication of sent messages. If a message with a particular MessageDeduplicationId is sent successfully, any messages sent with the same MessageDeduplicationId are accepted successfully but aren't delivered during the 5-minute deduplication interval. For more information, see Exactly-Once Processing in the Amazon Simple Queue Service Developer Guide.

The MessageDeduplicationId is available to the recipient of the message (this can be useful for troubleshooting delivery issues).

If a message is sent successfully but the acknowledgement is lost and the message is resent with the same MessageDeduplicationId after the deduplication interval, Amazon SQS can't detect duplicate messages.

The length of MessageDeduplicationId is 128 characters. MessageDeduplicationId can contain alphanumeric characters (a-z, A-Z, 0-9) and punctuation (!\"#$%&'()*+,-./:;<=>?@[\\]^_`{|}~).

For best practices of using MessageDeduplicationId, see Using the MessageDeduplicationId Property in the Amazon Simple Queue Service Developer Guide.

" }, "MessageGroupId":{ "shape":"String", @@ -1250,7 +1301,7 @@ }, "MessageId":{ "shape":"String", - "documentation":"

An attribute containing the MessageId of the message sent to the queue. For more information, see Queue and Message Identifiers in the Amazon SQS Developer Guide.

" + "documentation":"

An attribute containing the MessageId of the message sent to the queue. For more information, see Queue and Message Identifiers in the Amazon Simple Queue Service Developer Guide.

" }, "SequenceNumber":{ "shape":"String", @@ -1272,7 +1323,7 @@ }, "Attributes":{ "shape":"QueueAttributeMap", - "documentation":"

A map of attributes to set.

The following lists the names, descriptions, and values of the special request parameters that the SetQueueAttributes action uses:

The following attributes apply only to server-side-encryption:

The following attribute applies only to FIFO (first-in-first-out) queues:

Any other valid special request parameters (such as the following) are ignored:

", + "documentation":"

A map of attributes to set.

The following lists the names, descriptions, and values of the special request parameters that the SetQueueAttributes action uses:

The following attributes apply only to server-side-encryption:

The following attribute applies only to FIFO (first-in-first-out) queues:

Any other valid special request parameters (such as the following) are ignored:

", "locationName":"Attribute" } }, @@ -1286,6 +1337,46 @@ "locationName":"StringListValue" } }, + "TagKey":{"type":"string"}, + "TagKeyList":{ + "type":"list", + "member":{ + "shape":"TagKey", + "locationName":"TagKey" + }, + "flattened":true + }, + "TagMap":{ + "type":"map", + "key":{ + "shape":"TagKey", + "locationName":"Key" + }, + "value":{ + "shape":"TagValue", + "locationName":"Value" + }, + "flattened":true, + "locationName":"Tag" + }, + "TagQueueRequest":{ + "type":"structure", + "required":[ + "QueueUrl", + "Tags" + ], + "members":{ + "QueueUrl":{ + "shape":"String", + "documentation":"

The URL of the queue.

" + }, + "Tags":{ + "shape":"TagMap", + "documentation":"

The list of tags to be added to the specified queue.

" + } + } + }, + "TagValue":{"type":"string"}, "TooManyEntriesInBatchRequest":{ "type":"structure", "members":{ @@ -1309,7 +1400,24 @@ "senderFault":true }, "exception":true + }, + "UntagQueueRequest":{ + "type":"structure", + "required":[ + "QueueUrl", + "TagKeys" + ], + "members":{ + "QueueUrl":{ + "shape":"String", + "documentation":"

The URL of the queue.

" + }, + "TagKeys":{ + "shape":"TagKeyList", + "documentation":"

The list of tags to be removed from the specified queue.

" + } + } } }, - "documentation":"

Welcome to the Amazon Simple Queue Service API Reference.

Amazon Simple Queue Service (Amazon SQS) is a reliable, highly-scalable hosted queue for storing messages as they travel between applications or microservices. Amazon SQS moves data between distributed application components and helps you decouple these components.

Standard queues are available in all regions. FIFO queues are available in US West (Oregon) and US East (Ohio).

You can use AWS SDKs to access Amazon SQS using your favorite programming language. The SDKs perform tasks such as the following automatically:

Additional Information

" + "documentation":"

Welcome to the Amazon Simple Queue Service API Reference.

Amazon Simple Queue Service (Amazon SQS) is a reliable, highly-scalable hosted queue for storing messages as they travel between applications or microservices. Amazon SQS moves data between distributed application components and helps you decouple these components.

Standard queues are available in all regions. FIFO queues are available in the US East (N. Virginia), US East (Ohio), US West (Oregon), and EU (Ireland) regions.

You can use AWS SDKs to access Amazon SQS using your favorite programming language. The SDKs perform tasks such as the following automatically:

Additional Information

" } diff --git a/services/ssm/src/main/resources/codegen-resources/service-2.json b/services/ssm/src/main/resources/codegen-resources/service-2.json index 5e2460814b31..4ff5576ce769 100644 --- a/services/ssm/src/main/resources/codegen-resources/service-2.json +++ b/services/ssm/src/main/resources/codegen-resources/service-2.json @@ -26,7 +26,7 @@ {"shape":"InternalServerError"}, {"shape":"TooManyTagsError"} ], - "documentation":"

Adds or overwrites one or more tags for the specified resource. Tags are metadata that you assign to your managed instances, Maintenance Windows, or Parameter Store parameters. Tags enable you to categorize your resources in different ways, for example, by purpose, owner, or environment. Each tag consists of a key and an optional value, both of which you define. For example, you could define a set of tags for your account's managed instances that helps you track each instance's owner and stack level. For example: Key=Owner and Value=DbAdmin, SysAdmin, or Dev. Or Key=Stack and Value=Production, Pre-Production, or Test.

Each resource can have a maximum of 10 tags.

We recommend that you devise a set of tag keys that meets your needs for each resource type. Using a consistent set of tag keys makes it easier for you to manage your resources. You can search and filter the resources based on the tags you add. Tags don't have any semantic meaning to Amazon EC2 and are interpreted strictly as a string of characters.

For more information about tags, see Tagging Your Amazon EC2 Resources in the Amazon EC2 User Guide.

" + "documentation":"

Adds or overwrites one or more tags for the specified resource. Tags are metadata that you can assign to your documents, managed instances, Maintenance Windows, Parameter Store parameters, and patch baselines. Tags enable you to categorize your resources in different ways, for example, by purpose, owner, or environment. Each tag consists of a key and an optional value, both of which you define. For example, you could define a set of tags for your account's managed instances that helps you track each instance's owner and stack level. For example: Key=Owner and Value=DbAdmin, SysAdmin, or Dev. Or Key=Stack and Value=Production, Pre-Production, or Test.

Each resource can have a maximum of 10 tags.

We recommend that you devise a set of tag keys that meets your needs for each resource type. Using a consistent set of tag keys makes it easier for you to manage your resources. You can search and filter the resources based on the tags you add. Tags don't have any semantic meaning to Amazon EC2 and are interpreted strictly as a string of characters.

For more information about tags, see Tagging Your Amazon EC2 Resources in the Amazon EC2 User Guide.
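A minimal sketch of tagging a Maintenance Window with the generated Java 2.x SSM client; the Maintenance Window ID format comes from the documentation below, and the tag keys/values are the illustrative ones mentioned above.

import software.amazon.awssdk.services.ssm.SsmClient;
import software.amazon.awssdk.services.ssm.model.AddTagsToResourceRequest;
import software.amazon.awssdk.services.ssm.model.ResourceTypeForTagging;
import software.amazon.awssdk.services.ssm.model.Tag;

public class TagMaintenanceWindowExample {
    public static void main(String[] args) {
        try (SsmClient ssm = SsmClient.create()) {
            ssm.addTagsToResource(AddTagsToResourceRequest.builder()
                    .resourceType(ResourceTypeForTagging.MAINTENANCE_WINDOW)
                    .resourceId("mw-01234361858c9b57b")   // ID-style resources use the resource ID
                    .tags(Tag.builder().key("Stack").value("Production").build(),
                          Tag.builder().key("Owner").value("SysAdmin").build())
                    .build());
        }
    }
}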

" }, "CancelCommand":{ "name":"CancelCommand", @@ -151,6 +151,22 @@ ], "documentation":"

Creates a patch baseline.

" }, + "CreateResourceDataSync":{ + "name":"CreateResourceDataSync", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"CreateResourceDataSyncRequest"}, + "output":{"shape":"CreateResourceDataSyncResult"}, + "errors":[ + {"shape":"InternalServerError"}, + {"shape":"ResourceDataSyncCountExceededException"}, + {"shape":"ResourceDataSyncAlreadyExistsException"}, + {"shape":"ResourceDataSyncInvalidConfigurationException"} + ], + "documentation":"

Creates a resource data sync configuration to a single bucket in Amazon S3. This is an asynchronous operation that returns immediately. After a successful initial sync is completed, the system continuously syncs data to the Amazon S3 bucket. To check the status of the sync, use the ListResourceDataSync operation.

By default, data is not encrypted in Amazon S3. We strongly recommend that you enable encryption in Amazon S3 to ensure secure data storage. We also recommend that you secure access to the Amazon S3 bucket by creating a restrictive bucket policy. To view an example of a restrictive Amazon S3 bucket policy for Resource Data Sync, see Configuring Resource Data Sync for Inventory.
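A hedged sketch of creating a resource data sync with the Java 2.x client. The sync name, bucket, prefix, and region are placeholders, the bucket is assumed to already exist with a policy that allows Systems Manager to write to it, and the JSON_SERDE enum constant is assumed from the model's JsonSerDe sync format.

import software.amazon.awssdk.services.ssm.SsmClient;
import software.amazon.awssdk.services.ssm.model.CreateResourceDataSyncRequest;
import software.amazon.awssdk.services.ssm.model.ResourceDataSyncS3Destination;
import software.amazon.awssdk.services.ssm.model.ResourceDataSyncS3Format;

public class ResourceDataSyncExample {
    public static void main(String[] args) {
        try (SsmClient ssm = SsmClient.create()) {
            ssm.createResourceDataSync(CreateResourceDataSyncRequest.builder()
                    .syncName("InventoryToS3")                     // illustrative configuration name
                    .s3Destination(ResourceDataSyncS3Destination.builder()
                            .bucketName("my-inventory-bucket")     // placeholder; needs a restrictive bucket policy
                            .prefix("inventory")                   // optional key prefix
                            .region("us-east-1")
                            .syncFormat(ResourceDataSyncS3Format.JSON_SERDE)
                            .build())
                    .build());
        }
    }
}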

" + }, "DeleteActivation":{ "name":"DeleteActivation", "http":{ @@ -162,7 +178,8 @@ "errors":[ {"shape":"InvalidActivationId"}, {"shape":"InvalidActivation"}, - {"shape":"InternalServerError"} + {"shape":"InternalServerError"}, + {"shape":"TooManyUpdates"} ], "documentation":"

Deletes an activation. You are not required to delete an activation. If you delete an activation, you can no longer use it to register additional managed instances. Deleting an activation does not de-register managed instances. You must manually de-register managed instances.

" }, @@ -237,7 +254,7 @@ "errors":[ {"shape":"InternalServerError"} ], - "documentation":"

Delete a list of parameters.

" + "documentation":"

Deletes a list of parameters. This API action is used when you delete parameters by using the Amazon EC2 console.


" }, "DeletePatchBaseline":{ "name":"DeletePatchBaseline", @@ -253,6 +270,20 @@ ], "documentation":"

Deletes a patch baseline.

" }, + "DeleteResourceDataSync":{ + "name":"DeleteResourceDataSync", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"DeleteResourceDataSyncRequest"}, + "output":{"shape":"DeleteResourceDataSyncResult"}, + "errors":[ + {"shape":"InternalServerError"}, + {"shape":"ResourceDataSyncNotFoundException"} + ], + "documentation":"

Deletes a Resource Data Sync configuration. After the configuration is deleted, changes to inventory data on managed instances are no longer synced with the target Amazon S3 bucket. Deleting a sync configuration does not delete data in the target Amazon S3 bucket.

" + }, "DeregisterManagedInstance":{ "name":"DeregisterManagedInstance", "http":{ @@ -291,7 +322,8 @@ "output":{"shape":"DeregisterTargetFromMaintenanceWindowResult"}, "errors":[ {"shape":"DoesNotExistException"}, - {"shape":"InternalServerError"} + {"shape":"InternalServerError"}, + {"shape":"TargetInUseException"} ], "documentation":"

Removes a target from a Maintenance Window.

" }, @@ -334,11 +366,12 @@ "output":{"shape":"DescribeAssociationResult"}, "errors":[ {"shape":"AssociationDoesNotExist"}, + {"shape":"InvalidAssociationVersion"}, {"shape":"InternalServerError"}, {"shape":"InvalidDocument"}, {"shape":"InvalidInstanceId"} ], - "documentation":"

Describes the associations for the specified Systems Manager document or instance.

" + "documentation":"

Describes the association for the specified target or instance. If you created the association by using the Targets parameter, then you must retrieve the association by using the association ID. If you created the association by specifying an instance ID and a Systems Manager document, then you retrieve the association by specifying the document name and the instance ID.

" }, "DescribeAutomationExecutions":{ "name":"DescribeAutomationExecutions", @@ -380,7 +413,7 @@ {"shape":"InvalidDocument"}, {"shape":"InvalidDocumentVersion"} ], - "documentation":"

Describes the specified SSM document.

" + "documentation":"

Describes the specified Systems Manager document.

" }, "DescribeDocumentPermission":{ "name":"DescribeDocumentPermission", @@ -423,9 +456,10 @@ "errors":[ {"shape":"InvalidResourceId"}, {"shape":"DoesNotExistException"}, + {"shape":"UnsupportedOperatingSystem"}, {"shape":"InternalServerError"} ], - "documentation":"

Retrieves the current effective patches (the patch and the approval state) for the specified patch baseline.

" + "documentation":"

Retrieves the current effective patches (the patch and the approval state) for the specified patch baseline. Note that this API applies only to Windows patch baselines.

" }, "DescribeInstanceAssociationsStatus":{ "name":"DescribeInstanceAssociationsStatus", @@ -543,7 +577,7 @@ "errors":[ {"shape":"InternalServerError"} ], - "documentation":"

Lists the executions of a Maintenance Window (meaning, information about when the Maintenance Window was scheduled to be active and information about tasks registered and run with the Maintenance Window).

" + "documentation":"

Lists the executions of a Maintenance Window. This includes information about when the Maintenance Window was scheduled to be active, and information about tasks registered and run with the Maintenance Window.

" }, "DescribeMaintenanceWindowTargets":{ "name":"DescribeMaintenanceWindowTargets", @@ -601,7 +635,7 @@ {"shape":"InvalidFilterValue"}, {"shape":"InvalidNextToken"} ], - "documentation":"

Get information about a parameter.

" + "documentation":"

Get information about a parameter.

Request results are returned on a best-effort basis. If you specify MaxResults in the request, the response includes information up to the limit specified. The number of items returned, however, can be between zero and the value of MaxResults. If the service reaches an internal limit while processing the results, it stops the operation and returns the matching values up to that point and a NextToken. You can specify the NextToken in a subsequent call to get the next set of results.

" }, "DescribePatchBaselines":{ "name":"DescribePatchBaselines", @@ -685,7 +719,7 @@ "errors":[ {"shape":"InternalServerError"} ], - "documentation":"

Retrieves the default patch baseline.

" + "documentation":"

Retrieves the default patch baseline. Note that Systems Manager supports creating multiple default patch baselines. For example, you can create a default patch baseline for each operating system.

" }, "GetDeployablePatchSnapshotForInstance":{ "name":"GetDeployablePatchSnapshotForInstance", @@ -696,9 +730,10 @@ "input":{"shape":"GetDeployablePatchSnapshotForInstanceRequest"}, "output":{"shape":"GetDeployablePatchSnapshotForInstanceResult"}, "errors":[ - {"shape":"InternalServerError"} + {"shape":"InternalServerError"}, + {"shape":"UnsupportedOperatingSystem"} ], - "documentation":"

Retrieves the current snapshot for the patch baseline the instance uses. This API is primarily used by the AWS-ApplyPatchBaseline Systems Manager document.

" + "documentation":"

Retrieves the current snapshot for the patch baseline the instance uses. This API is primarily used by the AWS-RunPatchBaseline Systems Manager document.

" }, "GetDocument":{ "name":"GetDocument", @@ -713,7 +748,7 @@ {"shape":"InvalidDocument"}, {"shape":"InvalidDocumentVersion"} ], - "documentation":"

Gets the contents of the specified SSM document.

" + "documentation":"

Gets the contents of the specified Systems Manager document.

" }, "GetInventory":{ "name":"GetInventory", @@ -789,6 +824,34 @@ ], "documentation":"

Retrieves the details about a specific task executed as part of a Maintenance Window execution.

" }, + "GetMaintenanceWindowExecutionTaskInvocation":{ + "name":"GetMaintenanceWindowExecutionTaskInvocation", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"GetMaintenanceWindowExecutionTaskInvocationRequest"}, + "output":{"shape":"GetMaintenanceWindowExecutionTaskInvocationResult"}, + "errors":[ + {"shape":"DoesNotExistException"}, + {"shape":"InternalServerError"} + ], + "documentation":"

Retrieves a task invocation. A task invocation is a specific task executing on a specific target. Maintenance Windows report status for all invocations.

" + }, + "GetMaintenanceWindowTask":{ + "name":"GetMaintenanceWindowTask", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"GetMaintenanceWindowTaskRequest"}, + "output":{"shape":"GetMaintenanceWindowTaskResult"}, + "errors":[ + {"shape":"DoesNotExistException"}, + {"shape":"InternalServerError"} + ], + "documentation":"

Retrieves the details of a task registered with a Maintenance Window.

" + }, "GetParameter":{ "name":"GetParameter", "http":{ @@ -800,7 +863,8 @@ "errors":[ {"shape":"InternalServerError"}, {"shape":"InvalidKeyId"}, - {"shape":"ParameterNotFound"} + {"shape":"ParameterNotFound"}, + {"shape":"ParameterVersionNotFound"} ], "documentation":"

Get information about a parameter by using the parameter name.

" }, @@ -850,7 +914,7 @@ {"shape":"InvalidKeyId"}, {"shape":"InvalidNextToken"} ], - "documentation":"

Retrieve parameters in a specific hierarchy. For more information, see Using Parameter Hierarchies.

" + "documentation":"

Retrieve parameters in a specific hierarchy. For more information, see Working with Systems Manager Parameters.

Request results are returned on a best-effort basis. If you specify MaxResults in the request, the response includes information up to the limit specified. The number of items returned, however, can be between zero and the value of MaxResults. If the service reaches an internal limit while processing the results, it stops the operation and returns the matching values up to that point and a NextToken. You can specify the NextToken in a subsequent call to get the next set of results.
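Because results are returned on a best-effort basis, callers should page until NextToken is no longer returned rather than assuming one call returns everything. A minimal sketch of that loop with the Java 2.x client follows; the parameter path and page size are placeholders.

import software.amazon.awssdk.services.ssm.SsmClient;
import software.amazon.awssdk.services.ssm.model.GetParametersByPathRequest;
import software.amazon.awssdk.services.ssm.model.GetParametersByPathResponse;

public class ParameterHierarchyExample {
    public static void main(String[] args) {
        try (SsmClient ssm = SsmClient.create()) {
            String nextToken = null;
            do {
                GetParametersByPathResponse page = ssm.getParametersByPath(
                        GetParametersByPathRequest.builder()
                                .path("/prod/db")          // illustrative hierarchy
                                .recursive(true)
                                .withDecryption(true)
                                .maxResults(10)
                                .nextToken(nextToken)      // null on the first call
                                .build());

                page.parameters().forEach(p -> System.out.println(p.name() + " = " + p.value()));

                // A NextToken can be returned even when fewer than MaxResults items came back.
                nextToken = page.nextToken();
            } while (nextToken != null);
        }
    }
}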

" }, "GetPatchBaseline":{ "name":"GetPatchBaseline", @@ -880,6 +944,21 @@ ], "documentation":"

Retrieves the patch baseline that should be used for the specified patch group.

" }, + "ListAssociationVersions":{ + "name":"ListAssociationVersions", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"ListAssociationVersionsRequest"}, + "output":{"shape":"ListAssociationVersionsResult"}, + "errors":[ + {"shape":"InternalServerError"}, + {"shape":"InvalidNextToken"}, + {"shape":"AssociationDoesNotExist"} + ], + "documentation":"

Retrieves all versions of an association for a specific association ID.

" + }, "ListAssociations":{ "name":"ListAssociations", "http":{ @@ -928,6 +1007,38 @@ ], "documentation":"

Lists the commands requested by users of the AWS account.

" }, + "ListComplianceItems":{ + "name":"ListComplianceItems", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"ListComplianceItemsRequest"}, + "output":{"shape":"ListComplianceItemsResult"}, + "errors":[ + {"shape":"InvalidResourceType"}, + {"shape":"InvalidResourceId"}, + {"shape":"InternalServerError"}, + {"shape":"InvalidFilter"}, + {"shape":"InvalidNextToken"} + ], + "documentation":"

For a specified resource ID, this API action returns a list of compliance statuses for different resource types. Currently, you can only specify one resource ID per call. List results depend on the criteria specified in the filter.
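A hedged sketch of listing compliance items for a single managed instance, filtered with the ComplianceStringFilter shape defined later in this model; the instance ID, filter key, and filter value are illustrative.

import software.amazon.awssdk.services.ssm.SsmClient;
import software.amazon.awssdk.services.ssm.model.ComplianceQueryOperatorType;
import software.amazon.awssdk.services.ssm.model.ComplianceStringFilter;
import software.amazon.awssdk.services.ssm.model.ListComplianceItemsRequest;

public class ListComplianceExample {
    public static void main(String[] args) {
        try (SsmClient ssm = SsmClient.create()) {
            ssm.listComplianceItems(ListComplianceItemsRequest.builder()
                    .resourceIds("i-0123456789abcdef0")   // one resource ID per call, per the note above
                    .resourceTypes("ManagedInstance")
                    .filters(ComplianceStringFilter.builder()
                            .key("Status")                // illustrative filter key
                            .type(ComplianceQueryOperatorType.EQUAL)
                            .values("NON_COMPLIANT")
                            .build())
                    .build())
               .complianceItems()
               .forEach(item -> System.out.println(item.id() + " " + item.status()));
        }
    }
}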

" + }, + "ListComplianceSummaries":{ + "name":"ListComplianceSummaries", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"ListComplianceSummariesRequest"}, + "output":{"shape":"ListComplianceSummariesResult"}, + "errors":[ + {"shape":"InvalidFilter"}, + {"shape":"InvalidNextToken"}, + {"shape":"InternalServerError"} + ], + "documentation":"

Returns a summary count of compliant and non-compliant resources for a compliance type. For example, this call can return State Manager associations, patches, or custom compliance types according to the filter criteria that you specify.

" + }, "ListDocumentVersions":{ "name":"ListDocumentVersions", "http":{ @@ -956,7 +1067,7 @@ {"shape":"InvalidNextToken"}, {"shape":"InvalidFilterKey"} ], - "documentation":"

Describes one or more of your SSM documents.

" + "documentation":"

Describes one or more of your Systems Manager documents.

" }, "ListInventoryEntries":{ "name":"ListInventoryEntries", @@ -975,6 +1086,35 @@ ], "documentation":"

A list of inventory items returned by the request.

" }, + "ListResourceComplianceSummaries":{ + "name":"ListResourceComplianceSummaries", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"ListResourceComplianceSummariesRequest"}, + "output":{"shape":"ListResourceComplianceSummariesResult"}, + "errors":[ + {"shape":"InvalidFilter"}, + {"shape":"InvalidNextToken"}, + {"shape":"InternalServerError"} + ], + "documentation":"

Returns a resource-level summary count. The summary includes information about compliant and non-compliant statuses and detailed compliance-item severity counts, according to the filter criteria you specify.

" + }, + "ListResourceDataSync":{ + "name":"ListResourceDataSync", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"ListResourceDataSyncRequest"}, + "output":{"shape":"ListResourceDataSyncResult"}, + "errors":[ + {"shape":"InternalServerError"}, + {"shape":"InvalidNextToken"} + ], + "documentation":"

Lists your resource data sync configurations. Includes information about the last time a sync attempted to start, the last sync status, and the last time a sync successfully completed.

The number of sync configurations might be too large to return using a single call to ListResourceDataSync. You can limit the number of sync configurations returned by using the MaxResults parameter. To determine whether there are more sync configurations to list, check the value of NextToken in the output. If there are more sync configurations to list, you can request them by passing the NextToken value from the previous response in the NextToken parameter of a subsequent call.

" + }, "ListTagsForResource":{ "name":"ListTagsForResource", "http":{ @@ -1007,6 +1147,25 @@ ], "documentation":"

Shares a Systems Manager document publicly or privately. If you share a document privately, you must specify the AWS user account IDs for those people who can use the document. If you share a document publicly, you must specify All as the account ID.

" }, + "PutComplianceItems":{ + "name":"PutComplianceItems", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"PutComplianceItemsRequest"}, + "output":{"shape":"PutComplianceItemsResult"}, + "errors":[ + {"shape":"InternalServerError"}, + {"shape":"InvalidItemContentException"}, + {"shape":"TotalSizeLimitExceededException"}, + {"shape":"ItemSizeLimitExceededException"}, + {"shape":"ComplianceTypeCountLimitExceededException"}, + {"shape":"InvalidResourceType"}, + {"shape":"InvalidResourceId"} + ], + "documentation":"

Registers a compliance type and other compliance details on a designated resource. This action lets you register custom compliance details with a resource. This call overwrites existing compliance information on the resource, so you must provide a full list of compliance items each time that you send the request.
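The following sketch registers a custom compliance type on a managed instance using the request and item shapes defined later in this model; the instance ID, compliance type name, and item fields are illustrative. Note that the full item list is sent on every call, since the call overwrites what is stored.

import java.time.Instant;

import software.amazon.awssdk.services.ssm.SsmClient;
import software.amazon.awssdk.services.ssm.model.ComplianceExecutionSummary;
import software.amazon.awssdk.services.ssm.model.ComplianceItemEntry;
import software.amazon.awssdk.services.ssm.model.ComplianceSeverity;
import software.amazon.awssdk.services.ssm.model.ComplianceStatus;
import software.amazon.awssdk.services.ssm.model.PutComplianceItemsRequest;

public class PutComplianceExample {
    public static void main(String[] args) {
        try (SsmClient ssm = SsmClient.create()) {
            ssm.putComplianceItems(PutComplianceItemsRequest.builder()
                    .resourceId("i-0123456789abcdef0")     // illustrative managed instance ID
                    .resourceType("ManagedInstance")
                    .complianceType("Custom:Backup")       // custom compliance types use the Custom: prefix
                    .executionSummary(ComplianceExecutionSummary.builder()
                            .executionTime(Instant.now())
                            .build())
                    // Replaces any compliance items previously stored for this type on the resource.
                    .items(ComplianceItemEntry.builder()
                            .id("nightly-backup")
                            .title("Nightly backup completed")
                            .severity(ComplianceSeverity.MEDIUM)
                            .status(ComplianceStatus.COMPLIANT)
                            .build())
                    .build());
        }
    }
}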

" + }, "PutInventory":{ "name":"PutInventory", "http":{ @@ -1024,7 +1183,10 @@ {"shape":"ItemSizeLimitExceededException"}, {"shape":"ItemContentMismatchException"}, {"shape":"CustomSchemaCountLimitExceededException"}, - {"shape":"UnsupportedInventorySchemaVersionException"} + {"shape":"UnsupportedInventorySchemaVersionException"}, + {"shape":"UnsupportedInventoryItemContextException"}, + {"shape":"InvalidInventoryItemContextException"}, + {"shape":"SubTypeCountLimitExceededException"} ], "documentation":"

Bulk update custom inventory items on one or more instances. The request adds an inventory item, if it doesn't already exist, or updates an inventory item, if it does exist.

" }, @@ -1045,6 +1207,7 @@ {"shape":"HierarchyLevelLimitExceededException"}, {"shape":"HierarchyTypeMismatchException"}, {"shape":"InvalidAllowedPatternException"}, + {"shape":"ParameterMaxVersionLimitExceeded"}, {"shape":"ParameterPatternMismatchException"}, {"shape":"UnsupportedParameterType"} ], @@ -1110,6 +1273,7 @@ {"shape":"IdempotentParameterMismatch"}, {"shape":"DoesNotExistException"}, {"shape":"ResourceLimitExceededException"}, + {"shape":"FeatureNotAvailableException"}, {"shape":"InternalServerError"} ], "documentation":"

Adds a new task to a Maintenance Window.

" @@ -1129,6 +1293,21 @@ ], "documentation":"

Removes all tags from the specified resource.

" }, + "SendAutomationSignal":{ + "name":"SendAutomationSignal", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"SendAutomationSignalRequest"}, + "output":{"shape":"SendAutomationSignalResult"}, + "errors":[ + {"shape":"AutomationExecutionNotFoundException"}, + {"shape":"InvalidAutomationSignalException"}, + {"shape":"InternalServerError"} + ], + "documentation":"

Sends a signal to an Automation execution to change the current behavior or status of the execution.

" + }, "SendCommand":{ "name":"SendCommand", "http":{ @@ -1164,6 +1343,7 @@ {"shape":"InvalidAutomationExecutionParametersException"}, {"shape":"AutomationExecutionLimitExceededException"}, {"shape":"AutomationDefinitionVersionNotFoundException"}, + {"shape":"IdempotentParameterMismatch"}, {"shape":"InternalServerError"} ], "documentation":"

Initiates execution of an Automation document.

" @@ -1200,9 +1380,11 @@ {"shape":"InvalidUpdate"}, {"shape":"TooManyUpdates"}, {"shape":"InvalidDocument"}, - {"shape":"InvalidTarget"} + {"shape":"InvalidTarget"}, + {"shape":"InvalidAssociationVersion"}, + {"shape":"AssociationVersionLimitExceeded"} ], - "documentation":"

Updates an association. You can only update the document version, schedule, parameters, and Amazon S3 output of an association.

" + "documentation":"

Updates an association. You can update the association name and version, the document version, schedule, parameters, and Amazon S3 output.

" }, "UpdateAssociationStatus":{ "name":"UpdateAssociationStatus", @@ -1272,6 +1454,34 @@ ], "documentation":"

Updates an existing Maintenance Window. Only specified parameters are modified.

" }, + "UpdateMaintenanceWindowTarget":{ + "name":"UpdateMaintenanceWindowTarget", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"UpdateMaintenanceWindowTargetRequest"}, + "output":{"shape":"UpdateMaintenanceWindowTargetResult"}, + "errors":[ + {"shape":"DoesNotExistException"}, + {"shape":"InternalServerError"} + ], + "documentation":"

Modifies the target of an existing Maintenance Window. You can't change the target type, but you can change the following:

The target from being an ID target to a Tag target, or a Tag target to an ID target.

IDs for an ID target.

Tags for a Tag target.

Owner.

Name.

Description.

If a parameter is null, then the corresponding field is not modified.

" + }, + "UpdateMaintenanceWindowTask":{ + "name":"UpdateMaintenanceWindowTask", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"UpdateMaintenanceWindowTaskRequest"}, + "output":{"shape":"UpdateMaintenanceWindowTaskResult"}, + "errors":[ + {"shape":"DoesNotExistException"}, + {"shape":"InternalServerError"} + ], + "documentation":"

Modifies a task assigned to a Maintenance Window. You can't change the task type, but you can change the following values:

Task ARN. For example, you can change a RUN_COMMAND task from AWS-RunPowerShellScript to AWS-RunShellScript.

Service role ARN.

Task parameters.

Task priority.

Task MaxConcurrency and MaxErrors.

Log location.

If a parameter is null, then the corresponding field is not modified. Also, if you set Replace to true, then all fields required by the RegisterTaskWithMaintenanceWindow action are required for this request. Optional fields that aren't specified are set to null.

" + }, "UpdateManagedInstanceRole":{ "name":"UpdateManagedInstanceRole", "http":{ @@ -1308,10 +1518,7 @@ }, "AccountIdList":{ "type":"list", - "member":{ - "shape":"AccountId", - "locationName":"AccountId" - }, + "member":{"shape":"AccountId"}, "max":20 }, "Activation":{ @@ -1388,7 +1595,7 @@ }, "ResourceId":{ "shape":"ResourceId", - "documentation":"

The resource ID you want to tag.

" + "documentation":"

The resource ID you want to tag.

For the ManagedInstance, MaintenanceWindow, and PatchBaseline values, use the ID of the resource, such as mw-01234361858c9b57b for a Maintenance Window.

For the Document and Parameter values, use the name of the resource.

" }, "Tags":{ "shape":"TagList", @@ -1405,6 +1612,7 @@ "type":"string", "max":10 }, + "AggregatorSchemaOnly":{"type":"boolean"}, "AllowedPattern":{ "type":"string", "max":1024, @@ -1435,7 +1643,7 @@ "members":{ "Name":{ "shape":"DocumentName", - "documentation":"

The name of the SSM document.

" + "documentation":"

The name of the Systems Manager document.

" }, "InstanceId":{ "shape":"InstanceId", @@ -1445,6 +1653,10 @@ "shape":"AssociationId", "documentation":"

The ID created by the system when you create an association. An association is a binding between a document and a set of targets with a schedule.

" }, + "AssociationVersion":{ + "shape":"AssociationVersion", + "documentation":"

The association version.

" + }, "DocumentVersion":{ "shape":"DocumentVersion", "documentation":"

The version of the document used in the association.

" @@ -1464,6 +1676,10 @@ "ScheduleExpression":{ "shape":"ScheduleExpression", "documentation":"

A cron expression that specifies a schedule when the association runs.

" + }, + "AssociationName":{ + "shape":"AssociationName", + "documentation":"

The association name.

" } }, "documentation":"

Describes an association of a Systems Manager document and an instance.

" @@ -1480,12 +1696,16 @@ "members":{ "Name":{ "shape":"DocumentName", - "documentation":"

The name of the SSM document.

" + "documentation":"

The name of the Systems Manager document.

" }, "InstanceId":{ "shape":"InstanceId", "documentation":"

The ID of the instance.

" }, + "AssociationVersion":{ + "shape":"AssociationVersion", + "documentation":"

The association version.

" + }, "Date":{ "shape":"DateTime", "documentation":"

The date when the association was made.

" @@ -1533,16 +1753,17 @@ "LastSuccessfulExecutionDate":{ "shape":"DateTime", "documentation":"

The last date on which the association was successfully run.

" + }, + "AssociationName":{ + "shape":"AssociationName", + "documentation":"

The association name.

" } }, "documentation":"

Describes the parameters for a document.

" }, "AssociationDescriptionList":{ "type":"list", - "member":{ - "shape":"AssociationDescription", - "locationName":"AssociationDescription" - } + "member":{"shape":"AssociationDescription"} }, "AssociationDoesNotExist":{ "type":"structure", @@ -1578,15 +1799,13 @@ "AssociationId", "AssociationStatusName", "LastExecutedBefore", - "LastExecutedAfter" + "LastExecutedAfter", + "AssociationName" ] }, "AssociationFilterList":{ "type":"list", - "member":{ - "shape":"AssociationFilter", - "locationName":"AssociationFilter" - }, + "member":{"shape":"AssociationFilter"}, "min":1 }, "AssociationFilterValue":{ @@ -1606,10 +1825,11 @@ }, "AssociationList":{ "type":"list", - "member":{ - "shape":"Association", - "locationName":"Association" - } + "member":{"shape":"Association"} + }, + "AssociationName":{ + "type":"string", + "pattern":"^[a-zA-Z0-9_\\-.]{3,128}$" }, "AssociationOverview":{ "type":"structure", @@ -1669,6 +1889,69 @@ "Failed" ] }, + "AssociationVersion":{ + "type":"string", + "pattern":"([$]LATEST)|([1-9][0-9]*)" + }, + "AssociationVersionInfo":{ + "type":"structure", + "members":{ + "AssociationId":{ + "shape":"AssociationId", + "documentation":"

The ID created by the system when the association was created.

" + }, + "AssociationVersion":{ + "shape":"AssociationVersion", + "documentation":"

The association version.

" + }, + "CreatedDate":{ + "shape":"DateTime", + "documentation":"

The date the association version was created.

" + }, + "Name":{ + "shape":"DocumentName", + "documentation":"

The name specified when the association was created.

" + }, + "DocumentVersion":{ + "shape":"DocumentVersion", + "documentation":"

The version of a Systems Manager document used when the association version was created.

" + }, + "Parameters":{ + "shape":"Parameters", + "documentation":"

Parameters specified when the association version was created.

" + }, + "Targets":{ + "shape":"Targets", + "documentation":"

The targets specified for the association when the association version was created.

" + }, + "ScheduleExpression":{ + "shape":"ScheduleExpression", + "documentation":"

The cron or rate schedule specified for the association when the association version was created.

" + }, + "OutputLocation":{ + "shape":"InstanceAssociationOutputLocation", + "documentation":"

The location in Amazon S3 specified for the association when the association version was created.

" + }, + "AssociationName":{ + "shape":"AssociationName", + "documentation":"

The name specified for the association version when the association version was created.

" + } + }, + "documentation":"

Information about the association version.

" + }, + "AssociationVersionLimitExceeded":{ + "type":"structure", + "members":{ + "Message":{"shape":"String"} + }, + "documentation":"

You have reached the maximum number of versions allowed for an association. Each association has a limit of 1,000 versions.

", + "exception":true + }, + "AssociationVersionList":{ + "type":"list", + "member":{"shape":"AssociationVersionInfo"}, + "min":1 + }, "AttributeName":{ "type":"string", "max":64, @@ -1676,7 +1959,7 @@ }, "AttributeValue":{ "type":"string", - "max":1024, + "max":4096, "min":0 }, "AutomationActionName":{ @@ -1861,6 +2144,7 @@ "enum":[ "Pending", "InProgress", + "Waiting", "Success", "TimedOut", "Cancelled", @@ -2241,67 +2525,324 @@ "max":100 }, "CompletedCount":{"type":"integer"}, - "ComputerName":{ + "ComplianceExecutionId":{ "type":"string", - "max":255, - "min":1 + "max":100 }, - "CreateActivationRequest":{ + "ComplianceExecutionSummary":{ "type":"structure", - "required":["IamRole"], + "required":["ExecutionTime"], "members":{ - "Description":{ - "shape":"ActivationDescription", - "documentation":"

A userdefined description of the resource that you want to register with Amazon EC2.

" - }, - "DefaultInstanceName":{ - "shape":"DefaultInstanceName", - "documentation":"

The name of the registered, managed instance as it will appear in the Amazon EC2 console or when you use the AWS command line tools to list EC2 resources.

" - }, - "IamRole":{ - "shape":"IamRole", - "documentation":"

The Amazon Identity and Access Management (IAM) role that you want to assign to the managed instance.

" + "ExecutionTime":{ + "shape":"DateTime", + "documentation":"

The time the execution ran as a datetime object that is saved in the following format: yyyy-MM-dd'T'HH:mm:ss'Z'.

" }, - "RegistrationLimit":{ - "shape":"RegistrationLimit", - "documentation":"

Specify the maximum number of managed instances you want to register. The default value is 1 instance.

", - "box":true + "ExecutionId":{ + "shape":"ComplianceExecutionId", + "documentation":"

An ID created by the system when PutComplianceItems was called. For example, CommandID is a valid execution ID. You can use this ID in subsequent calls.

" }, - "ExpirationDate":{ - "shape":"ExpirationDate", - "documentation":"

The date by which this activation request should expire. The default value is 24 hours.

" + "ExecutionType":{ + "shape":"ComplianceExecutionType", + "documentation":"

The type of execution. For example, Command is a valid execution type.

" } - } + }, + "documentation":"

A summary of the call execution that includes an execution ID, the type of execution (for example, Command), and the date/time of the execution using a datetime object that is saved in the following format: yyyy-MM-dd'T'HH:mm:ss'Z'.

" }, - "CreateActivationResult":{ + "ComplianceExecutionType":{ + "type":"string", + "max":50 + }, + "ComplianceFilterValue":{"type":"string"}, + "ComplianceItem":{ "type":"structure", "members":{ - "ActivationId":{ - "shape":"ActivationId", - "documentation":"

The ID number generated by the system when it processed the activation. The activation ID functions like a user name.

" + "ComplianceType":{ + "shape":"ComplianceTypeName", + "documentation":"

The compliance type. For example, Association (for a State Manager association), Patch, or Custom:string are all valid compliance types.

" }, - "ActivationCode":{ - "shape":"ActivationCode", - "documentation":"

The code the system generates when it processes the activation. The activation code functions like a password to validate the activation ID.

" + "ResourceType":{ + "shape":"ComplianceResourceType", + "documentation":"

The type of resource. ManagedInstance is currently the only supported resource type.

" + }, + "ResourceId":{ + "shape":"ComplianceResourceId", + "documentation":"

An ID for the resource. For a managed instance, this is the instance ID.

" + }, + "Id":{ + "shape":"ComplianceItemId", + "documentation":"

An ID for the compliance item. For example, if the compliance item is a Windows patch, the ID could be the number of the KB article. Here's an example: KB4010320.

" + }, + "Title":{ + "shape":"ComplianceItemTitle", + "documentation":"

A title for the compliance item. For example, if the compliance item is a Windows patch, the title could be the title of the KB article for the patch. Here's an example: Security Update for Active Directory Federation Services.

" + }, + "Status":{ + "shape":"ComplianceStatus", + "documentation":"

The status of the compliance item. An item is either COMPLIANT or NON_COMPLIANT.

" + }, + "Severity":{ + "shape":"ComplianceSeverity", + "documentation":"

The severity of the compliance status. Severity can be one of the following: Critical, High, Medium, Low, Informational, Unspecified.

" + }, + "ExecutionSummary":{ + "shape":"ComplianceExecutionSummary", + "documentation":"

A summary for the compliance item. The summary includes an execution ID, the execution type (for example, command), and the execution time.

" + }, + "Details":{ + "shape":"ComplianceItemDetails", + "documentation":"

A \"Key\": \"Value\" tag combination for the compliance item.

" } - } + }, + "documentation":"

Information about the compliance as defined by the resource type. For example, for a patch resource type, Items includes information about the PatchSeverity, Classification, etc.

" }, - "CreateAssociationBatchRequest":{ - "type":"structure", - "required":["Entries"], - "members":{ - "Entries":{ - "shape":"CreateAssociationBatchRequestEntries", - "documentation":"

One or more associations.

" - } - } + "ComplianceItemContentHash":{ + "type":"string", + "max":256 }, - "CreateAssociationBatchRequestEntries":{ + "ComplianceItemDetails":{ + "type":"map", + "key":{"shape":"AttributeName"}, + "value":{"shape":"AttributeValue"} + }, + "ComplianceItemEntry":{ + "type":"structure", + "required":[ + "Severity", + "Status" + ], + "members":{ + "Id":{ + "shape":"ComplianceItemId", + "documentation":"

The compliance item ID. For example, if the compliance item is a Windows patch, the ID could be the number of the KB article.

" + }, + "Title":{ + "shape":"ComplianceItemTitle", + "documentation":"

The title of the compliance item. For example, if the compliance item is a Windows patch, the title could be the title of the KB article for the patch. Here's an example: Security Update for Active Directory Federation Services.

" + }, + "Severity":{ + "shape":"ComplianceSeverity", + "documentation":"

The severity of the compliance status. Severity can be one of the following: Critical, High, Medium, Low, Informational, Unspecified.

" + }, + "Status":{ + "shape":"ComplianceStatus", + "documentation":"

The status of the compliance item. An item is either COMPLIANT or NON_COMPLIANT.

" + }, + "Details":{ + "shape":"ComplianceItemDetails", + "documentation":"

A \"Key\": \"Value\" tag combination for the compliance item.

" + } + }, + "documentation":"

Information about a compliance item.

" + }, + "ComplianceItemEntryList":{ + "type":"list", + "member":{"shape":"ComplianceItemEntry"}, + "max":10000, + "min":0 + }, + "ComplianceItemId":{ + "type":"string", + "max":100, + "min":1 + }, + "ComplianceItemList":{ + "type":"list", + "member":{"shape":"ComplianceItem"} + }, + "ComplianceItemTitle":{ + "type":"string", + "max":500 + }, + "ComplianceQueryOperatorType":{ + "type":"string", + "enum":[ + "EQUAL", + "NOT_EQUAL", + "BEGIN_WITH", + "LESS_THAN", + "GREATER_THAN" + ] + }, + "ComplianceResourceId":{ + "type":"string", + "max":100, + "min":1 + }, + "ComplianceResourceIdList":{ + "type":"list", + "member":{"shape":"ComplianceResourceId"}, + "min":1 + }, + "ComplianceResourceType":{ + "type":"string", + "max":50, + "min":1 + }, + "ComplianceResourceTypeList":{ + "type":"list", + "member":{"shape":"ComplianceResourceType"}, + "min":1 + }, + "ComplianceSeverity":{ + "type":"string", + "enum":[ + "CRITICAL", + "HIGH", + "MEDIUM", + "LOW", + "INFORMATIONAL", + "UNSPECIFIED" + ] + }, + "ComplianceStatus":{ + "type":"string", + "enum":[ + "COMPLIANT", + "NON_COMPLIANT" + ] + }, + "ComplianceStringFilter":{ + "type":"structure", + "members":{ + "Key":{ + "shape":"ComplianceStringFilterKey", + "documentation":"

The name of the filter.

" + }, + "Values":{ + "shape":"ComplianceStringFilterValueList", + "documentation":"

The value for which to search.

" + }, + "Type":{ + "shape":"ComplianceQueryOperatorType", + "documentation":"

The type of comparison that should be performed for the value: Equal, NotEqual, BeginWith, LessThan, or GreaterThan.

" + } + }, + "documentation":"

One or more filters. Use a filter to return a more specific list of results.

" + }, + "ComplianceStringFilterKey":{ + "type":"string", + "max":200, + "min":1 + }, + "ComplianceStringFilterList":{ + "type":"list", + "member":{"shape":"ComplianceStringFilter"} + }, + "ComplianceStringFilterValueList":{ + "type":"list", + "member":{"shape":"ComplianceFilterValue"}, + "max":20, + "min":1 + }, + "ComplianceSummaryCount":{"type":"integer"}, + "ComplianceSummaryItem":{ + "type":"structure", + "members":{ + "ComplianceType":{ + "shape":"ComplianceTypeName", + "documentation":"

The type of compliance item. For example, the compliance type can be Association, Patch, or Custom:string.

" + }, + "CompliantSummary":{ + "shape":"CompliantSummary", + "documentation":"

A list of COMPLIANT items for the specified compliance type.

" + }, + "NonCompliantSummary":{ + "shape":"NonCompliantSummary", + "documentation":"

A list of NON_COMPLIANT items for the specified compliance type.

" + } + }, + "documentation":"

A summary of compliance information by compliance type.

" + }, + "ComplianceSummaryItemList":{ "type":"list", - "member":{ - "shape":"CreateAssociationBatchRequestEntry", - "locationName":"entries" + "member":{"shape":"ComplianceSummaryItem"} + }, + "ComplianceTypeCountLimitExceededException":{ + "type":"structure", + "members":{ + "Message":{"shape":"String"} + }, + "documentation":"

You specified too many custom compliance types. You can specify a maximum of 10 different types.

", + "exception":true + }, + "ComplianceTypeName":{ + "type":"string", + "max":100, + "min":1, + "pattern":"[A-Za-z0-9_\\-]\\w+|Custom:[a-zA-Z0-9_\\-]\\w+" + }, + "CompliantSummary":{ + "type":"structure", + "members":{ + "CompliantCount":{ + "shape":"ComplianceSummaryCount", + "documentation":"

The total number of resources that are compliant.

" + }, + "SeveritySummary":{ + "shape":"SeveritySummary", + "documentation":"

A summary of the compliance severity by compliance type.

" + } }, + "documentation":"

A summary of resources that are compliant. The summary is organized according to the resource count for each compliance type.

" + }, + "ComputerName":{ + "type":"string", + "max":255, + "min":1 + }, + "CreateActivationRequest":{ + "type":"structure", + "required":["IamRole"], + "members":{ + "Description":{ + "shape":"ActivationDescription", + "documentation":"

A user-defined description of the resource that you want to register with Amazon EC2.

" + }, + "DefaultInstanceName":{ + "shape":"DefaultInstanceName", + "documentation":"

The name of the registered, managed instance as it will appear in the Amazon EC2 console or when you use the AWS command line tools to list EC2 resources.

" + }, + "IamRole":{ + "shape":"IamRole", + "documentation":"

The AWS Identity and Access Management (IAM) role that you want to assign to the managed instance.

" + }, + "RegistrationLimit":{ + "shape":"RegistrationLimit", + "documentation":"

Specify the maximum number of managed instances you want to register. The default value is 1 instance.

", + "box":true + }, + "ExpirationDate":{ + "shape":"ExpirationDate", + "documentation":"

The date by which this activation request should expire. The default value is 24 hours.

" + } + } + }, + "CreateActivationResult":{ + "type":"structure", + "members":{ + "ActivationId":{ + "shape":"ActivationId", + "documentation":"

The ID number generated by the system when it processed the activation. The activation ID functions like a user name.

" + }, + "ActivationCode":{ + "shape":"ActivationCode", + "documentation":"

The code the system generates when it processes the activation. The activation code functions like a password to validate the activation ID.

" + } + } + }, + "CreateAssociationBatchRequest":{ + "type":"structure", + "required":["Entries"], + "members":{ + "Entries":{ + "shape":"CreateAssociationBatchRequestEntries", + "documentation":"

One or more associations.

" + } + } + }, + "CreateAssociationBatchRequestEntries":{ + "type":"list", + "member":{"shape":"CreateAssociationBatchRequestEntry"}, "min":1 }, "CreateAssociationBatchRequestEntry":{ @@ -2335,6 +2876,10 @@ "OutputLocation":{ "shape":"InstanceAssociationOutputLocation", "documentation":"

An Amazon S3 bucket where you want to store the results of this request.

" + }, + "AssociationName":{ + "shape":"AssociationName", + "documentation":"

Specify a descriptive name for the association.

" } }, "documentation":"

Describes the association of a Systems Manager document and an instance.

" @@ -2383,6 +2928,10 @@ "OutputLocation":{ "shape":"InstanceAssociationOutputLocation", "documentation":"

An Amazon S3 bucket where you want to store the output details of the request.

" + }, + "AssociationName":{ + "shape":"AssociationName", + "documentation":"

Specify a descriptive name for the association.

" } } }, @@ -2439,6 +2988,10 @@ "shape":"MaintenanceWindowName", "documentation":"

The name of the Maintenance Window.

" }, + "Description":{ + "shape":"MaintenanceWindowDescription", + "documentation":"

An optional description for the Maintenance Window. We recommend specifying a description to help you organize your Maintenance Windows.

" + }, "Schedule":{ "shape":"MaintenanceWindowSchedule", "documentation":"

The schedule of the Maintenance Window in the form of a cron or rate expression.

" @@ -2453,7 +3006,7 @@ }, "AllowUnassociatedTargets":{ "shape":"MaintenanceWindowAllowUnassociatedTargets", - "documentation":"

Whether targets must be registered with the Maintenance Window before tasks can be defined for those targets.

" + "documentation":"

Enables a Maintenance Window task to execute on managed instances, even if you have not registered those instances as targets. If enabled, then you must specify the unregistered instances (by instance ID) when you register a task with the Maintenance Window.

If you don't enable this option, then you must specify previously-registered targets when you register a task with the Maintenance Window.
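A minimal boto3 (Python) sketch of creating a window in the second mode described above; the window name, schedule, and hour values are placeholders.

import boto3

ssm = boto3.client("ssm")

# With AllowUnassociatedTargets=False, tasks registered later must reference
# previously registered targets; with True, they may name instance IDs directly.
window = ssm.create_maintenance_window(
    Name="example-patch-window",        # placeholder name
    Schedule="cron(0 2 ? * SUN *)",     # run at 02:00 every Sunday
    Duration=4,                         # window length in hours
    Cutoff=1,                           # stop scheduling new tasks 1 hour before the end
    AllowUnassociatedTargets=False,
)
print(window["WindowId"])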

" }, "ClientToken":{ "shape":"ClientToken", @@ -2475,6 +3028,10 @@ "type":"structure", "required":["Name"], "members":{ + "OperatingSystem":{ + "shape":"OperatingSystem", + "documentation":"

Defines the operating system the patch baseline applies to. The default value is WINDOWS.

" + }, "Name":{ "shape":"BaselineName", "documentation":"

The name of the patch baseline.

" @@ -2491,6 +3048,10 @@ "shape":"PatchIdList", "documentation":"

A list of explicitly approved patches for the baseline.

" }, + "ApprovedPatchesComplianceLevel":{ + "shape":"PatchComplianceLevel", + "documentation":"

Defines the compliance level for approved patches. This means that if an approved patch is reported as missing, this is the severity of the compliance violation. Valid compliance severity levels include the following: CRITICAL, HIGH, MEDIUM, LOW, INFORMATIONAL, UNSPECIFIED. The default value is UNSPECIFIED.
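A minimal boto3 (Python) sketch of a baseline that uses this compliance level; the baseline name and patch ID are placeholders.

import boto3

ssm = boto3.client("ssm")

# Missing patches that were explicitly approved here are reported as
# HIGH-severity compliance violations.
baseline = ssm.create_patch_baseline(
    Name="example-amazon-linux-baseline",           # placeholder name
    OperatingSystem="AMAZON_LINUX",                 # default is WINDOWS if omitted
    ApprovedPatches=["kernel-4.9.20-11.31.amzn1"],  # placeholder patch ID
    ApprovedPatchesComplianceLevel="HIGH",
    Description="Baseline with explicitly approved patches",
)
print(baseline["BaselineId"])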

" + }, "RejectedPatches":{ "shape":"PatchIdList", "documentation":"

A list of explicitly rejected patches for the baseline.

" @@ -2515,6 +3076,28 @@ } } }, + "CreateResourceDataSyncRequest":{ + "type":"structure", + "required":[ + "SyncName", + "S3Destination" + ], + "members":{ + "SyncName":{ + "shape":"ResourceDataSyncName", + "documentation":"

A name for the configuration.

" + }, + "S3Destination":{ + "shape":"ResourceDataSyncS3Destination", + "documentation":"

Amazon S3 configuration details for the sync.
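A minimal boto3 (Python) sketch of creating a Resource Data Sync; the sync name, bucket name, and region are placeholders, and the bucket policy is assumed to already allow Systems Manager to write to it.

import boto3

ssm = boto3.client("ssm")

ssm.create_resource_data_sync(
    SyncName="example-inventory-sync",
    S3Destination={
        "BucketName": "example-inventory-bucket",
        "SyncFormat": "JsonSerDe",   # serialization format for the synced data
        "Region": "us-east-1",
    },
)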

" + } + } + }, + "CreateResourceDataSyncResult":{ + "type":"structure", + "members":{ + } + }, "CreatedDate":{"type":"timestamp"}, "CustomSchemaCountLimitExceededException":{ "type":"structure", @@ -2660,6 +3243,21 @@ } } }, + "DeleteResourceDataSyncRequest":{ + "type":"structure", + "required":["SyncName"], + "members":{ + "SyncName":{ + "shape":"ResourceDataSyncName", + "documentation":"

The name of the configuration to delete.

" + } + } + }, + "DeleteResourceDataSyncResult":{ + "type":"structure", + "members":{ + } + }, "DeregisterManagedInstanceRequest":{ "type":"structure", "required":["InstanceId"], @@ -2719,6 +3317,11 @@ "WindowTargetId":{ "shape":"MaintenanceWindowTargetId", "documentation":"

The ID of the target definition to remove.

" + }, + "Safe":{ + "shape":"Boolean", + "documentation":"

The system checks if the target is being referenced by a task. If the target is being referenced, the system returns an error and does not deregister the target from the Maintenance Window.
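A minimal boto3 (Python) sketch of a safe deregistration; the window and target IDs are placeholders.

import boto3

ssm = boto3.client("ssm")

# With Safe=True the call fails if any registered task still references the target.
ssm.deregister_target_from_maintenance_window(
    WindowId="mw-0c50858d01EXAMPLE",
    WindowTargetId="e32eecb2-646c-4f4b-8ed1-205fbEXAMPLE",
    Safe=True,
)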

", + "box":true } } }, @@ -2827,7 +3430,7 @@ "members":{ "Name":{ "shape":"DocumentName", - "documentation":"

The name of the SSM document.

" + "documentation":"

The name of the Systems Manager document.

" }, "InstanceId":{ "shape":"InstanceId", @@ -2836,6 +3439,10 @@ "AssociationId":{ "shape":"AssociationId", "documentation":"

The association ID for which you want information.

" + }, + "AssociationVersion":{ + "shape":"AssociationVersion", + "documentation":"

Specify the association version to retrieve. To view the latest version, either specify $LATEST for this parameter, or omit this parameter. To view a list of all associations for an instance, use ListInstanceAssociations. To get a list of versions for a specific association, use ListAssociationVersions.
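A minimal boto3 (Python) sketch of retrieving the latest association version; the association ID is a placeholder.

import boto3

ssm = boto3.client("ssm")

assoc = ssm.describe_association(
    AssociationId="8dfe3659-4309-493a-8755-0123456789ab",  # placeholder ID
    AssociationVersion="$LATEST",
)
print(assoc["AssociationDescription"]["AssociationVersion"])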

" } } }, @@ -2942,7 +3549,7 @@ "members":{ "Name":{ "shape":"DocumentARN", - "documentation":"

The name of the SSM document.

" + "documentation":"

The name of the Systems Manager document.

" }, "DocumentVersion":{ "shape":"DocumentVersion", @@ -2955,7 +3562,7 @@ "members":{ "Document":{ "shape":"DocumentDescription", - "documentation":"

Information about the SSM document.

" + "documentation":"

Information about the Systems Manager document.

" } } }, @@ -3525,6 +4132,10 @@ "documentation":"

The maximum number of patch groups to return (per page).

", "box":true }, + "Filters":{ + "shape":"PatchOrchestratorFilterList", + "documentation":"

One or more filters. Use a filter to return a more specific list of results.

" + }, "NextToken":{ "shape":"NextToken", "documentation":"

The token for the next set of items to return. (You received this token from a previous call.)

" @@ -3580,7 +4191,7 @@ "members":{ "Sha1":{ "shape":"DocumentSha1", - "documentation":"

The SHA1 hash of the document, which you can use for verification purposes.

" + "documentation":"

The SHA1 hash of the document, which you can use for verification.

" }, "Hash":{ "shape":"DocumentHash", @@ -3592,11 +4203,11 @@ }, "Name":{ "shape":"DocumentARN", - "documentation":"

The name of the SSM document.

" + "documentation":"

The name of the Systems Manager document.

" }, "Owner":{ "shape":"DocumentOwner", - "documentation":"

The AWS user account of the person who created the document.

" + "documentation":"

The AWS user account that created the document.

" }, "CreatedDate":{ "shape":"DateTime", @@ -3604,7 +4215,7 @@ }, "Status":{ "shape":"DocumentStatus", - "documentation":"

The status of the SSM document.

" + "documentation":"

The status of the Systems Manager document.

" }, "DocumentVersion":{ "shape":"DocumentVersion", @@ -3620,7 +4231,7 @@ }, "PlatformTypes":{ "shape":"PlatformTypeList", - "documentation":"

The list of OS platforms compatible with this SSM document.

" + "documentation":"

The list of OS platforms compatible with this Systems Manager document.

" }, "DocumentType":{ "shape":"DocumentType", @@ -3637,9 +4248,13 @@ "DefaultVersion":{ "shape":"DocumentVersion", "documentation":"

The default version.

" + }, + "Tags":{ + "shape":"TagList", + "documentation":"

The tags, or metadata, that have been applied to the document.

" } }, - "documentation":"

Describes an SSM document.

" + "documentation":"

Describes a Systems Manager document.

" }, "DocumentFilter":{ "type":"structure", @@ -3670,10 +4285,7 @@ }, "DocumentFilterList":{ "type":"list", - "member":{ - "shape":"DocumentFilter", - "locationName":"DocumentFilter" - }, + "member":{"shape":"DocumentFilter"}, "min":1 }, "DocumentFilterValue":{ @@ -3696,11 +4308,11 @@ "members":{ "Name":{ "shape":"DocumentARN", - "documentation":"

The name of the SSM document.

" + "documentation":"

The name of the Systems Manager document.

" }, "Owner":{ "shape":"DocumentOwner", - "documentation":"

The AWS user account of the person who created the document.

" + "documentation":"

The AWS user account that created the document.

" }, "PlatformTypes":{ "shape":"PlatformTypeList", @@ -3717,23 +4329,58 @@ "SchemaVersion":{ "shape":"DocumentSchemaVersion", "documentation":"

The schema version.

" + }, + "Tags":{ + "shape":"TagList", + "documentation":"

The tags, or metadata, that have been applied to the document.

" } }, - "documentation":"

Describes the name of an SSM document.

" + "documentation":"

Describes the name of a Systems Manager document.

" }, "DocumentIdentifierList":{ "type":"list", - "member":{ - "shape":"DocumentIdentifier", - "locationName":"DocumentIdentifier" - } + "member":{"shape":"DocumentIdentifier"} + }, + "DocumentKeyValuesFilter":{ + "type":"structure", + "members":{ + "Key":{ + "shape":"DocumentKeyValuesFilterKey", + "documentation":"

The name of the filter key.

" + }, + "Values":{ + "shape":"DocumentKeyValuesFilterValues", + "documentation":"

The value for the filter key.

" + } + }, + "documentation":"

One or more filters. Use a filter to return a more specific list of documents.

For keys, you can specify one or more tags that have been applied to a document.

Other valid values include Owner, Name, PlatformTypes, and DocumentType.

Note that only one Owner can be specified in a request. For example: Key=Owner,Values=Self.

If you use Name as a key, you can use a name prefix to return a list of documents. For example, in the AWS CLI, to return a list of all documents that begin with Te, run the following command:

aws ssm list-documents --filters Key=Name,Values=Te

If you specify more than two keys, only documents that are identified by all the tags are returned in the results. If you specify more than two values for a key, documents that are identified by any of the values are returned in the results.

To specify a custom key and value pair, use the format Key=tag:[tagName],Values=[valueName].

For example, if you created a Key called region and are using the AWS CLI to call the list-documents command:

aws ssm list-documents --filters Key=tag:region,Values=east,west Key=Owner,Values=Self
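The same filter combination can be expressed through the SDKs; the following is a minimal boto3 (Python) sketch equivalent to the CLI example above, with the tag name and values carried over as placeholders.

import boto3

ssm = boto3.client("ssm")

# Documents owned by this account that carry the custom tag "region"
# with value "east" or "west".
docs = ssm.list_documents(
    Filters=[
        {"Key": "tag:region", "Values": ["east", "west"]},
        {"Key": "Owner", "Values": ["Self"]},
    ]
)
for doc in docs["DocumentIdentifiers"]:
    print(doc["Name"])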

" + }, + "DocumentKeyValuesFilterKey":{ + "type":"string", + "max":128, + "min":1 + }, + "DocumentKeyValuesFilterList":{ + "type":"list", + "member":{"shape":"DocumentKeyValuesFilter"}, + "max":5, + "min":0 + }, + "DocumentKeyValuesFilterValue":{ + "type":"string", + "max":256, + "min":1 + }, + "DocumentKeyValuesFilterValues":{ + "type":"list", + "member":{"shape":"DocumentKeyValuesFilterValue"} }, "DocumentLimitExceeded":{ "type":"structure", "members":{ "Message":{"shape":"String"} }, - "documentation":"

You can have at most 200 active SSM documents.

", + "documentation":"

You can have at most 200 active Systems Manager documents.

", "exception":true }, "DocumentName":{ @@ -3767,10 +4414,7 @@ "DocumentParameterDescrption":{"type":"string"}, "DocumentParameterList":{ "type":"list", - "member":{ - "shape":"DocumentParameter", - "locationName":"DocumentParameter" - } + "member":{"shape":"DocumentParameter"} }, "DocumentParameterName":{"type":"string"}, "DocumentParameterType":{ @@ -3925,10 +4569,7 @@ }, "FailedCreateAssociationList":{ "type":"list", - "member":{ - "shape":"FailedCreateAssociation", - "locationName":"FailedCreateAssociationEntry" - } + "member":{"shape":"FailedCreateAssociation"} }, "FailureDetails":{ "type":"structure", @@ -3956,6 +4597,14 @@ "Unknown" ] }, + "FeatureNotAvailableException":{ + "type":"structure", + "members":{ + "Message":{"shape":"String"} + }, + "documentation":"

You attempted to register a LAMBDA or STEP_FUNCTION task in a region where the corresponding service is not available.

", + "exception":true + }, "GetAutomationExecutionRequest":{ "type":"structure", "required":["AutomationExecutionId"], @@ -4064,6 +4713,10 @@ "GetDefaultPatchBaselineRequest":{ "type":"structure", "members":{ + "OperatingSystem":{ + "shape":"OperatingSystem", + "documentation":"

Returns the default patch baseline for the specified operating system.

" + } } }, "GetDefaultPatchBaselineResult":{ @@ -4072,6 +4725,10 @@ "BaselineId":{ "shape":"BaselineId", "documentation":"

The ID of the default patch baseline.

" + }, + "OperatingSystem":{ + "shape":"OperatingSystem", + "documentation":"

The operating system for the returned patch baseline.

" } } }, @@ -4106,6 +4763,10 @@ "SnapshotDownloadUrl":{ "shape":"SnapshotDownloadUrl", "documentation":"

A pre-signed Amazon S3 URL that can be used to download the patch snapshot.

" + }, + "Product":{ + "shape":"Product", + "documentation":"

Returns the specific operating system (for example Windows Server 2012 or Amazon Linux 2015.09) on the instance for the specified patch snapshot.

" } } }, @@ -4115,7 +4776,7 @@ "members":{ "Name":{ "shape":"DocumentARN", - "documentation":"

The name of the SSM document.

" + "documentation":"

The name of the Systems Manager document.

" }, "DocumentVersion":{ "shape":"DocumentVersion", @@ -4128,7 +4789,7 @@ "members":{ "Name":{ "shape":"DocumentARN", - "documentation":"

The name of the SSM document.

" + "documentation":"

The name of the Systems Manager document.

" }, "DocumentVersion":{ "shape":"DocumentVersion", @@ -4136,7 +4797,7 @@ }, "Content":{ "shape":"DocumentContent", - "documentation":"

The contents of the SSM document.

" + "documentation":"

The contents of the Systems Manager document.

" }, "DocumentType":{ "shape":"DocumentType", @@ -4151,6 +4812,10 @@ "shape":"InventoryFilterList", "documentation":"

One or more filters. Use a filter to return a more specific list of results.

" }, + "Aggregators":{ + "shape":"InventoryAggregatorList", + "documentation":"

Returns counts of inventory types based on one or more expressions. For example, if you aggregate by using an expression that uses the AWS:InstanceInformation.PlatformType type, you can see a count of how many Windows and Linux instances exist in your inventoried fleet.
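A minimal boto3 (Python) sketch of the aggregation described above, grouping inventoried instances by platform type.

import boto3

ssm = boto3.client("ssm")

resp = ssm.get_inventory(
    Aggregators=[{"Expression": "AWS:InstanceInformation.PlatformType"}]
)
for entity in resp["Entities"]:
    # Each entity carries the aggregated counts, e.g. Windows vs. Linux.
    print(entity["Data"])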

" + }, "ResultAttributes":{ "shape":"ResultAttributeList", "documentation":"

The list of inventory item types to return.

" @@ -4199,6 +4864,15 @@ "shape":"GetInventorySchemaMaxResults", "documentation":"

The maximum number of items to return for this call. The call also returns a token that you can specify in a subsequent call to get the next set of results.

", "box":true + }, + "Aggregator":{ + "shape":"AggregatorSchemaOnly", + "documentation":"

Returns inventory schemas that support aggregation. For example, this call returns the AWS:InstanceInformation type, because it supports aggregation based on the PlatformName, PlatformType, and PlatformVersion attributes.

" + }, + "SubType":{ + "shape":"IsSubTypeSchema", + "documentation":"

Returns the sub-type schema for a specified inventory type.

", + "box":true } } }, @@ -4254,11 +4928,86 @@ } } }, - "GetMaintenanceWindowExecutionTaskRequest":{ + "GetMaintenanceWindowExecutionTaskInvocationRequest":{ "type":"structure", "required":[ "WindowExecutionId", - "TaskId" + "TaskId", + "InvocationId" + ], + "members":{ + "WindowExecutionId":{ + "shape":"MaintenanceWindowExecutionId", + "documentation":"

The ID of the Maintenance Window execution for which the task is a part.

" + }, + "TaskId":{ + "shape":"MaintenanceWindowExecutionTaskId", + "documentation":"

The ID of the specific task execution in the Maintenance Window that should be retrieved.

" + }, + "InvocationId":{ + "shape":"MaintenanceWindowExecutionTaskInvocationId", + "documentation":"

The invocation ID to retrieve.

" + } + } + }, + "GetMaintenanceWindowExecutionTaskInvocationResult":{ + "type":"structure", + "members":{ + "WindowExecutionId":{ + "shape":"MaintenanceWindowExecutionId", + "documentation":"

The Maintenance Window execution ID.

" + }, + "TaskExecutionId":{ + "shape":"MaintenanceWindowExecutionTaskId", + "documentation":"

The task execution ID.

" + }, + "InvocationId":{ + "shape":"MaintenanceWindowExecutionTaskInvocationId", + "documentation":"

The invocation ID.

" + }, + "ExecutionId":{ + "shape":"MaintenanceWindowExecutionTaskExecutionId", + "documentation":"

The execution ID.

" + }, + "TaskType":{ + "shape":"MaintenanceWindowTaskType", + "documentation":"

Retrieves the task type for a Maintenance Window. Task types include the following: LAMBDA, STEP_FUNCTION, AUTOMATION, RUN_COMMAND.

" + }, + "Parameters":{ + "shape":"MaintenanceWindowExecutionTaskInvocationParameters", + "documentation":"

The parameters used at the time that the task executed.

" + }, + "Status":{ + "shape":"MaintenanceWindowExecutionStatus", + "documentation":"

The task status for an invocation.

" + }, + "StatusDetails":{ + "shape":"MaintenanceWindowExecutionStatusDetails", + "documentation":"

The details explaining the status. Details are only available for certain status values.

" + }, + "StartTime":{ + "shape":"DateTime", + "documentation":"

The time that the task started executing on the target.

" + }, + "EndTime":{ + "shape":"DateTime", + "documentation":"

The time that the task finished executing on the target.

" + }, + "OwnerInformation":{ + "shape":"OwnerInformation", + "documentation":"

User-provided value to be included in any CloudWatch events raised while running tasks for these targets in this Maintenance Window.

" + }, + "WindowTargetId":{ + "shape":"MaintenanceWindowTaskTargetId", + "documentation":"

The Maintenance Window target ID.
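A minimal boto3 (Python) sketch of retrieving one invocation; the three IDs are placeholders that would normally come from the corresponding describe/list calls.

import boto3

ssm = boto3.client("ssm")

inv = ssm.get_maintenance_window_execution_task_invocation(
    WindowExecutionId="518d5565-5969-4cca-8f0e-0123456789ab",
    TaskId="ac0c6ae1-daa3-4a89-832e-0123456789ab",
    InvocationId="e274b6e1-fe56-4e32-bd2a-0123456789ab",
)
print(inv["TaskType"], inv["Status"], inv.get("ExecutionId"))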

" + } + } + }, + "GetMaintenanceWindowExecutionTaskRequest":{ + "type":"structure", + "required":[ + "WindowExecutionId", + "TaskId" ], "members":{ "WindowExecutionId":{ @@ -4349,6 +5098,10 @@ "shape":"MaintenanceWindowName", "documentation":"

The name of the Maintenance Window.

" }, + "Description":{ + "shape":"MaintenanceWindowDescription", + "documentation":"

The description of the Maintenance Window.

" + }, "Schedule":{ "shape":"MaintenanceWindowSchedule", "documentation":"

The schedule of the Maintenance Window in the form of a cron or rate expression.

" @@ -4379,6 +5132,84 @@ } } }, + "GetMaintenanceWindowTaskRequest":{ + "type":"structure", + "required":[ + "WindowId", + "WindowTaskId" + ], + "members":{ + "WindowId":{ + "shape":"MaintenanceWindowId", + "documentation":"

The Maintenance Window ID that includes the task to retrieve.

" + }, + "WindowTaskId":{ + "shape":"MaintenanceWindowTaskId", + "documentation":"

The Maintenance Window task ID to retrieve.

" + } + } + }, + "GetMaintenanceWindowTaskResult":{ + "type":"structure", + "members":{ + "WindowId":{ + "shape":"MaintenanceWindowId", + "documentation":"

The retrieved Maintenance Window ID.

" + }, + "WindowTaskId":{ + "shape":"MaintenanceWindowTaskId", + "documentation":"

The retrieved Maintenance Window task ID.

" + }, + "Targets":{ + "shape":"Targets", + "documentation":"

The targets where the task should execute.

" + }, + "TaskArn":{ + "shape":"MaintenanceWindowTaskArn", + "documentation":"

The resource that the task used during execution. For RUN_COMMAND and AUTOMATION task types, the TaskArn is the Systems Manager Document name/ARN. For LAMBDA tasks, the value is the function name/ARN. For STEP_FUNCTION tasks, the value is the state machine ARN.

" + }, + "ServiceRoleArn":{ + "shape":"ServiceRole", + "documentation":"

The IAM service role to assume during task execution.

" + }, + "TaskType":{ + "shape":"MaintenanceWindowTaskType", + "documentation":"

The type of task to execute.

" + }, + "TaskParameters":{ + "shape":"MaintenanceWindowTaskParameters", + "documentation":"

The parameters to pass to the task when it executes.

" + }, + "TaskInvocationParameters":{ + "shape":"MaintenanceWindowTaskInvocationParameters", + "documentation":"

The parameters to pass to the task when it executes.

" + }, + "Priority":{ + "shape":"MaintenanceWindowTaskPriority", + "documentation":"

The priority of the task when it executes. The lower the number, the higher the priority. Tasks that have the same priority are scheduled in parallel.

" + }, + "MaxConcurrency":{ + "shape":"MaxConcurrency", + "documentation":"

The maximum number of targets allowed to run this task in parallel.

" + }, + "MaxErrors":{ + "shape":"MaxErrors", + "documentation":"

The maximum number of errors allowed before the task stops being scheduled.

" + }, + "LoggingInfo":{ + "shape":"LoggingInfo", + "documentation":"

The location in Amazon S3 where the task results are logged.

" + }, + "Name":{ + "shape":"MaintenanceWindowName", + "documentation":"

The retrieved task name.

" + }, + "Description":{ + "shape":"MaintenanceWindowDescription", + "documentation":"

The retrieved task description.

" + } + } + }, "GetParameterHistoryRequest":{ "type":"structure", "required":["Name"], @@ -4451,7 +5282,7 @@ "members":{ "Path":{ "shape":"PSParameterName", - "documentation":"

The hierarchy for the parameter. Hierarchies start with a forward slash (/) and end with the parameter name. A hierarchy can have a maximum of five levels. Examples: /Environment/Test/DBString003

/Finance/Prod/IAD/OS/WinServ2016/license15

" + "documentation":"

The hierarchy for the parameter. Hierarchies start with a forward slash (/) and end with the parameter name. A hierarchy can have a maximum of five levels. For example: /Finance/Prod/IAD/WinServ2016/license15
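A minimal boto3 (Python) sketch of walking the hierarchy described above; the path follows the /Finance/Prod/IAD example and is a placeholder.

import boto3

ssm = boto3.client("ssm")

# Recursive walks deeper levels of the hierarchy;
# WithDecryption decrypts SecureString values.
resp = ssm.get_parameters_by_path(
    Path="/Finance/Prod/IAD",
    Recursive=True,
    WithDecryption=True,
)
for p in resp["Parameters"]:
    print(p["Name"], p["Value"])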

" }, "Recursive":{ "shape":"Boolean", @@ -4526,6 +5357,10 @@ "PatchGroup":{ "shape":"PatchGroup", "documentation":"

The name of the patch group whose patch baseline should be retrieved.

" + }, + "OperatingSystem":{ + "shape":"OperatingSystem", + "documentation":"

Returns the operating system rule specified for patch groups using the patch baseline.

" } } }, @@ -4539,6 +5374,10 @@ "PatchGroup":{ "shape":"PatchGroup", "documentation":"

The name of the patch group.

" + }, + "OperatingSystem":{ + "shape":"OperatingSystem", + "documentation":"

The operating system rule specified for patch groups using the patch baseline.

" } } }, @@ -4563,6 +5402,10 @@ "shape":"BaselineName", "documentation":"

The name of the patch baseline.

" }, + "OperatingSystem":{ + "shape":"OperatingSystem", + "documentation":"

Returns the operating system specified for the patch baseline.

" + }, "GlobalFilters":{ "shape":"PatchFilterGroup", "documentation":"

A set of global filters used to exclude patches from the baseline.

" @@ -4575,6 +5418,10 @@ "shape":"PatchIdList", "documentation":"

A list of explicitly approved patches for the baseline.

" }, + "ApprovedPatchesComplianceLevel":{ + "shape":"PatchComplianceLevel", + "documentation":"

Returns the specified compliance severity level for approved patches in the patch baseline.

" + }, "RejectedPatches":{ "shape":"PatchIdList", "documentation":"

A list of explicitly rejected patches for the baseline.

" @@ -4602,10 +5449,10 @@ "members":{ "message":{ "shape":"String", - "documentation":"

A hierarchy can have a maximum of five levels. For example:

/Finance/Prod/IAD/OS/WinServ2016/license15

For more information, see Develop a Parameter Hierarchy.

" + "documentation":"

A hierarchy can have a maximum of five levels. For example:

/Finance/Prod/IAD/OS/WinServ2016/license15

For more information, see Working with Systems Manager Parameters.

" } }, - "documentation":"

A hierarchy can have a maximum of five levels. For example:

/Finance/Prod/IAD/OS/WinServ2016/license15

For more information, see Develop a Parameter Hierarchy.

", + "documentation":"

A hierarchy can have a maximum of five levels. For example:

/Finance/Prod/IAD/OS/WinServ2016/license15

For more information, see Working with Systems Manager Parameters.

", "exception":true }, "HierarchyTypeMismatchException":{ @@ -4628,6 +5475,12 @@ "type":"string", "max":64 }, + "IdempotencyToken":{ + "type":"string", + "max":36, + "min":36, + "pattern":"[a-fA-F0-9]{8}-[a-fA-F0-9]{4}-[a-fA-F0-9]{4}-[a-fA-F0-9]{4}-[a-fA-F0-9]{12}" + }, "IdempotentParameterMismatch":{ "type":"structure", "members":{ @@ -4664,6 +5517,10 @@ "Content":{ "shape":"DocumentContent", "documentation":"

The content of the association document for the instance(s).

" + }, + "AssociationVersion":{ + "shape":"AssociationVersion", + "documentation":"

Version information for the association on the instance.

" } }, "documentation":"

One or more association documents on the instance.

" @@ -4717,6 +5574,10 @@ "shape":"DocumentVersion", "documentation":"

The association document versions.

" }, + "AssociationVersion":{ + "shape":"AssociationVersion", + "documentation":"

The version of the association applied to the instance.

" + }, "InstanceId":{ "shape":"InstanceId", "documentation":"

The instance ID where the association was created.

" @@ -4744,6 +5605,10 @@ "OutputUrl":{ "shape":"InstanceAssociationOutputUrl", "documentation":"

A URL for an Amazon S3 bucket where you want to store the results of this request.

" + }, + "AssociationName":{ + "shape":"AssociationName", + "documentation":"

The name of the association applied to the instance.

" } }, "documentation":"

Status information about the instance association.

" @@ -4881,10 +5746,7 @@ }, "InstanceInformationFilterList":{ "type":"list", - "member":{ - "shape":"InstanceInformationFilter", - "locationName":"InstanceInformationFilter" - }, + "member":{"shape":"InstanceInformationFilter"}, "min":0 }, "InstanceInformationFilterValue":{ @@ -4893,19 +5755,13 @@ }, "InstanceInformationFilterValueSet":{ "type":"list", - "member":{ - "shape":"InstanceInformationFilterValue", - "locationName":"InstanceInformationFilterValue" - }, + "member":{"shape":"InstanceInformationFilterValue"}, "max":100, "min":1 }, "InstanceInformationList":{ "type":"list", - "member":{ - "shape":"InstanceInformation", - "locationName":"InstanceInformation" - } + "member":{"shape":"InstanceInformation"} }, "InstanceInformationStringFilter":{ "type":"structure", @@ -4931,10 +5787,7 @@ }, "InstanceInformationStringFilterList":{ "type":"list", - "member":{ - "shape":"InstanceInformationStringFilter", - "locationName":"InstanceInformationStringFilter" - }, + "member":{"shape":"InstanceInformationStringFilter"}, "min":0 }, "InstancePatchState":{ @@ -4989,11 +5842,11 @@ "documentation":"

The number of patches from the patch baseline that aren't applicable for the instance and hence aren't installed on the instance.

" }, "OperationStartTime":{ - "shape":"PatchOperationStartTime", + "shape":"DateTime", "documentation":"

The time the most recent patching operation was started on the instance.

" }, "OperationEndTime":{ - "shape":"PatchOperationEndTime", + "shape":"DateTime", "documentation":"

The time the most recent patching operation completed on the instance.

" }, "Operation":{ @@ -5103,6 +5956,14 @@ "documentation":"

The request does not meet the regular expression requirement.

", "exception":true }, + "InvalidAssociationVersion":{ + "type":"structure", + "members":{ + "Message":{"shape":"String"} + }, + "documentation":"

The version you specified is not valid. Use ListAssociationVersions to view all versions of an association according to the association ID. Or, use the $LATEST parameter to view the latest version of the association.

", + "exception":true + }, "InvalidAutomationExecutionParametersException":{ "type":"structure", "members":{ @@ -5111,6 +5972,14 @@ "documentation":"

The supplied parameters for invoking the specified Automation document are incorrect. For example, they may not match the set of parameters permitted for the specified Automation document.

", "exception":true }, + "InvalidAutomationSignalException":{ + "type":"structure", + "members":{ + "Message":{"shape":"String"} + }, + "documentation":"

The signal is not valid for the current Automation execution.

", + "exception":true + }, "InvalidCommandId":{ "type":"structure", "members":{ @@ -5213,6 +6082,14 @@ "documentation":"

The specified filter value is not valid.

", "exception":true }, + "InvalidInventoryItemContextException":{ + "type":"structure", + "members":{ + "Message":{"shape":"String"} + }, + "documentation":"

You specified invalid keys or values in the Context attribute for InventoryItem. Verify the keys and values, and try again.

", + "exception":true + }, "InvalidItemContentException":{ "type":"structure", "members":{ @@ -5265,7 +6142,7 @@ "members":{ "Message":{"shape":"String"} }, - "documentation":"

You must specify values for all required parameters in the SSM document. You can only supply values to parameters defined in the SSM document.

", + "documentation":"

You must specify values for all required parameters in the Systems Manager document. You can only supply values to parameters defined in the Systems Manager document.

", "exception":true }, "InvalidPermissionType":{ @@ -5294,7 +6171,7 @@ "type":"structure", "members":{ }, - "documentation":"

The resource type is not valid. If you are attempting to tag an instance, the instance must be a registered, managed instance.

", + "documentation":"

The resource type is not valid. For example, if you are attempting to tag an instance, the instance must be a registered, managed instance.

", "exception":true }, "InvalidResultAttributeException":{ @@ -5345,6 +6222,31 @@ "documentation":"

The update is not valid.

", "exception":true }, + "InventoryAggregator":{ + "type":"structure", + "members":{ + "Expression":{ + "shape":"InventoryAggregatorExpression", + "documentation":"

The inventory type and attribute name for aggregation.

" + }, + "Aggregators":{ + "shape":"InventoryAggregatorList", + "documentation":"

Nested aggregators to further refine aggregation for an inventory type.

" + } + }, + "documentation":"

Specifies the inventory type and attribute for the aggregation execution.

" + }, + "InventoryAggregatorExpression":{ + "type":"string", + "max":1000, + "min":1 + }, + "InventoryAggregatorList":{ + "type":"list", + "member":{"shape":"InventoryAggregator"}, + "max":10, + "min":1 + }, "InventoryAttributeDataType":{ "type":"string", "enum":[ @@ -5381,20 +6283,14 @@ }, "InventoryFilterList":{ "type":"list", - "member":{ - "shape":"InventoryFilter", - "locationName":"InventoryFilter" - }, + "member":{"shape":"InventoryFilter"}, "max":5, "min":1 }, "InventoryFilterValue":{"type":"string"}, "InventoryFilterValueList":{ "type":"list", - "member":{ - "shape":"InventoryFilterValue", - "locationName":"FilterValue" - }, + "member":{"shape":"InventoryFilterValue"}, "max":20, "min":1 }, @@ -5425,6 +6321,10 @@ "Content":{ "shape":"InventoryItemEntryList", "documentation":"

The inventory data of the inventory type.

" + }, + "Context":{ + "shape":"InventoryItemContentContext", + "documentation":"

A map of associated properties for a specified inventory type. For example, with this attribute, you can specify the ExecutionId, ExecutionType, ComplianceType properties of the AWS:ComplianceItem type.

" } }, "documentation":"

Information collected from managed instances based on your inventory policy document.

" @@ -5449,10 +6349,7 @@ }, "InventoryItemAttributeList":{ "type":"list", - "member":{ - "shape":"InventoryItemAttribute", - "locationName":"Attribute" - }, + "member":{"shape":"InventoryItemAttribute"}, "max":50, "min":1 }, @@ -5461,6 +6358,13 @@ "type":"string", "pattern":"^(20)[0-9][0-9]-(0[1-9]|1[012])-([12][0-9]|3[01]|0[1-9])(T)(2[0-3]|[0-1][0-9])(:[0-5][0-9])(:[0-5][0-9])(Z)$" }, + "InventoryItemContentContext":{ + "type":"map", + "key":{"shape":"AttributeName"}, + "value":{"shape":"AttributeValue"}, + "max":50, + "min":0 + }, "InventoryItemContentHash":{ "type":"string", "max":256 @@ -5480,10 +6384,7 @@ }, "InventoryItemList":{ "type":"list", - "member":{ - "shape":"InventoryItem", - "locationName":"Item" - }, + "member":{"shape":"InventoryItem"}, "max":30, "min":1 }, @@ -5505,6 +6406,10 @@ "Attributes":{ "shape":"InventoryItemAttributeList", "documentation":"

The schema attributes for inventory. This contains data type and attribute name.

" + }, + "DisplayName":{ + "shape":"InventoryTypeDisplayName", + "documentation":"

The alias name of the inventory type. The alias name is used for display purposes.

" } }, "documentation":"

The inventory item schema definition. Users can use this to compose inventory query filters.

" @@ -5555,10 +6460,7 @@ "InventoryResultEntityId":{"type":"string"}, "InventoryResultEntityList":{ "type":"list", - "member":{ - "shape":"InventoryResultEntity", - "locationName":"Entity" - } + "member":{"shape":"InventoryResultEntity"} }, "InventoryResultItem":{ "type":"structure", @@ -5597,6 +6499,7 @@ "key":{"shape":"InventoryResultItemKey"}, "value":{"shape":"InventoryResultItem"} }, + "InventoryTypeDisplayName":{"type":"string"}, "InvocationDoesNotExist":{ "type":"structure", "members":{ @@ -5608,6 +6511,7 @@ "type":"string", "max":2500 }, + "IsSubTypeSchema":{"type":"boolean"}, "ItemContentMismatchException":{ "type":"structure", "members":{ @@ -5630,6 +6534,48 @@ "type":"list", "member":{"shape":"TagKey"} }, + "LastResourceDataSyncStatus":{ + "type":"string", + "enum":[ + "Successful", + "Failed", + "InProgress" + ] + }, + "LastResourceDataSyncTime":{"type":"timestamp"}, + "LastSuccessfulResourceDataSyncTime":{"type":"timestamp"}, + "ListAssociationVersionsRequest":{ + "type":"structure", + "required":["AssociationId"], + "members":{ + "AssociationId":{ + "shape":"AssociationId", + "documentation":"

The association ID for which you want to view all versions.

" + }, + "MaxResults":{ + "shape":"MaxResults", + "documentation":"

The maximum number of items to return for this call. The call also returns a token that you can specify in a subsequent call to get the next set of results.

", + "box":true + }, + "NextToken":{ + "shape":"NextToken", + "documentation":"

A token to start the list. Use this token to get the next set of results.

" + } + } + }, + "ListAssociationVersionsResult":{ + "type":"structure", + "members":{ + "AssociationVersions":{ + "shape":"AssociationVersionList", + "documentation":"

Information about all versions of the association for the specified association ID.

" + }, + "NextToken":{ + "shape":"NextToken", + "documentation":"

The token for the next set of items to return. Use this token to get the next set of results.

" + } + } + }, "ListAssociationsRequest":{ "type":"structure", "members":{ @@ -5743,73 +6689,147 @@ } } }, - "ListDocumentVersionsRequest":{ + "ListComplianceItemsRequest":{ "type":"structure", - "required":["Name"], "members":{ - "Name":{ - "shape":"DocumentName", - "documentation":"

The name of the document about which you want version information.

" + "Filters":{ + "shape":"ComplianceStringFilterList", + "documentation":"

One or more compliance filters. Use a filter to return a more specific list of results.

" + }, + "ResourceIds":{ + "shape":"ComplianceResourceIdList", + "documentation":"

The ID for the resources from which to get compliance information. Currently, you can only specify one resource ID.

" + }, + "ResourceTypes":{ + "shape":"ComplianceResourceTypeList", + "documentation":"

The type of resource from which to get compliance information. Currently, the only supported resource type is ManagedInstance.

" + }, + "NextToken":{ + "shape":"NextToken", + "documentation":"

A token to start the list. Use this token to get the next set of results.

" }, "MaxResults":{ "shape":"MaxResults", "documentation":"

The maximum number of items to return for this call. The call also returns a token that you can specify in a subsequent call to get the next set of results.

", "box":true - }, - "NextToken":{ - "shape":"NextToken", - "documentation":"

The token for the next set of items to return. (You received this token from a previous call.)

" } } }, - "ListDocumentVersionsResult":{ + "ListComplianceItemsResult":{ "type":"structure", "members":{ - "DocumentVersions":{ - "shape":"DocumentVersionList", - "documentation":"

The document versions.

" + "ComplianceItems":{ + "shape":"ComplianceItemList", + "documentation":"

A list of compliance information for the specified resource ID.

" }, "NextToken":{ "shape":"NextToken", - "documentation":"

The token to use when requesting the next set of items. If there are no additional items to return, the string is empty.

" + "documentation":"

The token for the next set of items to return. Use this token to get the next set of results.

" } } }, - "ListDocumentsRequest":{ + "ListComplianceSummariesRequest":{ "type":"structure", "members":{ - "DocumentFilterList":{ - "shape":"DocumentFilterList", - "documentation":"

One or more filters. Use a filter to return a more specific list of results.

" + "Filters":{ + "shape":"ComplianceStringFilterList", + "documentation":"

One or more compliance or inventory filters. Use a filter to return a more specific list of results.

" + }, + "NextToken":{ + "shape":"NextToken", + "documentation":"

A token to start the list. Use this token to get the next set of results.

" }, "MaxResults":{ "shape":"MaxResults", - "documentation":"

The maximum number of items to return for this call. The call also returns a token that you can specify in a subsequent call to get the next set of results.

", + "documentation":"

The maximum number of items to return for this call. Currently, you can specify null or 50. The call also returns a token that you can specify in a subsequent call to get the next set of results.

", "box":true - }, - "NextToken":{ - "shape":"NextToken", - "documentation":"

The token for the next set of items to return. (You received this token from a previous call.)

" } } }, - "ListDocumentsResult":{ + "ListComplianceSummariesResult":{ "type":"structure", "members":{ - "DocumentIdentifiers":{ - "shape":"DocumentIdentifierList", - "documentation":"

The names of the SSM documents.

" + "ComplianceSummaryItems":{ + "shape":"ComplianceSummaryItemList", + "documentation":"

A list of compliant and non-compliant summary counts based on compliance types. For example, this call returns State Manager associations, patches, or custom compliance types according to the filter criteria that you specified.

" }, "NextToken":{ "shape":"NextToken", - "documentation":"

The token to use when requesting the next set of items. If there are no additional items to return, the string is empty.

" + "documentation":"

The token for the next set of items to return. Use this token to get the next set of results.
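A minimal boto3 (Python) sketch of listing compliance summaries filtered to the built-in Patch compliance type; the filter values are illustrative.

import boto3

ssm = boto3.client("ssm")

resp = ssm.list_compliance_summaries(
    Filters=[{"Key": "ComplianceType", "Values": ["Patch"], "Type": "EQUAL"}]
)
for item in resp["ComplianceSummaryItems"]:
    print(item["ComplianceType"],
          item["CompliantSummary"]["CompliantCount"],
          item["NonCompliantSummary"]["NonCompliantCount"])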

" } } }, - "ListInventoryEntriesRequest":{ + "ListDocumentVersionsRequest":{ "type":"structure", - "required":[ - "InstanceId", + "required":["Name"], + "members":{ + "Name":{ + "shape":"DocumentName", + "documentation":"

The name of the document about which you want version information.

" + }, + "MaxResults":{ + "shape":"MaxResults", + "documentation":"

The maximum number of items to return for this call. The call also returns a token that you can specify in a subsequent call to get the next set of results.

", + "box":true + }, + "NextToken":{ + "shape":"NextToken", + "documentation":"

The token for the next set of items to return. (You received this token from a previous call.)

" + } + } + }, + "ListDocumentVersionsResult":{ + "type":"structure", + "members":{ + "DocumentVersions":{ + "shape":"DocumentVersionList", + "documentation":"

The document versions.

" + }, + "NextToken":{ + "shape":"NextToken", + "documentation":"

The token to use when requesting the next set of items. If there are no additional items to return, the string is empty.

" + } + } + }, + "ListDocumentsRequest":{ + "type":"structure", + "members":{ + "DocumentFilterList":{ + "shape":"DocumentFilterList", + "documentation":"

One or more filters. Use a filter to return a more specific list of results.

" + }, + "Filters":{ + "shape":"DocumentKeyValuesFilterList", + "documentation":"

One or more filters. Use a filter to return a more specific list of results.

" + }, + "MaxResults":{ + "shape":"MaxResults", + "documentation":"

The maximum number of items to return for this call. The call also returns a token that you can specify in a subsequent call to get the next set of results.

", + "box":true + }, + "NextToken":{ + "shape":"NextToken", + "documentation":"

The token for the next set of items to return. (You received this token from a previous call.)

" + } + } + }, + "ListDocumentsResult":{ + "type":"structure", + "members":{ + "DocumentIdentifiers":{ + "shape":"DocumentIdentifierList", + "documentation":"

The names of the Systems Manager documents.

" + }, + "NextToken":{ + "shape":"NextToken", + "documentation":"

The token to use when requesting the next set of items. If there are no additional items to return, the string is empty.

" + } + } + }, + "ListInventoryEntriesRequest":{ + "type":"structure", + "required":[ + "InstanceId", "TypeName" ], "members":{ @@ -5865,6 +6885,64 @@ } } }, + "ListResourceComplianceSummariesRequest":{ + "type":"structure", + "members":{ + "Filters":{ + "shape":"ComplianceStringFilterList", + "documentation":"

One or more filters. Use a filter to return a more specific list of results.

" + }, + "NextToken":{ + "shape":"NextToken", + "documentation":"

A token to start the list. Use this token to get the next set of results.

" + }, + "MaxResults":{ + "shape":"MaxResults", + "documentation":"

The maximum number of items to return for this call. The call also returns a token that you can specify in a subsequent call to get the next set of results.

", + "box":true + } + } + }, + "ListResourceComplianceSummariesResult":{ + "type":"structure", + "members":{ + "ResourceComplianceSummaryItems":{ + "shape":"ResourceComplianceSummaryItemList", + "documentation":"

A summary count for specified or targeted managed instances. Summary count includes information about compliant and non-compliant State Manager associations, patch status, or custom items according to the filter criteria that you specify.

" + }, + "NextToken":{ + "shape":"NextToken", + "documentation":"

The token for the next set of items to return. Use this token to get the next set of results.

" + } + } + }, + "ListResourceDataSyncRequest":{ + "type":"structure", + "members":{ + "NextToken":{ + "shape":"NextToken", + "documentation":"

A token to start the list. Use this token to get the next set of results.

" + }, + "MaxResults":{ + "shape":"MaxResults", + "documentation":"

The maximum number of items to return for this call. The call also returns a token that you can specify in a subsequent call to get the next set of results.

", + "box":true + } + } + }, + "ListResourceDataSyncResult":{ + "type":"structure", + "members":{ + "ResourceDataSyncItems":{ + "shape":"ResourceDataSyncItemList", + "documentation":"

A list of your current Resource Data Sync configurations and their statuses.

" + }, + "NextToken":{ + "shape":"NextToken", + "documentation":"

The token for the next set of items to return. Use this token to get the next set of results.

" + } + } + }, "ListTagsForResourceRequest":{ "type":"structure", "required":[ @@ -5914,11 +6992,31 @@ "documentation":"

Information about an Amazon S3 bucket to write instance-level logs to.

" }, "MaintenanceWindowAllowUnassociatedTargets":{"type":"boolean"}, + "MaintenanceWindowAutomationParameters":{ + "type":"structure", + "members":{ + "DocumentVersion":{ + "shape":"DocumentVersion", + "documentation":"

The version of an Automation document to use during task execution.

" + }, + "Parameters":{ + "shape":"AutomationParameterMap", + "documentation":"

The parameters for the AUTOMATION task.

" + } + }, + "documentation":"

The parameters for an AUTOMATION task type.

" + }, "MaintenanceWindowCutoff":{ "type":"integer", "max":23, "min":0 }, + "MaintenanceWindowDescription":{ + "type":"string", + "max":128, + "min":1, + "sensitive":true + }, "MaintenanceWindowDurationHours":{ "type":"integer", "max":24, @@ -6061,6 +7159,10 @@ "shape":"MaintenanceWindowExecutionTaskExecutionId", "documentation":"

The ID of the action performed in the service that actually handled the task invocation. If the task type is RUN_COMMAND, this value is the command ID.

" }, + "TaskType":{ + "shape":"MaintenanceWindowTaskType", + "documentation":"

The task type.

" + }, "Parameters":{ "shape":"MaintenanceWindowExecutionTaskInvocationParameters", "documentation":"

The parameters that were provided for the invocation when it was executed.

" @@ -6151,6 +7253,10 @@ "shape":"MaintenanceWindowName", "documentation":"

The name of the Maintenance Window.

" }, + "Description":{ + "shape":"MaintenanceWindowDescription", + "documentation":"

A description of the Maintenance Window.

" + }, "Enabled":{ "shape":"MaintenanceWindowEnabled", "documentation":"

Whether the Maintenance Window is enabled.

" @@ -6170,6 +7276,39 @@ "type":"list", "member":{"shape":"MaintenanceWindowIdentity"} }, + "MaintenanceWindowLambdaClientContext":{ + "type":"string", + "max":8000, + "min":1 + }, + "MaintenanceWindowLambdaParameters":{ + "type":"structure", + "members":{ + "ClientContext":{ + "shape":"MaintenanceWindowLambdaClientContext", + "documentation":"

Pass client-specific information to the Lambda function that you are invoking. You can then process the client information in your Lambda function as you choose through the context variable.

" + }, + "Qualifier":{ + "shape":"MaintenanceWindowLambdaQualifier", + "documentation":"

(Optional) Specify a Lambda function version or alias name. If you specify a function version, the action uses the qualified function ARN to invoke a specific Lambda function. If you specify an alias name, the action uses the alias ARN to invoke the Lambda function version to which the alias points.

" + }, + "Payload":{ + "shape":"MaintenanceWindowLambdaPayload", + "documentation":"

JSON to provide to your Lambda function as input.

" + } + }, + "documentation":"

The parameters for a LAMBDA task type.

" + }, + "MaintenanceWindowLambdaPayload":{ + "type":"blob", + "max":4096, + "sensitive":true + }, + "MaintenanceWindowLambdaQualifier":{ + "type":"string", + "max":128, + "min":1 + }, "MaintenanceWindowMaxResults":{ "type":"integer", "max":100, @@ -6185,11 +7324,78 @@ "type":"string", "enum":["INSTANCE"] }, + "MaintenanceWindowRunCommandParameters":{ + "type":"structure", + "members":{ + "Comment":{ + "shape":"Comment", + "documentation":"

Information about the command(s) to execute.

" + }, + "DocumentHash":{ + "shape":"DocumentHash", + "documentation":"

The SHA-256 or SHA-1 hash created by the system when the document was created. SHA-1 hashes have been deprecated.

" + }, + "DocumentHashType":{ + "shape":"DocumentHashType", + "documentation":"

SHA-256 or SHA-1. SHA-1 hashes have been deprecated.

" + }, + "NotificationConfig":{ + "shape":"NotificationConfig", + "documentation":"

Configurations for sending notifications about command status changes on a per-instance basis.

" + }, + "OutputS3BucketName":{ + "shape":"S3BucketName", + "documentation":"

The name of the Amazon S3 bucket.

" + }, + "OutputS3KeyPrefix":{ + "shape":"S3KeyPrefix", + "documentation":"

The Amazon S3 bucket subfolder.

" + }, + "Parameters":{ + "shape":"Parameters", + "documentation":"

The parameters for the RUN_COMMAND task execution.

" + }, + "ServiceRoleArn":{ + "shape":"ServiceRole", + "documentation":"

The IAM service role to assume during task execution.

" + }, + "TimeoutSeconds":{ + "shape":"TimeoutSeconds", + "documentation":"

If this time is reached and the command has not already started executing, it does not execute.

", + "box":true + } + }, + "documentation":"

The parameters for a RUN_COMMAND task type.

" + }, "MaintenanceWindowSchedule":{ "type":"string", "max":256, "min":1 }, + "MaintenanceWindowStepFunctionsInput":{ + "type":"string", + "max":4096, + "sensitive":true + }, + "MaintenanceWindowStepFunctionsName":{ + "type":"string", + "max":80, + "min":1 + }, + "MaintenanceWindowStepFunctionsParameters":{ + "type":"structure", + "members":{ + "Input":{ + "shape":"MaintenanceWindowStepFunctionsInput", + "documentation":"

The inputs for the STEP_FUNCTION task.

" + }, + "Name":{ + "shape":"MaintenanceWindowStepFunctionsName", + "documentation":"

The name of the STEP_FUNCTION task.

" + } + }, + "documentation":"

The parameters for the STEP_FUNCTION execution.

" + }, "MaintenanceWindowTarget":{ "type":"structure", "members":{ @@ -6212,6 +7418,14 @@ "OwnerInformation":{ "shape":"OwnerInformation", "documentation":"

User-provided value that will be included in any CloudWatch events raised while running tasks for these targets in this Maintenance Window.

" + }, + "Name":{ + "shape":"MaintenanceWindowName", + "documentation":"

The target name.

" + }, + "Description":{ + "shape":"MaintenanceWindowDescription", + "documentation":"

A description of the target.

" } }, "documentation":"

The target registered with the Maintenance Window.

" @@ -6239,11 +7453,11 @@ }, "TaskArn":{ "shape":"MaintenanceWindowTaskArn", - "documentation":"

The ARN of the task to execute.

" + "documentation":"

The resource that the task uses during execution. For RUN_COMMAND and AUTOMATION task types, TaskArn is the Systems Manager document name or ARN. For LAMBDA tasks, it's the function name or ARN. For STEP_FUNCTION tasks, it's the state machine ARN.

" }, "Type":{ "shape":"MaintenanceWindowTaskType", - "documentation":"

The type of task.

" + "documentation":"

The type of task. The type can be one of the following: RUN_COMMAND, AUTOMATION, LAMBDA, or STEP_FUNCTION.

" }, "Targets":{ "shape":"Targets", @@ -6255,7 +7469,7 @@ }, "Priority":{ "shape":"MaintenanceWindowTaskPriority", - "documentation":"

The priority of the task in the Maintenance Window, the lower the number the higher the priority. Tasks in a Maintenance Window are scheduled in priority order with tasks that have the same priority scheduled in parallel.

" + "documentation":"

The priority of the task in the Maintenance Window. The lower the number, the higher the priority. Tasks that have the same priority are scheduled in parallel.

" }, "LoggingInfo":{ "shape":"LoggingInfo", @@ -6272,6 +7486,14 @@ "MaxErrors":{ "shape":"MaxErrors", "documentation":"

The maximum number of errors allowed before this task stops being scheduled.

" + }, + "Name":{ + "shape":"MaintenanceWindowName", + "documentation":"

The task name.

" + }, + "Description":{ + "shape":"MaintenanceWindowDescription", + "documentation":"

A description of the task.

" } }, "documentation":"

Information about a task defined for a Maintenance Window.

" @@ -6287,6 +7509,28 @@ "min":36, "pattern":"^[0-9a-fA-F]{8}\\-[0-9a-fA-F]{4}\\-[0-9a-fA-F]{4}\\-[0-9a-fA-F]{4}\\-[0-9a-fA-F]{12}$" }, + "MaintenanceWindowTaskInvocationParameters":{ + "type":"structure", + "members":{ + "RunCommand":{ + "shape":"MaintenanceWindowRunCommandParameters", + "documentation":"

The parameters for a RUN_COMMAND task type.

" + }, + "Automation":{ + "shape":"MaintenanceWindowAutomationParameters", + "documentation":"

The parameters for an AUTOMATION task type.

" + }, + "StepFunctions":{ + "shape":"MaintenanceWindowStepFunctionsParameters", + "documentation":"

The parameters for a STEP_FUNCTION task type.

" + }, + "Lambda":{ + "shape":"MaintenanceWindowLambdaParameters", + "documentation":"

The parameters for a LAMBDA task type.

" + } + }, + "documentation":"

The parameters for task execution.
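A minimal boto3 (Python) sketch of registering a RUN_COMMAND task whose parameters are supplied through this structure; the window ID, role ARN, target ID, and command are placeholders.

import boto3

ssm = boto3.client("ssm")

ssm.register_task_with_maintenance_window(
    WindowId="mw-0c50858d01EXAMPLE",
    TaskType="RUN_COMMAND",
    TaskArn="AWS-RunShellScript",
    ServiceRoleArn="arn:aws:iam::111122223333:role/MaintenanceWindowRole",
    Targets=[{"Key": "WindowTargetIds",
              "Values": ["e32eecb2-646c-4f4b-8ed1-205fbEXAMPLE"]}],
    MaxConcurrency="2",
    MaxErrors="1",
    Priority=1,
    TaskInvocationParameters={
        "RunCommand": {
            "Parameters": {"commands": ["yum -y update"]},
            "TimeoutSeconds": 600,
        }
    },
)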

" + }, "MaintenanceWindowTaskList":{ "type":"list", "member":{"shape":"MaintenanceWindowTask"} @@ -6339,7 +7583,12 @@ }, "MaintenanceWindowTaskType":{ "type":"string", - "enum":["RUN_COMMAND"] + "enum":[ + "RUN_COMMAND", + "AUTOMATION", + "STEP_FUNCTIONS", + "LAMBDA" + ] }, "ManagedInstanceId":{ "type":"string", @@ -6406,6 +7655,20 @@ } }, "NextToken":{"type":"string"}, + "NonCompliantSummary":{ + "type":"structure", + "members":{ + "NonCompliantCount":{ + "shape":"ComplianceSummaryCount", + "documentation":"

The total number of compliance items that are not compliant.

" + }, + "SeveritySummary":{ + "shape":"SeveritySummary", + "documentation":"

A summary of the non-compliance severity by compliance type.

" + } + }, + "documentation":"

A summary of resources that are not compliant. The summary is organized according to resource type.

" + }, "NormalStringMap":{ "type":"map", "key":{"shape":"String"}, @@ -6452,6 +7715,15 @@ "Invocation" ] }, + "OperatingSystem":{ + "type":"string", + "enum":[ + "WINDOWS", + "AMAZON_LINUX", + "UBUNTU", + "REDHAT_ENTERPRISE_LINUX" + ] + }, "OwnerInformation":{ "type":"string", "max":128, @@ -6460,7 +7732,7 @@ }, "PSParameterName":{ "type":"string", - "max":1024, + "max":2048, "min":1 }, "PSParameterValue":{ @@ -6468,6 +7740,7 @@ "max":4096, "min":1 }, + "PSParameterVersion":{"type":"long"}, "Parameter":{ "type":"structure", "members":{ @@ -6482,6 +7755,10 @@ "Value":{ "shape":"PSParameterValue", "documentation":"

The parameter value.

" + }, + "Version":{ + "shape":"PSParameterVersion", + "documentation":"

The parameter version.

" } }, "documentation":"

An Amazon EC2 Systems Manager parameter in Parameter Store.

" @@ -6497,7 +7774,7 @@ "ParameterDescription":{ "type":"string", "max":1024, - "min":1 + "min":0 }, "ParameterHistory":{ "type":"structure", @@ -6533,6 +7810,10 @@ "AllowedPattern":{ "shape":"AllowedPattern", "documentation":"

Parameter names can include the following letters and symbols.

a-zA-Z0-9_.-

" + }, + "Version":{ + "shape":"PSParameterVersion", + "documentation":"

The parameter version.

" } }, "documentation":"

Information about parameter usage.

" @@ -6559,6 +7840,14 @@ "type":"list", "member":{"shape":"Parameter"} }, + "ParameterMaxVersionLimitExceeded":{ + "type":"structure", + "members":{ + "message":{"shape":"String"} + }, + "documentation":"

The parameter exceeded the maximum number of allowed versions.

", + "exception":true + }, "ParameterMetadata":{ "type":"structure", "members":{ @@ -6589,6 +7878,10 @@ "AllowedPattern":{ "shape":"AllowedPattern", "documentation":"

A parameter name can include only the following letters and symbols.

a-zA-Z0-9_.-

" + }, + "Version":{ + "shape":"PSParameterVersion", + "documentation":"

The parameter version.

" } }, "documentation":"

Metadata includes information such as the ARN of the last user and the date/time the parameter was last used.

" @@ -6681,6 +7974,14 @@ "type":"list", "member":{"shape":"ParameterValue"} }, + "ParameterVersionNotFound":{ + "type":"structure", + "members":{ + "message":{"shape":"String"} + }, + "documentation":"

The specified parameter version was not found. Verify the parameter name and version, and try again.

", + "exception":true + }, "Parameters":{ "type":"map", "key":{"shape":"ParameterName"}, @@ -6796,13 +8097,17 @@ "shape":"BaselineName", "documentation":"

The name of the patch baseline.

" }, + "OperatingSystem":{ + "shape":"OperatingSystem", + "documentation":"

Defines the operating system the patch baseline applies to. The default value is WINDOWS.

" + }, "BaselineDescription":{ "shape":"BaselineDescription", "documentation":"

The description of the patch baseline.

" }, "DefaultBaseline":{ "shape":"DefaultBaseline", - "documentation":"

Whether this is the default baseline.

" + "documentation":"

Whether this is the default baseline. Note that Systems Manager supports creating multiple default patch baselines. For example, you can create a default patch baseline for each operating system.

" } }, "documentation":"

Defines the basic information about a patch baseline.

" @@ -6834,7 +8139,7 @@ }, "KBId":{ "shape":"PatchKbNumber", - "documentation":"

The Microsoft Knowledge Base ID of the patch.

" + "documentation":"

The operating system-specific ID of the patch.

" }, "Classification":{ "shape":"PatchClassification", @@ -6849,8 +8154,8 @@ "documentation":"

The state of the patch on the instance (INSTALLED, INSTALLED_OTHER, MISSING, NOT_APPLICABLE or FAILED).

" }, "InstalledTime":{ - "shape":"PatchInstalledTime", - "documentation":"

The date/time the patch was installed on the instance.

" + "shape":"DateTime", + "documentation":"

The date/time the patch was installed on the instance. Note that not all operating systems provide this level of information.

" } }, "documentation":"

Information about the state of a patch on a particular instance as it relates to the patch baseline used to patch the instance.

" @@ -6869,9 +8174,20 @@ "FAILED" ] }, - "PatchComplianceMaxResults":{ - "type":"integer", - "max":100, + "PatchComplianceLevel":{ + "type":"string", + "enum":[ + "CRITICAL", + "HIGH", + "MEDIUM", + "LOW", + "INFORMATIONAL", + "UNSPECIFIED" + ] + }, + "PatchComplianceMaxResults":{ + "type":"integer", + "max":100, "min":10 }, "PatchContentUrl":{"type":"string"}, @@ -6921,7 +8237,10 @@ "PRODUCT", "CLASSIFICATION", "MSRC_SEVERITY", - "PATCH_ID" + "PATCH_ID", + "SECTION", + "PRIORITY", + "SEVERITY" ] }, "PatchFilterList":{ @@ -6971,7 +8290,8 @@ }, "PatchId":{ "type":"string", - "pattern":"(^KB[0-9]{1,7}$)|(^MS[0-9]{2}\\-[0-9]{3}$)" + "max":100, + "min":1 }, "PatchIdList":{ "type":"list", @@ -6981,7 +8301,6 @@ }, "PatchInstalledCount":{"type":"integer"}, "PatchInstalledOtherCount":{"type":"integer"}, - "PatchInstalledTime":{"type":"timestamp"}, "PatchKbNumber":{"type":"string"}, "PatchLanguage":{"type":"string"}, "PatchList":{ @@ -6992,8 +8311,6 @@ "PatchMsrcNumber":{"type":"string"}, "PatchMsrcSeverity":{"type":"string"}, "PatchNotApplicableCount":{"type":"integer"}, - "PatchOperationEndTime":{"type":"timestamp"}, - "PatchOperationStartTime":{"type":"timestamp"}, "PatchOperationType":{ "type":"string", "enum":[ @@ -7048,6 +8365,10 @@ "shape":"PatchFilterGroup", "documentation":"

The patch filter group that defines the criteria for the rule.

" }, + "ComplianceLevel":{ + "shape":"PatchComplianceLevel", + "documentation":"

A compliance severity level for all approved patches in a patch baseline. Valid compliance severity levels include the following: Unspecified, Critical, High, Medium, Low, and Informational.

" + }, "ApproveAfterDays":{ "shape":"ApproveAfterDays", "documentation":"

The number of days to wait after the release date of each patch matched by the rule before the patch is marked as approved in the patch baseline.

", @@ -7081,6 +8402,10 @@ "shape":"PatchDeploymentStatus", "documentation":"

The approval status of a patch (APPROVED, PENDING_APPROVAL, EXPLICIT_APPROVED, EXPLICIT_REJECTED).

" }, + "ComplianceLevel":{ + "shape":"PatchComplianceLevel", + "documentation":"

The compliance severity level for a patch.

" + }, "ApprovalDate":{ "shape":"DateTime", "documentation":"

The date the patch was approved (or will be approved if the status is PENDING_APPROVAL).

" @@ -7107,9 +8432,48 @@ }, "PlatformTypeList":{ "type":"list", - "member":{ - "shape":"PlatformType", - "locationName":"PlatformType" + "member":{"shape":"PlatformType"} + }, + "Product":{"type":"string"}, + "PutComplianceItemsRequest":{ + "type":"structure", + "required":[ + "ResourceId", + "ResourceType", + "ComplianceType", + "ExecutionSummary", + "Items" + ], + "members":{ + "ResourceId":{ + "shape":"ComplianceResourceId", + "documentation":"

Specify an ID for this resource. For a managed instance, this is the instance ID.

" + }, + "ResourceType":{ + "shape":"ComplianceResourceType", + "documentation":"

Specify the type of resource. ManagedInstance is currently the only supported resource type.

" + }, + "ComplianceType":{ + "shape":"ComplianceTypeName", + "documentation":"

Specify the compliance type. For example, specify Association (for a State Manager association), Patch, or Custom:string.

" + }, + "ExecutionSummary":{ + "shape":"ComplianceExecutionSummary", + "documentation":"

A summary of the call execution that includes an execution ID, the type of execution (for example, Command), and the date/time of the execution using a datetime object that is saved in the following format: yyyy-MM-dd'T'HH:mm:ss'Z'.

" + }, + "Items":{ + "shape":"ComplianceItemEntryList", + "documentation":"

Information about the compliance as defined by the resource type. For example, for a patch compliance type, Items includes information about the PatchSeverity, Classification, etc.

" + }, + "ItemContentHash":{ + "shape":"ComplianceItemContentHash", + "documentation":"

MD5 or SHA-256 content hash. The content hash is used to determine if existing information should be overwritten or ignored. If the content hashes match, the request to put compliance information is ignored.

" + } + } + }, + "PutComplianceItemsResult":{ + "type":"structure", + "members":{ } }, "PutInventoryRequest":{ @@ -7144,11 +8508,11 @@ "members":{ "Name":{ "shape":"PSParameterName", - "documentation":"

The name of the parameter that you want to add to the system.

" + "documentation":"

The fully qualified name of the parameter that you want to add to the system. The fully qualified name includes the complete hierarchy of the parameter path and name. For example: /Dev/DBServer/MySQL/db-string13

The maximum length constraint listed below includes capacity for additional system attributes that are not part of the name. The maximum length for the fully qualified parameter name is 1011 characters.

" }, "Description":{ "shape":"ParameterDescription", - "documentation":"

Information about the parameter that you want to add to the system

" + "documentation":"

Information about the parameter that you want to add to the system.

" }, "Value":{ "shape":"PSParameterValue", @@ -7176,6 +8540,10 @@ "PutParameterResult":{ "type":"structure", "members":{ + "Version":{ + "shape":"PSParameterVersion", + "documentation":"

The new version number of a parameter. If you edit a parameter value, Parameter Store automatically creates a new version and assigns this new version a unique ID. You can reference a parameter version ID in API actions or in Systems Manager documents (SSM documents). By default, if you don't specify a specific version, the system returns the latest parameter value when a parameter is called.

" + } } }, "RegisterDefaultPatchBaselineRequest":{ @@ -7251,6 +8619,14 @@ "shape":"OwnerInformation", "documentation":"

User-provided value that will be included in any CloudWatch events raised while running tasks for these targets in this Maintenance Window.

" }, + "Name":{ + "shape":"MaintenanceWindowName", + "documentation":"

An optional name for the target.

" + }, + "Description":{ + "shape":"MaintenanceWindowDescription", + "documentation":"

An optional description for the target.

" + }, "ClientToken":{ "shape":"ClientToken", "documentation":"

User-provided idempotency token.

", @@ -7303,6 +8679,10 @@ "shape":"MaintenanceWindowTaskParameters", "documentation":"

The parameters that should be passed to the task when it is executed.

" }, + "TaskInvocationParameters":{ + "shape":"MaintenanceWindowTaskInvocationParameters", + "documentation":"

The parameters that the task should use during execution. Populate only the fields that match the task type. All other fields should be empty.

" + }, "Priority":{ "shape":"MaintenanceWindowTaskPriority", "documentation":"

The priority of the task in the Maintenance Window, the lower the number the higher the priority. Tasks in a Maintenance Window are scheduled in priority order with tasks that have the same priority scheduled in parallel.

", @@ -7320,6 +8700,14 @@ "shape":"LoggingInfo", "documentation":"

A structure containing information about an Amazon S3 bucket to write instance-level logs to.

" }, + "Name":{ + "shape":"MaintenanceWindowName", + "documentation":"

An optional name for the task.

" + }, + "Description":{ + "shape":"MaintenanceWindowDescription", + "documentation":"

An optional description for the task.

" + }, "ClientToken":{ "shape":"ClientToken", "documentation":"

User-provided idempotency token.

", @@ -7373,6 +8761,176 @@ "members":{ } }, + "ResourceComplianceSummaryItem":{ + "type":"structure", + "members":{ + "ComplianceType":{ + "shape":"ComplianceTypeName", + "documentation":"

The compliance type.

" + }, + "ResourceType":{ + "shape":"ComplianceResourceType", + "documentation":"

The resource type.

" + }, + "ResourceId":{ + "shape":"ComplianceResourceId", + "documentation":"

The resource ID.

" + }, + "Status":{ + "shape":"ComplianceStatus", + "documentation":"

The compliance status for the resource.

" + }, + "OverallSeverity":{ + "shape":"ComplianceSeverity", + "documentation":"

The highest severity item found for the resource. The resource is compliant for this item.

" + }, + "ExecutionSummary":{ + "shape":"ComplianceExecutionSummary", + "documentation":"

Information about the execution.

" + }, + "CompliantSummary":{ + "shape":"CompliantSummary", + "documentation":"

A list of items that are compliant for the resource.

" + }, + "NonCompliantSummary":{ + "shape":"NonCompliantSummary", + "documentation":"

A list of items that aren't compliant for the resource.

" + } + }, + "documentation":"

Compliance summary information for a specific resource.

" + }, + "ResourceComplianceSummaryItemList":{ + "type":"list", + "member":{"shape":"ResourceComplianceSummaryItem"} + }, + "ResourceDataSyncAWSKMSKeyARN":{ + "type":"string", + "max":512, + "min":1, + "pattern":"arn:.*" + }, + "ResourceDataSyncAlreadyExistsException":{ + "type":"structure", + "members":{ + "SyncName":{"shape":"ResourceDataSyncName"} + }, + "documentation":"

A sync configuration with the same name already exists.

", + "exception":true + }, + "ResourceDataSyncCountExceededException":{ + "type":"structure", + "members":{ + "Message":{"shape":"String"} + }, + "documentation":"

You have exceeded the allowed maximum sync configurations.

", + "exception":true + }, + "ResourceDataSyncCreatedTime":{"type":"timestamp"}, + "ResourceDataSyncInvalidConfigurationException":{ + "type":"structure", + "members":{ + "Message":{"shape":"String"} + }, + "documentation":"

The specified sync configuration is invalid.

", + "exception":true + }, + "ResourceDataSyncItem":{ + "type":"structure", + "members":{ + "SyncName":{ + "shape":"ResourceDataSyncName", + "documentation":"

The name of the Resource Data Sync.

" + }, + "S3Destination":{ + "shape":"ResourceDataSyncS3Destination", + "documentation":"

Configuration information for the target Amazon S3 bucket.

" + }, + "LastSyncTime":{ + "shape":"LastResourceDataSyncTime", + "documentation":"

The last time the configuration attempted to sync (UTC).

" + }, + "LastSuccessfulSyncTime":{ + "shape":"LastSuccessfulResourceDataSyncTime", + "documentation":"

The last time the sync operations returned a status of SUCCESSFUL (UTC).

" + }, + "LastStatus":{ + "shape":"LastResourceDataSyncStatus", + "documentation":"

The status reported by the last sync.

" + }, + "SyncCreatedTime":{ + "shape":"ResourceDataSyncCreatedTime", + "documentation":"

The date and time the configuration was created (UTC).

" + } + }, + "documentation":"

Information about a Resource Data Sync configuration, including its current status and last successful sync.

" + }, + "ResourceDataSyncItemList":{ + "type":"list", + "member":{"shape":"ResourceDataSyncItem"} + }, + "ResourceDataSyncName":{ + "type":"string", + "max":64, + "min":1 + }, + "ResourceDataSyncNotFoundException":{ + "type":"structure", + "members":{ + "SyncName":{"shape":"ResourceDataSyncName"} + }, + "documentation":"

The specified sync name was not found.

", + "exception":true + }, + "ResourceDataSyncS3BucketName":{ + "type":"string", + "max":2048, + "min":1 + }, + "ResourceDataSyncS3Destination":{ + "type":"structure", + "required":[ + "BucketName", + "SyncFormat", + "Region" + ], + "members":{ + "BucketName":{ + "shape":"ResourceDataSyncS3BucketName", + "documentation":"

The name of the Amazon S3 bucket where the aggregated data is stored.

" + }, + "Prefix":{ + "shape":"ResourceDataSyncS3Prefix", + "documentation":"

An Amazon S3 prefix for the bucket.

" + }, + "SyncFormat":{ + "shape":"ResourceDataSyncS3Format", + "documentation":"

A supported sync format. The following format is currently supported: JsonSerDe

" + }, + "Region":{ + "shape":"ResourceDataSyncS3Region", + "documentation":"

The AWS Region with the Amazon S3 bucket targeted by the Resource Data Sync.

" + }, + "AWSKMSKeyARN":{ + "shape":"ResourceDataSyncAWSKMSKeyARN", + "documentation":"

The ARN of an encryption key for a destination in Amazon S3. Must belong to the same region as the destination Amazon S3 bucket.

" + } + }, + "documentation":"

Information about the target Amazon S3 bucket for the Resource Data Sync.

" + }, + "ResourceDataSyncS3Format":{ + "type":"string", + "enum":["JsonSerDe"] + }, + "ResourceDataSyncS3Prefix":{ + "type":"string", + "max":256, + "min":1 + }, + "ResourceDataSyncS3Region":{ + "type":"string", + "max":64, + "min":1 + }, "ResourceId":{"type":"string"}, "ResourceInUseException":{ "type":"structure", @@ -7401,9 +8959,11 @@ "ResourceTypeForTagging":{ "type":"string", "enum":[ + "Document", "ManagedInstance", "MaintenanceWindow", - "Parameter" + "Parameter", + "PatchBaseline" ] }, "ResponseCode":{"type":"integer"}, @@ -7420,10 +8980,7 @@ }, "ResultAttributeList":{ "type":"list", - "member":{ - "shape":"ResultAttribute", - "locationName":"ResultAttribute" - }, + "member":{"shape":"ResultAttribute"}, "max":1, "min":1 }, @@ -7474,6 +9031,32 @@ "max":256, "min":1 }, + "SendAutomationSignalRequest":{ + "type":"structure", + "required":[ + "AutomationExecutionId", + "SignalType" + ], + "members":{ + "AutomationExecutionId":{ + "shape":"AutomationExecutionId", + "documentation":"

The unique identifier for an existing Automation execution that you want to send the signal to.

" + }, + "SignalType":{ + "shape":"SignalType", + "documentation":"

The type of signal. Valid signal types include the following: Approve and Reject

" + }, + "Payload":{ + "shape":"AutomationParameterMap", + "documentation":"

The data sent with the signal. The data schema depends on the type of signal used in the request.

" + } + } + }, + "SendAutomationSignalResult":{ + "type":"structure", + "members":{ + } + }, "SendCommandRequest":{ "type":"structure", "required":["DocumentName"], @@ -7551,6 +9134,43 @@ } }, "ServiceRole":{"type":"string"}, + "SeveritySummary":{ + "type":"structure", + "members":{ + "CriticalCount":{ + "shape":"ComplianceSummaryCount", + "documentation":"

The total number of resources or compliance items that have a severity level of critical. Critical severity is determined by the organization that published the compliance items.

" + }, + "HighCount":{ + "shape":"ComplianceSummaryCount", + "documentation":"

The total number of resources or compliance items that have a severity level of high. High severity is determined by the organization that published the compliance items.

" + }, + "MediumCount":{ + "shape":"ComplianceSummaryCount", + "documentation":"

The total number of resources or compliance items that have a severity level of medium. Medium severity is determined by the organization that published the compliance items.

" + }, + "LowCount":{ + "shape":"ComplianceSummaryCount", + "documentation":"

The total number of resources or compliance items that have a severity level of low. Low severity is determined by the organization that published the compliance items.

" + }, + "InformationalCount":{ + "shape":"ComplianceSummaryCount", + "documentation":"

The total number of resources or compliance items that have a severity level of informational. Informational severity is determined by the organization that published the compliance items.

" + }, + "UnspecifiedCount":{ + "shape":"ComplianceSummaryCount", + "documentation":"

The total number of resources or compliance items that have a severity level of unspecified. Unspecified severity is determined by the organization that published the compliance items.

" + } + }, + "documentation":"

The number of managed instances found for each patch severity level defined in the request filter.

" + }, + "SignalType":{ + "type":"string", + "enum":[ + "Approve", + "Reject" + ] + }, "SnapshotDownloadUrl":{"type":"string"}, "SnapshotId":{ "type":"string", @@ -7582,6 +9202,10 @@ "Parameters":{ "shape":"AutomationParameterMap", "documentation":"

A key-value map of execution parameters, which match the declared parameters in the Automation document.

" + }, + "ClientToken":{ + "shape":"IdempotencyToken", + "documentation":"

User-provided idempotency token. The token must be unique, is case insensitive, enforces the UUID format, and can't be reused.

" } } }, @@ -7696,6 +9320,14 @@ "type":"list", "member":{"shape":"String"} }, + "SubTypeCountLimitExceededException":{ + "type":"structure", + "members":{ + "Message":{"shape":"String"} + }, + "documentation":"

The sub-type count exceeded the limit for the inventory type.

", + "exception":true + }, "Tag":{ "type":"structure", "required":[ @@ -7712,7 +9344,7 @@ "documentation":"

The value of the tag.

" } }, - "documentation":"

Metadata that you assign to your managed instances. Tags enable you to categorize your managed instances in different ways, for example, by purpose, owner, or environment.

" + "documentation":"

Metadata that you assign to your AWS resources. Tags enable you to categorize your resources in different ways, for example, by purpose, owner, or environment. In Systems Manager, you can apply tags to documents, managed instances, Maintenance Windows, Parameter Store parameters, and patch baselines.

" }, "TagKey":{ "type":"string", @@ -7745,6 +9377,14 @@ "documentation":"

An array of search criteria that targets instances using a Key,Value combination that you specify. Targets is required if you don't provide one or more instance IDs in the call.

" }, "TargetCount":{"type":"integer"}, + "TargetInUseException":{ + "type":"structure", + "members":{ + "Message":{"shape":"String"} + }, + "documentation":"

You specified the Safe option for the DeregisterTargetFromMaintenanceWindow operation, but the target is still referenced in a task.

", + "exception":true + }, "TargetKey":{ "type":"string", "max":128, @@ -7792,6 +9432,15 @@ "documentation":"

The size of inventory data has exceeded the total size limit for the resource.

", "exception":true }, + "UnsupportedInventoryItemContextException":{ + "type":"structure", + "members":{ + "TypeName":{"shape":"InventoryItemTypeName"}, + "Message":{"shape":"String"} + }, + "documentation":"

The Context attribute that you specified for the InventoryItem is not allowed for this inventory type. You can only use the Context attribute with inventory types like AWS:ComplianceItem.

", + "exception":true + }, "UnsupportedInventorySchemaVersionException":{ "type":"structure", "members":{ @@ -7800,6 +9449,14 @@ "documentation":"

Inventory item type schema version has to match supported versions in the service. Check output of GetInventorySchema to see the available schema version for each type.

", "exception":true }, + "UnsupportedOperatingSystem":{ + "type":"structure", + "members":{ + "Message":{"shape":"String"} + }, + "documentation":"

The operating system you specified is not supported, or the operation is not supported for the operating system. Valid operating systems include: Windows, AmazonLinux, RedhatEnterpriseLinux, and Ubuntu.

", + "exception":true + }, "UnsupportedParameterType":{ "type":"structure", "members":{ @@ -7847,6 +9504,14 @@ "Targets":{ "shape":"Targets", "documentation":"

The targets of the association.

" + }, + "AssociationName":{ + "shape":"AssociationName", + "documentation":"

The name of the association that you want to update.

" + }, + "AssociationVersion":{ + "shape":"AssociationVersion", + "documentation":"

This parameter is provided for concurrency control purposes. You must specify the latest association version in the service. If you want to ensure that this request succeeds, either specify $LATEST, or omit this parameter.

" } } }, @@ -7869,7 +9534,7 @@ "members":{ "Name":{ "shape":"DocumentName", - "documentation":"

The name of the SSM document.

" + "documentation":"

The name of the Systems Manager document.

" }, "InstanceId":{ "shape":"InstanceId", @@ -7958,6 +9623,10 @@ "shape":"MaintenanceWindowName", "documentation":"

The name of the Maintenance Window.

" }, + "Description":{ + "shape":"MaintenanceWindowDescription", + "documentation":"

An optional description for the update request.

" + }, "Schedule":{ "shape":"MaintenanceWindowSchedule", "documentation":"

The schedule of the Maintenance Window in the form of a cron or rate expression.

" @@ -7981,6 +9650,11 @@ "shape":"MaintenanceWindowEnabled", "documentation":"

Whether the Maintenance Window is enabled.

", "box":true + }, + "Replace":{ + "shape":"Boolean", + "documentation":"

If True, then all fields that are required by the CreateMaintenanceWindow action are also required for this API request. Optional fields that are not specified are set to null.

", + "box":true } } }, @@ -7995,6 +9669,10 @@ "shape":"MaintenanceWindowName", "documentation":"

The name of the Maintenance Window.

" }, + "Description":{ + "shape":"MaintenanceWindowDescription", + "documentation":"

An optional description of the update.

" + }, "Schedule":{ "shape":"MaintenanceWindowSchedule", "documentation":"

The schedule of the Maintenance Window in the form of a cron or rate expression.

" @@ -8017,6 +9695,197 @@ } } }, + "UpdateMaintenanceWindowTargetRequest":{ + "type":"structure", + "required":[ + "WindowId", + "WindowTargetId" + ], + "members":{ + "WindowId":{ + "shape":"MaintenanceWindowId", + "documentation":"

The Maintenance Window ID with which to modify the target.

" + }, + "WindowTargetId":{ + "shape":"MaintenanceWindowTargetId", + "documentation":"

The target ID to modify.

" + }, + "Targets":{ + "shape":"Targets", + "documentation":"

The targets to add or replace.

" + }, + "OwnerInformation":{ + "shape":"OwnerInformation", + "documentation":"

User-provided value that will be included in any CloudWatch events raised while running tasks for these targets in this Maintenance Window.

" + }, + "Name":{ + "shape":"MaintenanceWindowName", + "documentation":"

A name for the update.

" + }, + "Description":{ + "shape":"MaintenanceWindowDescription", + "documentation":"

An optional description for the update.

" + }, + "Replace":{ + "shape":"Boolean", + "documentation":"

If True, then all fields that are required by the RegisterTargetWithMaintenanceWindow action are also required for this API request. Optional fields that are not specified are set to null.

", + "box":true + } + } + }, + "UpdateMaintenanceWindowTargetResult":{ + "type":"structure", + "members":{ + "WindowId":{ + "shape":"MaintenanceWindowId", + "documentation":"

The Maintenance Window ID specified in the update request.

" + }, + "WindowTargetId":{ + "shape":"MaintenanceWindowTargetId", + "documentation":"

The target ID specified in the update request.

" + }, + "Targets":{ + "shape":"Targets", + "documentation":"

The updated targets.

" + }, + "OwnerInformation":{ + "shape":"OwnerInformation", + "documentation":"

The updated owner.

" + }, + "Name":{ + "shape":"MaintenanceWindowName", + "documentation":"

The updated name.

" + }, + "Description":{ + "shape":"MaintenanceWindowDescription", + "documentation":"

The updated description.

" + } + } + }, + "UpdateMaintenanceWindowTaskRequest":{ + "type":"structure", + "required":[ + "WindowId", + "WindowTaskId" + ], + "members":{ + "WindowId":{ + "shape":"MaintenanceWindowId", + "documentation":"

The Maintenance Window ID that contains the task to modify.

" + }, + "WindowTaskId":{ + "shape":"MaintenanceWindowTaskId", + "documentation":"

The task ID to modify.

" + }, + "Targets":{ + "shape":"Targets", + "documentation":"

The targets (either instances or tags) to modify. Instances are specified using Key=instanceids,Values=instanceID_1,instanceID_2. Tags are specified using Key=tag_name,Values=tag_value.

" + }, + "TaskArn":{ + "shape":"MaintenanceWindowTaskArn", + "documentation":"

The task ARN to modify.

" + }, + "ServiceRoleArn":{ + "shape":"ServiceRole", + "documentation":"

The IAM service role ARN to modify. The system assumes this role during task execution.

" + }, + "TaskParameters":{ + "shape":"MaintenanceWindowTaskParameters", + "documentation":"

The parameters to modify. The map has the following format:

Key: string, between 1 and 255 characters

Value: an array of strings, each string is between 1 and 255 characters

" + }, + "TaskInvocationParameters":{ + "shape":"MaintenanceWindowTaskInvocationParameters", + "documentation":"

The parameters that the task should use during execution. Populate only the fields that match the task type. All other fields should be empty.

" + }, + "Priority":{ + "shape":"MaintenanceWindowTaskPriority", + "documentation":"

The new task priority to specify. The lower the number, the higher the priority. Tasks that have the same priority are scheduled in parallel.

", + "box":true + }, + "MaxConcurrency":{ + "shape":"MaxConcurrency", + "documentation":"

The new MaxConcurrency value you want to specify. MaxConcurrency is the number of targets that are allowed to run this task in parallel.

" + }, + "MaxErrors":{ + "shape":"MaxErrors", + "documentation":"

The new MaxErrors value to specify. MaxErrors is the maximum number of errors that are allowed before the task stops being scheduled.

" + }, + "LoggingInfo":{ + "shape":"LoggingInfo", + "documentation":"

The new logging location in Amazon S3 to specify.

" + }, + "Name":{ + "shape":"MaintenanceWindowName", + "documentation":"

The new task name to specify.

" + }, + "Description":{ + "shape":"MaintenanceWindowDescription", + "documentation":"

The new task description to specify.

" + }, + "Replace":{ + "shape":"Boolean", + "documentation":"

If True, then all fields that are required by the RegisterTaskWithMaintenanceWindow action are also required for this API request. Optional fields that are not specified are set to null.

", + "box":true + } + } + }, + "UpdateMaintenanceWindowTaskResult":{ + "type":"structure", + "members":{ + "WindowId":{ + "shape":"MaintenanceWindowId", + "documentation":"

The ID of the Maintenance Window that was updated.

" + }, + "WindowTaskId":{ + "shape":"MaintenanceWindowTaskId", + "documentation":"

The task ID of the Maintenance Window that was updated.

" + }, + "Targets":{ + "shape":"Targets", + "documentation":"

The updated target values.

" + }, + "TaskArn":{ + "shape":"MaintenanceWindowTaskArn", + "documentation":"

The updated task ARN value.

" + }, + "ServiceRoleArn":{ + "shape":"ServiceRole", + "documentation":"

The updated service role ARN value.

" + }, + "TaskParameters":{ + "shape":"MaintenanceWindowTaskParameters", + "documentation":"

The updated parameter values.

" + }, + "TaskInvocationParameters":{ + "shape":"MaintenanceWindowTaskInvocationParameters", + "documentation":"

The updated task invocation parameter values.

" + }, + "Priority":{ + "shape":"MaintenanceWindowTaskPriority", + "documentation":"

The updated priority value.

" + }, + "MaxConcurrency":{ + "shape":"MaxConcurrency", + "documentation":"

The updated MaxConcurrency value.

" + }, + "MaxErrors":{ + "shape":"MaxErrors", + "documentation":"

The updated MaxErrors value.

" + }, + "LoggingInfo":{ + "shape":"LoggingInfo", + "documentation":"

The updated logging information in Amazon S3.

" + }, + "Name":{ + "shape":"MaintenanceWindowName", + "documentation":"

The updated task name.

" + }, + "Description":{ + "shape":"MaintenanceWindowDescription", + "documentation":"

The updated task description.

" + } + } + }, "UpdateManagedInstanceRoleRequest":{ "type":"structure", "required":[ @@ -8063,6 +9932,10 @@ "shape":"PatchIdList", "documentation":"

A list of explicitly approved patches for the baseline.

" }, + "ApprovedPatchesComplianceLevel":{ + "shape":"PatchComplianceLevel", + "documentation":"

Assigns a new compliance severity level to an existing patch baseline.

" + }, "RejectedPatches":{ "shape":"PatchIdList", "documentation":"

A list of explicitly rejected patches for the baseline.

" @@ -8084,6 +9957,10 @@ "shape":"BaselineName", "documentation":"

The name of the patch baseline.

" }, + "OperatingSystem":{ + "shape":"OperatingSystem", + "documentation":"

The operating system rule used by the updated patch baseline.

" + }, "GlobalFilters":{ "shape":"PatchFilterGroup", "documentation":"

A set of global filters used to exclude patches from the baseline.

" @@ -8096,6 +9973,10 @@ "shape":"PatchIdList", "documentation":"

A list of explicitly approved patches for the baseline.

" }, + "ApprovedPatchesComplianceLevel":{ + "shape":"PatchComplianceLevel", + "documentation":"

The compliance severity level assigned to the patch baseline after the update completed.

" + }, "RejectedPatches":{ "shape":"PatchIdList", "documentation":"

A list of explicitly rejected patches for the baseline.

" @@ -8120,5 +10001,5 @@ "pattern":"^[0-9]{1,6}(\\.[0-9]{1,6}){2,3}$" } }, - "documentation":"Amazon EC2 Systems Manager

Amazon EC2 Systems Manager is a collection of capabilities that helps you automate management tasks such as collecting system inventory, applying operating system (OS) patches, automating the creation of Amazon Machine Images (AMIs), and configuring operating systems (OSs) and applications at scale. Systems Manager lets you remotely and securely manage the configuration of your managed instances. A managed instance is any Amazon EC2 instance or on-premises machine in your hybrid environment that has been configured for Systems Manager.

This reference is intended to be used with the Amazon EC2 Systems Manager User Guide.

To get started, verify prerequisites and configure managed instances. For more information, see Systems Manager Prerequisites.

" + "documentation":"Amazon EC2 Systems Manager

Amazon EC2 Systems Manager is a collection of capabilities that helps you automate management tasks such as collecting system inventory, applying operating system (OS) patches, automating the creation of Amazon Machine Images (AMIs), and configuring operating systems (OSs) and applications at scale. Systems Manager lets you remotely and securely manage the configuration of your managed instances. A managed instance is any Amazon EC2 instance or on-premises machine in your hybrid environment that has been configured for Systems Manager.

This reference is intended to be used with the Amazon EC2 Systems Manager User Guide.

To get started, verify prerequisites and configure managed instances. For more information, see Systems Manager Prerequisites.

For information about other API actions you can perform on Amazon EC2 instances, see the Amazon EC2 API Reference. For information about how to use a Query API, see Making API Requests.

" } diff --git a/services/stepfunctions/src/main/resources/codegen-resources/service-2.json b/services/stepfunctions/src/main/resources/codegen-resources/service-2.json index e9d32b3a8d47..59933d79dcb8 100755 --- a/services/stepfunctions/src/main/resources/codegen-resources/service-2.json +++ b/services/stepfunctions/src/main/resources/codegen-resources/service-2.json @@ -7,6 +7,7 @@ "protocol":"json", "serviceAbbreviation":"AWS SFN", "serviceFullName":"AWS Step Functions", + "serviceId":"SFN", "signatureVersion":"v4", "targetPrefix":"AWSStepFunctions", "uid":"states-2016-11-23" @@ -24,7 +25,7 @@ {"shape":"ActivityLimitExceeded"}, {"shape":"InvalidName"} ], - "documentation":"

Creates an activity.

", + "documentation":"

Creates an activity. An activity is a task which you write in any programming language and host on any machine which has access to AWS Step Functions. Activities must poll Step Functions using the GetActivityTask API action and respond using SendTask* API actions. This function lets Step Functions know the existence of your activity and returns an identifier for use in a state machine and when polling from the activity.

", "idempotent":true }, "CreateStateMachine":{ @@ -43,7 +44,7 @@ {"shape":"StateMachineDeleting"}, {"shape":"StateMachineLimitExceeded"} ], - "documentation":"

Creates a state machine.

", + "documentation":"

Creates a state machine. A state machine consists of a collection of states that can do work (Task states), determine to which states to transition next (Choice states), stop an execution with an error (Fail states), and so on. State machines are specified using a JSON-based, structured language.

", "idempotent":true }, "DeleteActivity":{ @@ -70,7 +71,7 @@ "errors":[ {"shape":"InvalidArn"} ], - "documentation":"

Deletes a state machine. This is an asynchronous operation-- it sets the state machine's status to \"DELETING\" and begins the delete process.

" + "documentation":"

Deletes a state machine. This is an asynchronous operation: It sets the state machine's status to DELETING and begins the deletion process. Each state machine execution is deleted the next time it makes a state transition.

The state machine itself is deleted after all executions are completed or deleted.

" }, "DescribeActivity":{ "name":"DescribeActivity", @@ -114,6 +115,20 @@ ], "documentation":"

Describes a state machine.

" }, + "DescribeStateMachineForExecution":{ + "name":"DescribeStateMachineForExecution", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"DescribeStateMachineForExecutionInput"}, + "output":{"shape":"DescribeStateMachineForExecutionOutput"}, + "errors":[ + {"shape":"ExecutionDoesNotExist"}, + {"shape":"InvalidArn"} + ], + "documentation":"

Describes the state machine associated with a specific execution.

" + }, "GetActivityTask":{ "name":"GetActivityTask", "http":{ @@ -127,7 +142,7 @@ {"shape":"ActivityWorkerLimitExceeded"}, {"shape":"InvalidArn"} ], - "documentation":"

Used by workers to retrieve a task (with the specified activity ARN) scheduled for execution by a running state machine. This initiates a long poll, where the service holds the HTTP connection open and responds as soon as a task becomes available (i.e. an execution of a task of this type is needed.) The maximum time the service holds on to the request before responding is 60 seconds. If no task is available within 60 seconds, the poll will return an empty result, that is, the taskToken returned is an empty string.

Workers should set their client side socket timeout to at least 65 seconds (5 seconds higher than the maximum time the service may hold the poll request).

" + "documentation":"

Used by workers to retrieve a task (with the specified activity ARN) which has been scheduled for execution by a running state machine. This initiates a long poll, where the service holds the HTTP connection open and responds as soon as a task becomes available (i.e. an execution of a task of this type is needed.) The maximum time the service holds on to the request before responding is 60 seconds. If no task is available within 60 seconds, the poll returns a taskToken with a null string.

Workers should set their client side socket timeout to at least 65 seconds (5 seconds higher than the maximum time the service may hold the poll request).

" }, "GetExecutionHistory":{ "name":"GetExecutionHistory", @@ -142,7 +157,7 @@ {"shape":"InvalidArn"}, {"shape":"InvalidToken"} ], - "documentation":"

Returns the history of the specified execution as a list of events. By default, the results are returned in ascending order of the timeStamp of the events. Use the reverseOrder parameter to get the latest events first. The results may be split into multiple pages. To retrieve subsequent pages, make the call again using the nextToken returned by the previous call.

" + "documentation":"

Returns the history of the specified execution as a list of events. By default, the results are returned in ascending order of the timeStamp of the events. Use the reverseOrder parameter to get the latest events first.

If a nextToken is returned by a previous call, there are more results available. To retrieve the next page of results, make the call again using the returned token in nextToken. Keep all other arguments unchanged.

" }, "ListActivities":{ "name":"ListActivities", @@ -155,7 +170,7 @@ "errors":[ {"shape":"InvalidToken"} ], - "documentation":"

Lists the existing activities. The results may be split into multiple pages. To retrieve subsequent pages, make the call again using the nextToken returned by the previous call.

" + "documentation":"

Lists the existing activities.

If a nextToken is returned by a previous call, there are more results available. To retrieve the next page of results, make the call again using the returned token in nextToken. Keep all other arguments unchanged.

" }, "ListExecutions":{ "name":"ListExecutions", @@ -170,7 +185,7 @@ {"shape":"InvalidToken"}, {"shape":"StateMachineDoesNotExist"} ], - "documentation":"

Lists the executions of a state machine that meet the filtering criteria. The results may be split into multiple pages. To retrieve subsequent pages, make the call again using the nextToken returned by the previous call.

" + "documentation":"

Lists the executions of a state machine that meet the filtering criteria.

If a nextToken is returned by a previous call, there are more results available. To retrieve the next page of results, make the call again using the returned token in nextToken. Keep all other arguments unchanged.

" }, "ListStateMachines":{ "name":"ListStateMachines", @@ -183,7 +198,7 @@ "errors":[ {"shape":"InvalidToken"} ], - "documentation":"

Lists the existing state machines. The results may be split into multiple pages. To retrieve subsequent pages, make the call again using the nextToken returned by the previous call.

" + "documentation":"

Lists the existing state machines.

If a nextToken is returned by a previous call, there are more results available. To retrieve the next page of results, make the call again using the returned token in nextToken. Keep all other arguments unchanged.

" }, "SendTaskFailure":{ "name":"SendTaskFailure", @@ -213,7 +228,7 @@ {"shape":"InvalidToken"}, {"shape":"TaskTimedOut"} ], - "documentation":"

Used by workers to report to the service that the task represented by the specified taskToken is still making progress. This action resets the Heartbeat clock. The Heartbeat threshold is specified in the state machine's Amazon States Language definition. This action does not in itself create an event in the execution history. However, if the task times out, the execution history will contain an ActivityTimedOut event.

The Timeout of a task, defined in the state machine's Amazon States Language definition, is its maximum allowed duration, regardless of the number of SendTaskHeartbeat requests received.

This operation is only useful for long-lived tasks to report the liveliness of the task.

" + "documentation":"

Used by workers to report to the service that the task represented by the specified taskToken is still making progress. This action resets the Heartbeat clock. The Heartbeat threshold is specified in the state machine's Amazon States Language definition. This action does not in itself create an event in the execution history. However, if the task times out, the execution history contains an ActivityTimedOut event.

The Timeout of a task, defined in the state machine's Amazon States Language definition, is its maximum allowed duration, regardless of the number of SendTaskHeartbeat requests received.

This operation is only useful for long-lived tasks to report the liveliness of the task.

" }, "SendTaskSuccess":{ "name":"SendTaskSuccess", @@ -264,6 +279,24 @@ {"shape":"InvalidArn"} ], "documentation":"

Stops an execution.

" + }, + "UpdateStateMachine":{ + "name":"UpdateStateMachine", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"UpdateStateMachineInput"}, + "output":{"shape":"UpdateStateMachineOutput"}, + "errors":[ + {"shape":"InvalidArn"}, + {"shape":"InvalidDefinition"}, + {"shape":"MissingRequiredParameter"}, + {"shape":"StateMachineDeleting"}, + {"shape":"StateMachineDoesNotExist"} + ], + "documentation":"

Updates an existing state machine by modifying its definition and/or roleArn. Running executions will continue to use the previous definition and roleArn.

All StartExecution calls within a few seconds will use the updated definition and roleArn. Executions started immediately after calling UpdateStateMachine may use the previous state machine definition and roleArn. You must include at least one of definition or roleArn or you will receive a MissingRequiredParameter error.

", + "idempotent":true } }, "shapes":{ @@ -286,7 +319,8 @@ "shape":"Cause", "documentation":"

A more detailed explanation of the cause of the failure.

" } - } + }, + "documentation":"

Contains details about an activity which failed during an execution.

" }, "ActivityLimitExceeded":{ "type":"structure", @@ -314,13 +348,14 @@ }, "name":{ "shape":"Name", - "documentation":"

The name of the activity.

" + "documentation":"

The name of the activity.

A name must not contain:

" }, "creationDate":{ "shape":"Timestamp", - "documentation":"

The date the activity was created.

" + "documentation":"

The date the activity is created.

" } - } + }, + "documentation":"

Contains details about an activity.

" }, "ActivityScheduleFailedEventDetails":{ "type":"structure", @@ -333,7 +368,8 @@ "shape":"Cause", "documentation":"

A more detailed explanation of the cause of the failure.

" } - } + }, + "documentation":"

Contains details about an activity schedule failure which occurred during an execution.

" }, "ActivityScheduledEventDetails":{ "type":"structure", @@ -357,16 +393,18 @@ "documentation":"

The maximum allowed duration between two heartbeats for the activity task.

", "box":true } - } + }, + "documentation":"

Contains details about an activity scheduled during an execution.

" }, "ActivityStartedEventDetails":{ "type":"structure", "members":{ "workerName":{ "shape":"Identity", - "documentation":"

The name of the worker that the task was assigned to. These names are provided by the workers when calling GetActivityTask.

" + "documentation":"

The name of the worker that the task is assigned to. These names are provided by the workers when calling GetActivityTask.

" } - } + }, + "documentation":"

Contains details about the start of an activity during an execution.

" }, "ActivitySucceededEventDetails":{ "type":"structure", @@ -375,7 +413,8 @@ "shape":"Data", "documentation":"

The JSON data output by the activity task.

" } - } + }, + "documentation":"

Contains details about an activity which successfully terminated during an execution.

" }, "ActivityTimedOutEventDetails":{ "type":"structure", @@ -388,7 +427,8 @@ "shape":"Cause", "documentation":"

A more detailed explanation of the cause of the timeout.

" } - } + }, + "documentation":"

Contains details about an activity timeout which occurred during an execution.

" }, "ActivityWorkerLimitExceeded":{ "type":"structure", @@ -414,7 +454,7 @@ "members":{ "name":{ "shape":"Name", - "documentation":"

The name of the activity to create. This name must be unique for your AWS account and region.

" + "documentation":"

The name of the activity to create. This name must be unique for your AWS account and region for 90 days. For more information, see Limits Related to State Machine Executions in the AWS Step Functions Developer Guide.

A name must not contain:

" } } }, @@ -431,7 +471,7 @@ }, "creationDate":{ "shape":"Timestamp", - "documentation":"

The date the activity was created.

" + "documentation":"

The date the activity is created.

" } } }, @@ -445,7 +485,7 @@ "members":{ "name":{ "shape":"Name", - "documentation":"

The name of the state machine. This name must be unique for your AWS account and region.

" + "documentation":"

The name of the state machine. This name must be unique for your AWS account and region for 90 days. For more information, see Limits Related to State Machine Executions in the AWS Step Functions Developer Guide.

A name must not contain:

" }, "definition":{ "shape":"Definition", @@ -470,7 +510,7 @@ }, "creationDate":{ "shape":"Timestamp", - "documentation":"

The date the state machine was created.

" + "documentation":"

The date the state machine is created.

" } } }, @@ -537,11 +577,11 @@ }, "name":{ "shape":"Name", - "documentation":"

The name of the activity.

" + "documentation":"

The name of the activity.

A name must not contain:

" }, "creationDate":{ "shape":"Timestamp", - "documentation":"

The date the activity was created.

" + "documentation":"

The date the activity is created.

" } } }, @@ -575,7 +615,7 @@ }, "name":{ "shape":"Name", - "documentation":"

The name of the execution.

" + "documentation":"

The name of the execution.

A name must not contain:

" }, "status":{ "shape":"ExecutionStatus", @@ -583,7 +623,7 @@ }, "startDate":{ "shape":"Timestamp", - "documentation":"

The date the execution was started.

" + "documentation":"

The date the execution is started.

" }, "stopDate":{ "shape":"Timestamp", @@ -591,11 +631,53 @@ }, "input":{ "shape":"Data", - "documentation":"

The JSON input data of the execution.

" + "documentation":"

The string that contains the JSON input data of the execution.

" }, "output":{ "shape":"Data", - "documentation":"

The JSON output data of the execution.

" + "documentation":"

The JSON output data of the execution.

This field is set only if the execution succeeds. If the execution fails, this field is null.

" + } + } + }, + "DescribeStateMachineForExecutionInput":{ + "type":"structure", + "required":["executionArn"], + "members":{ + "executionArn":{ + "shape":"Arn", + "documentation":"

The Amazon Resource Name (ARN) of the execution you want state machine information for.

" + } + } + }, + "DescribeStateMachineForExecutionOutput":{ + "type":"structure", + "required":[ + "stateMachineArn", + "name", + "definition", + "roleArn", + "updateDate" + ], + "members":{ + "stateMachineArn":{ + "shape":"Arn", + "documentation":"

The Amazon Resource Name (ARN) of the state machine associated with the execution.

" + }, + "name":{ + "shape":"Name", + "documentation":"

The name of the state machine associated with the execution.

" + }, + "definition":{ + "shape":"Definition", + "documentation":"

The Amazon States Language definition of the state machine.

" + }, + "roleArn":{ + "shape":"Arn", + "documentation":"

The Amazon Resource Name (ARN) of the IAM role of the State Machine for the execution.

" + }, + "updateDate":{ + "shape":"Timestamp", + "documentation":"

The date and time the state machine associated with an execution was updated. For a newly created state machine, this is the creation date.

" } } }, @@ -625,7 +707,7 @@ }, "name":{ "shape":"Name", - "documentation":"

The name of the state machine.

" + "documentation":"

The name of the state machine.

A name must not contain:

" }, "status":{ "shape":"StateMachineStatus", @@ -637,11 +719,11 @@ }, "roleArn":{ "shape":"Arn", - "documentation":"

The Amazon Resource Name (ARN) of the IAM role used for executing this state machine.

" + "documentation":"

The Amazon Resource Name (ARN) of the IAM role used when creating this state machine. (The IAM role maintains security by granting Step Functions access to AWS resources.)

" }, "creationDate":{ "shape":"Timestamp", - "documentation":"

The date the state machine was created.

" + "documentation":"

The date the state machine is created.

" } } }, @@ -663,14 +745,15 @@ "shape":"Cause", "documentation":"

A more detailed explanation of the cause of the failure.

" } - } + }, + "documentation":"

Contains details about an abort of an execution.

" }, "ExecutionAlreadyExists":{ "type":"structure", "members":{ "message":{"shape":"ErrorMessage"} }, - "documentation":"

An execution with the same name already exists.

", + "documentation":"

The execution has the same name as another execution (but a different input).

Executions with the same name and input are considered idempotent.

", "exception":true }, "ExecutionDoesNotExist":{ @@ -692,7 +775,8 @@ "shape":"Cause", "documentation":"

A more detailed explanation of the cause of the failure.

" } - } + }, + "documentation":"

Contains details about an execution failure event.

" }, "ExecutionLimitExceeded":{ "type":"structure", @@ -726,7 +810,7 @@ }, "name":{ "shape":"Name", - "documentation":"

The name of the execution.

" + "documentation":"

The name of the execution.

A name must not contain:

" }, "status":{ "shape":"ExecutionStatus", @@ -740,7 +824,8 @@ "shape":"Timestamp", "documentation":"

If the execution already ended, the date the execution stopped.

" } - } + }, + "documentation":"

Contains details about an execution.

" }, "ExecutionStartedEventDetails":{ "type":"structure", @@ -753,7 +838,8 @@ "shape":"Arn", "documentation":"

The Amazon Resource Name (ARN) of the IAM role used for executing AWS Lambda tasks.

" } - } + }, + "documentation":"

Contains details about the start of the execution.

" }, "ExecutionStatus":{ "type":"string", @@ -772,7 +858,8 @@ "shape":"Data", "documentation":"

The JSON data output by the execution.

" } - } + }, + "documentation":"

Contains details about the successful termination of the execution.

" }, "ExecutionTimedOutEventDetails":{ "type":"structure", @@ -785,7 +872,8 @@ "shape":"Cause", "documentation":"

A more detailed explanation of the cause of the timeout.

" } - } + }, + "documentation":"

Contains details about the execution timeout which occurred during the execution.

" }, "GetActivityTaskInput":{ "type":"structure", @@ -793,11 +881,11 @@ "members":{ "activityArn":{ "shape":"Arn", - "documentation":"

The Amazon Resource Name (ARN) of the activity to retrieve tasks from.

" + "documentation":"

The Amazon Resource Name (ARN) of the activity to retrieve tasks from (assigned when you create the activity using CreateActivity.)

" }, "workerName":{ "shape":"Name", - "documentation":"

An arbitrary name may be provided in order to identify the worker that the task is assigned to. This name will be used when it is logged in the execution history.

" + "documentation":"

You can provide an arbitrary name in order to identify the worker that the task is assigned to. This name is used when it is logged in the execution history.

" } } }, @@ -810,7 +898,7 @@ }, "input":{ "shape":"Data", - "documentation":"

The JSON input data for the task.

" + "documentation":"

The string that contains the JSON input data for the task.

" } } }, @@ -824,7 +912,7 @@ }, "maxResults":{ "shape":"PageSize", - "documentation":"

The maximum number of results that will be returned per call. nextToken can be used to obtain further pages of results. The default is 100 and the maximum allowed page size is 1000.

This is an upper limit only; the actual number of results returned per call may be fewer than the specified maximum.

" + "documentation":"

The maximum number of results that are returned per call. You can use nextToken to obtain further pages of results. The default is 100 and the maximum allowed page size is 100. A value of 0 uses the default.

This is only an upper limit. The actual number of results returned per call might be fewer than the specified maximum.

" }, "reverseOrder":{ "shape":"ReverseOrder", @@ -832,7 +920,7 @@ }, "nextToken":{ "shape":"PageToken", - "documentation":"

If a nextToken was returned by a previous call, there are more results available. To retrieve the next page of results, make the call again using the returned token in nextToken. Keep all other arguments unchanged.

The configured maxResults determines how many results can be returned in a single call.

" + "documentation":"

If a nextToken is returned by a previous call, there are more results available. To retrieve the next page of results, make the call again using the returned token in nextToken. Keep all other arguments unchanged.

The configured maxResults determines how many results can be returned in a single call.

" } } }, @@ -846,7 +934,7 @@ }, "nextToken":{ "shape":"PageToken", - "documentation":"

If a nextToken is returned, there are more results available. To retrieve the next page of results, make the call again using the returned token in nextToken. Keep all other arguments unchanged.

The configured maxResults determines how many results can be returned in a single call.

" + "documentation":"

If a nextToken is returned by a previous call, there are more results available. To retrieve the next page of results, make the call again using the returned token in nextToken. Keep all other arguments unchanged.

The configured maxResults determines how many results can be returned in a single call.

" } } }, @@ -860,7 +948,7 @@ "members":{ "timestamp":{ "shape":"Timestamp", - "documentation":"

The date the event occured.

" + "documentation":"

The date the event occurred.

" }, "type":{ "shape":"HistoryEventType", @@ -875,7 +963,10 @@ "documentation":"

The id of the previous event.

" }, "activityFailedEventDetails":{"shape":"ActivityFailedEventDetails"}, - "activityScheduleFailedEventDetails":{"shape":"ActivityScheduleFailedEventDetails"}, + "activityScheduleFailedEventDetails":{ + "shape":"ActivityScheduleFailedEventDetails", + "documentation":"

Contains details about an activity schedule event which failed during an execution.

" + }, "activityScheduledEventDetails":{"shape":"ActivityScheduledEventDetails"}, "activityStartedEventDetails":{"shape":"ActivityStartedEventDetails"}, "activitySucceededEventDetails":{"shape":"ActivitySucceededEventDetails"}, @@ -888,16 +979,24 @@ "lambdaFunctionFailedEventDetails":{"shape":"LambdaFunctionFailedEventDetails"}, "lambdaFunctionScheduleFailedEventDetails":{"shape":"LambdaFunctionScheduleFailedEventDetails"}, "lambdaFunctionScheduledEventDetails":{"shape":"LambdaFunctionScheduledEventDetails"}, - "lambdaFunctionStartFailedEventDetails":{"shape":"LambdaFunctionStartFailedEventDetails"}, - "lambdaFunctionSucceededEventDetails":{"shape":"LambdaFunctionSucceededEventDetails"}, + "lambdaFunctionStartFailedEventDetails":{ + "shape":"LambdaFunctionStartFailedEventDetails", + "documentation":"

Contains details about a lambda function which failed to start during an execution.

" + }, + "lambdaFunctionSucceededEventDetails":{ + "shape":"LambdaFunctionSucceededEventDetails", + "documentation":"

Contains details about a lambda function which terminated successfully during an execution.

" + }, "lambdaFunctionTimedOutEventDetails":{"shape":"LambdaFunctionTimedOutEventDetails"}, "stateEnteredEventDetails":{"shape":"StateEnteredEventDetails"}, "stateExitedEventDetails":{"shape":"StateExitedEventDetails"} - } + }, + "documentation":"

Contains details about the events of an execution.

" }, "HistoryEventList":{ "type":"list", - "member":{"shape":"HistoryEvent"} + "member":{"shape":"HistoryEvent"}, + "documentation":"

Contains details about the events which occurred during an execution.

" }, "HistoryEventType":{ "type":"string", @@ -925,12 +1024,18 @@ "LambdaFunctionTimedOut", "SucceedStateEntered", "SucceedStateExited", + "TaskStateAborted", "TaskStateEntered", "TaskStateExited", "PassStateEntered", "PassStateExited", + "ParallelStateAborted", "ParallelStateEntered", "ParallelStateExited", + "ParallelStateFailed", + "ParallelStateStarted", + "ParallelStateSucceeded", + "WaitStateAborted", "WaitStateEntered", "WaitStateExited" ] @@ -998,7 +1103,8 @@ "shape":"Cause", "documentation":"

A more detailed explanation of the cause of the failure.

" } - } + }, + "documentation":"

Contains details about a lambda function which failed during an execution.

" }, "LambdaFunctionScheduleFailedEventDetails":{ "type":"structure", @@ -1011,7 +1117,8 @@ "shape":"Cause", "documentation":"

A more detailed explanation of the cause of the failure.

" } - } + }, + "documentation":"

Contains details about a failed lambda function schedule event which occurred during an execution.

" }, "LambdaFunctionScheduledEventDetails":{ "type":"structure", @@ -1030,7 +1137,8 @@ "documentation":"

The maximum allowed duration of the lambda function.

", "box":true } - } + }, + "documentation":"

Contains details about a lambda function scheduled during an execution.

" }, "LambdaFunctionStartFailedEventDetails":{ "type":"structure", @@ -1043,7 +1151,8 @@ "shape":"Cause", "documentation":"

A more detailed explanation of the cause of the failure.

" } - } + }, + "documentation":"

Contains details about a lambda function which failed to start during an execution.

" }, "LambdaFunctionSucceededEventDetails":{ "type":"structure", @@ -1052,7 +1161,8 @@ "shape":"Data", "documentation":"

The JSON data output by the lambda function.

" } - } + }, + "documentation":"

Contains details about a lambda function which successfully terminated during an execution.

" }, "LambdaFunctionTimedOutEventDetails":{ "type":"structure", @@ -1065,18 +1175,19 @@ "shape":"Cause", "documentation":"

A more detailed explanation of the cause of the timeout.

" } - } + }, + "documentation":"

Contains details about a lambda function timeout which occurred during an execution.

" }, "ListActivitiesInput":{ "type":"structure", "members":{ "maxResults":{ "shape":"PageSize", - "documentation":"

The maximum number of results that will be returned per call. nextToken can be used to obtain further pages of results. The default is 100 and the maximum allowed page size is 1000.

This is an upper limit only; the actual number of results returned per call may be fewer than the specified maximum.

" + "documentation":"

The maximum number of results that are returned per call. You can use nextToken to obtain further pages of results. The default is 100 and the maximum allowed page size is 100. A value of 0 uses the default.

This is only an upper limit. The actual number of results returned per call might be fewer than the specified maximum.

" }, "nextToken":{ "shape":"PageToken", - "documentation":"

If a nextToken was returned by a previous call, there are more results available. To retrieve the next page of results, make the call again using the returned token in nextToken. Keep all other arguments unchanged.

The configured maxResults determines how many results can be returned in a single call.

" + "documentation":"

If a nextToken is returned by a previous call, there are more results available. To retrieve the next page of results, make the call again using the returned token in nextToken. Keep all other arguments unchanged.

The configured maxResults determines how many results can be returned in a single call.

" } } }, @@ -1090,7 +1201,7 @@ }, "nextToken":{ "shape":"PageToken", - "documentation":"

If a nextToken is returned, there are more results available. To retrieve the next page of results, make the call again using the returned token in nextToken. Keep all other arguments unchanged.

The configured maxResults determines how many results can be returned in a single call.

" + "documentation":"

If a nextToken is returned by a previous call, there are more results available. To retrieve the next page of results, make the call again using the returned token in nextToken. Keep all other arguments unchanged.

The configured maxResults determines how many results can be returned in a single call.
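The maxResults / nextToken contract above is the usual pagination pattern. A minimal sketch of draining every page with the AWS SDK for Java 2.x follows; the SfnClient, ListActivitiesRequest, and ListActivitiesResponse names and the activities()/name() accessors are assumed from the codegen conventions rather than stated in this model file. The same loop shape applies to ListExecutions and ListStateMachines, which share the maxResults and nextToken members.

    // Pagination sketch (assumed class names): keep calling ListActivities,
    // feeding the returned nextToken back in, until no token is returned.
    import software.amazon.awssdk.services.sfn.SfnClient;
    import software.amazon.awssdk.services.sfn.model.ListActivitiesRequest;
    import software.amazon.awssdk.services.sfn.model.ListActivitiesResponse;

    public class ListAllActivities {
        public static void main(String[] args) {
            try (SfnClient sfn = SfnClient.create()) {
                String nextToken = null;
                do {
                    ListActivitiesRequest request = ListActivitiesRequest.builder()
                            .maxResults(100)      // an upper limit only; fewer results may come back
                            .nextToken(nextToken) // null on the first call
                            .build();
                    ListActivitiesResponse response = sfn.listActivities(request);
                    response.activities().forEach(a -> System.out.println(a.name()));
                    nextToken = response.nextToken(); // null once the last page is returned
                } while (nextToken != null);
            }
        }
    }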

" } } }, @@ -1100,7 +1211,7 @@ "members":{ "stateMachineArn":{ "shape":"Arn", - "documentation":"

The Amazon Resource Name (ARN) of the state machine whose executions will be listed.

" + "documentation":"

The Amazon Resource Name (ARN) of the state machine whose executions will be listed.

" }, "statusFilter":{ "shape":"ExecutionStatus", @@ -1108,11 +1219,11 @@ }, "maxResults":{ "shape":"PageSize", - "documentation":"

The maximum number of results that will be returned per call. nextToken can be used to obtain further pages of results. The default is 100 and the maximum allowed page size is 1000.

This is an upper limit only; the actual number of results returned per call may be fewer than the specified maximum.

" + "documentation":"

The maximum number of results that are returned per call. You can use nextToken to obtain further pages of results. The default is 100 and the maximum allowed page size is 100. A value of 0 uses the default.

This is only an upper limit. The actual number of results returned per call might be fewer than the specified maximum.

" }, "nextToken":{ "shape":"PageToken", - "documentation":"

If a nextToken was returned by a previous call, there are more results available. To retrieve the next page of results, make the call again using the returned token in nextToken. Keep all other arguments unchanged.

The configured maxResults determines how many results can be returned in a single call.

" + "documentation":"

If a nextToken is returned by a previous call, there are more results available. To retrieve the next page of results, make the call again using the returned token in nextToken. Keep all other arguments unchanged.

The configured maxResults determines how many results can be returned in a single call.

" } } }, @@ -1126,7 +1237,7 @@ }, "nextToken":{ "shape":"PageToken", - "documentation":"

If a nextToken is returned, there are more results available. To retrieve the next page of results, make the call again using the returned token in nextToken. Keep all other arguments unchanged.

The configured maxResults determines how many results can be returned in a single call.

" + "documentation":"

If a nextToken is returned by a previous call, there are more results available. To retrieve the next page of results, make the call again using the returned token in nextToken. Keep all other arguments unchanged.

The configured maxResults determines how many results can be returned in a single call.

" } } }, @@ -1135,11 +1246,11 @@ "members":{ "maxResults":{ "shape":"PageSize", - "documentation":"

The maximum number of results that will be returned per call. nextToken can be used to obtain further pages of results. The default is 100 and the maximum allowed page size is 1000.

This is an upper limit only; the actual number of results returned per call may be fewer than the specified maximum.

" + "documentation":"

The maximum number of results that are returned per call. You can use nextToken to obtain further pages of results. The default is 100 and the maximum allowed page size is 100. A value of 0 uses the default.

This is only an upper limit. The actual number of results returned per call might be fewer than the specified maximum.

" }, "nextToken":{ "shape":"PageToken", - "documentation":"

If a nextToken was returned by a previous call, there are more results available. To retrieve the next page of results, make the call again using the returned token in nextToken. Keep all other arguments unchanged.

The configured maxResults determines how many results can be returned in a single call.

" + "documentation":"

If a nextToken is returned by a previous call, there are more results available. To retrieve the next page of results, make the call again using the returned token in nextToken. Keep all other arguments unchanged.

The configured maxResults determines how many results can be returned in a single call.

" } } }, @@ -1150,10 +1261,18 @@ "stateMachines":{"shape":"StateMachineList"}, "nextToken":{ "shape":"PageToken", - "documentation":"

If a nextToken is returned, there are more results available. To retrieve the next page of results, make the call again using the returned token in nextToken. Keep all other arguments unchanged.

The configured maxResults determines how many results can be returned in a single call.

" + "documentation":"

If a nextToken is returned by a previous call, there are more results available. To retrieve the next page of results, make the call again using the returned token in nextToken. Keep all other arguments unchanged.

The configured maxResults determines how many results can be returned in a single call.

" } } }, + "MissingRequiredParameter":{ + "type":"structure", + "members":{ + "message":{"shape":"ErrorMessage"} + }, + "documentation":"

Request is missing a required parameter. This error occurs if neither definition nor roleArn is specified.

", + "exception":true + }, "Name":{ "type":"string", "max":80, @@ -1199,7 +1318,7 @@ "members":{ "taskToken":{ "shape":"TaskToken", - "documentation":"

The token that represents this task. Task tokens are generated by the service when the tasks are assigned to a worker (see GetActivityTask::taskToken).

" + "documentation":"

The token that represents this task. Task tokens are generated by the service when the tasks are assigned to a worker (see GetActivityTaskOutput$taskToken).

" } } }, @@ -1217,7 +1336,7 @@ "members":{ "taskToken":{ "shape":"TaskToken", - "documentation":"

The token that represents this task. Task tokens are generated by the service when the tasks are assigned to a worker (see GetActivityTask::taskToken).

" + "documentation":"

The token that represents this task. Task tokens are generated by the service when the tasks are assigned to a worker (see GetActivityTaskOutput$taskToken).

" }, "output":{ "shape":"Data", @@ -1240,11 +1359,11 @@ }, "name":{ "shape":"Name", - "documentation":"

The name of the execution. This name must be unique for your AWS account and region.

" + "documentation":"

The name of the execution. This name must be unique for your AWS account and region for 90 days. For more information, see Limits Related to State Machine Executions in the AWS Step Functions Developer Guide.

An execution can't use the name of another execution for 90 days.

When you make multiple StartExecution calls with the same name, the new execution doesn't run and the following rules apply:

  • When the original execution is open and the execution input from the new call is different, the ExecutionAlreadyExists message is returned.

  • When the original execution is open and the execution input from the new call is identical, the Success message is returned.

  • When the original execution is closed, the ExecutionAlreadyExists message is returned regardless of input.

A name must not contain:

" }, "input":{ "shape":"Data", - "documentation":"

The JSON input data for the execution.

" + "documentation":"

The string that contains the JSON input data for the execution, for example:

\"input\": \"{\\\"first_name\\\" : \\\"test\\\"}\"

If you don't include any JSON input data, you still must include the two braces, for example: \"input\": \"{}\"
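To make the naming and input rules above concrete, here is a hedged StartExecution sketch with the AWS SDK for Java 2.x; the client and request class names, the state machine ARN, and the execution name are illustrative assumptions.

    // StartExecution sketch (assumed class names and ARN): the name must stay
    // unique for 90 days, and the input is an escaped JSON document ("{}" if empty).
    import software.amazon.awssdk.services.sfn.SfnClient;
    import software.amazon.awssdk.services.sfn.model.StartExecutionRequest;
    import software.amazon.awssdk.services.sfn.model.StartExecutionResponse;

    public class StartExecutionExample {
        public static void main(String[] args) {
            try (SfnClient sfn = SfnClient.create()) {
                StartExecutionRequest request = StartExecutionRequest.builder()
                        .stateMachineArn("arn:aws:states:us-east-1:123456789012:stateMachine:HelloWorld") // hypothetical ARN
                        .name("order-12345-attempt-1")        // reusing this name within 90 days follows the rules above
                        .input("{\"first_name\" : \"test\"}") // pass "{}" when there is no input
                        .build();
                StartExecutionResponse response = sfn.startExecution(request);
                System.out.println(response.executionArn() + " started at " + response.startDate());
            }
        }
    }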

" } } }, @@ -1261,7 +1380,7 @@ }, "startDate":{ "shape":"Timestamp", - "documentation":"

The date the execution was started.

" + "documentation":"

The date the execution is started.

" } } }, @@ -1275,9 +1394,10 @@ }, "input":{ "shape":"Data", - "documentation":"

The JSON input data to the state.

" + "documentation":"

The string that contains the JSON input data for the state.

" } - } + }, + "documentation":"

Contains details about a state entered during an execution.

" }, "StateExitedEventDetails":{ "type":"structure", @@ -1285,13 +1405,14 @@ "members":{ "name":{ "shape":"Name", - "documentation":"

The name of the state.

" + "documentation":"

The name of the state.

A name must not contain:

" }, "output":{ "shape":"Data", "documentation":"

The JSON output data of the state.

" } - } + }, + "documentation":"

Contains details about an exit from a state during an execution.

" }, "StateMachineAlreadyExists":{ "type":"structure", @@ -1343,13 +1464,14 @@ }, "name":{ "shape":"Name", - "documentation":"

The name of the state machine.

" + "documentation":"

The name of the state machine.

A name must not contain:

" }, "creationDate":{ "shape":"Timestamp", - "documentation":"

The date the state machine was created.

" + "documentation":"

The date the state machine is created.

" } - } + }, + "documentation":"

Contains details about the state machine.

" }, "StateMachineStatus":{ "type":"string", @@ -1382,7 +1504,7 @@ "members":{ "stopDate":{ "shape":"Timestamp", - "documentation":"

The date the execution was stopped.

" + "documentation":"

The date the execution is stopped.

" } } }, @@ -1406,7 +1528,35 @@ "min":1 }, "TimeoutInSeconds":{"type":"long"}, - "Timestamp":{"type":"timestamp"} + "Timestamp":{"type":"timestamp"}, + "UpdateStateMachineInput":{ + "type":"structure", + "required":["stateMachineArn"], + "members":{ + "stateMachineArn":{ + "shape":"Arn", + "documentation":"

The Amazon Resource Name (ARN) of the state machine.

" + }, + "definition":{ + "shape":"Definition", + "documentation":"

The Amazon States Language definition of the state machine.

" + }, + "roleArn":{ + "shape":"Arn", + "documentation":"

The Amazon Resource Name (ARN) of the IAM role of the state machine.

" + } + } + }, + "UpdateStateMachineOutput":{ + "type":"structure", + "required":["updateDate"], + "members":{ + "updateDate":{ + "shape":"Timestamp", + "documentation":"

The date and time the state machine was updated.
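A short sketch of the new UpdateStateMachine operation defined above; class names follow the usual codegen conventions and the ARN and definition are placeholders. Per MissingRequiredParameter, at least one of definition or roleArn has to be supplied.

    // UpdateStateMachine sketch (assumed class names): replaces the definition,
    // leaving roleArn unchanged; omitting both would raise MissingRequiredParameter.
    import software.amazon.awssdk.services.sfn.SfnClient;
    import software.amazon.awssdk.services.sfn.model.UpdateStateMachineRequest;
    import software.amazon.awssdk.services.sfn.model.UpdateStateMachineResponse;

    public class UpdateStateMachineExample {
        public static void main(String[] args) {
            String definition = "{\"StartAt\":\"Done\",\"States\":{\"Done\":{\"Type\":\"Succeed\"}}}";
            try (SfnClient sfn = SfnClient.create()) {
                UpdateStateMachineResponse response = sfn.updateStateMachine(UpdateStateMachineRequest.builder()
                        .stateMachineArn("arn:aws:states:us-east-1:123456789012:stateMachine:HelloWorld") // hypothetical ARN
                        .definition(definition) // roleArn(...) could be set here instead or as well
                        .build());
                System.out.println("Updated at " + response.updateDate());
            }
        }
    }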

" + } + } + } }, - "documentation":"AWS Step Functions

AWS Step Functions is a web service that enables you to coordinate the components of distributed applications and microservices using visual workflows. You build applications from individual components that each perform a discrete function, or task, allowing you to scale and change applications quickly. Step Functions provides a graphical console to visualize the components of your application as a series of steps. It automatically triggers and tracks each step, and retries when there are errors, so your application executes in order and as expected, every time. Step Functions logs the state of each step, so when things do go wrong, you can diagnose and debug problems quickly.

Step Functions manages the operations and underlying infrastructure for you to ensure your application is available at any scale. You can run tasks on the AWS cloud, on your own servers, or an any system that has access to AWS. Step Functions can be accessed and used with the Step Functions console, the AWS SDKs (included with your Beta release invitation email), or an HTTP API (the subject of this document).

" + "documentation":"AWS Step Functions

AWS Step Functions is a service that lets you coordinate the components of distributed applications and microservices using visual workflows.

You can use Step Functions to build applications from individual components, each of which performs a discrete function, or task, allowing you to scale and change applications quickly. Step Functions provides a console that helps visualize the components of your application as a series of steps. Step Functions automatically triggers and tracks each step, and retries steps when there are errors, so your application executes predictably and in the right order every time. Step Functions logs the state of each step, so you can quickly diagnose and debug any issues.

Step Functions manages operations and underlying infrastructure to ensure your application is available at any scale. You can run tasks on AWS, your own servers, or any system that has access to AWS. You can access and use Step Functions using the console, the AWS SDKs, or an HTTP API. For more information about Step Functions, see the AWS Step Functions Developer Guide.

" } diff --git a/services/storagegateway/src/main/resources/codegen-resources/service-2.json b/services/storagegateway/src/main/resources/codegen-resources/service-2.json index 94c1e2ff2490..632371d8abd0 100644 --- a/services/storagegateway/src/main/resources/codegen-resources/service-2.json +++ b/services/storagegateway/src/main/resources/codegen-resources/service-2.json @@ -193,7 +193,7 @@ {"shape":"InvalidGatewayRequestException"}, {"shape":"InternalServerError"} ], - "documentation":"

Creates a virtual tape by using your own barcode. You write data to the virtual tape and then archive the tape. This operation is only supported in the tape gateway architecture.

Cache storage must be allocated to the gateway before you can create a virtual tape. Use the AddCache operation to add cache storage to a gateway.

" + "documentation":"

Creates a virtual tape by using your own barcode. You write data to the virtual tape and then archive the tape. A barcode is unique and cannot be reused if it has already been used on a tape. This applies to barcodes used on deleted tapes. This operation is only supported in the tape gateway architecture.

Cache storage must be allocated to the gateway before you can create a virtual tape. Use the AddCache operation to add cache storage to a gateway.
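A hedged CreateTapeWithBarcode sketch illustrating the barcode-reuse rule above; StorageGatewayClient, the gatewayARN and tapeSizeInBytes members (not shown in this excerpt), and the literal values are assumptions.

    // CreateTapeWithBarcode sketch (assumed class and member names): the barcode
    // must never have been used before, even on a tape that was later deleted.
    import software.amazon.awssdk.services.storagegateway.StorageGatewayClient;
    import software.amazon.awssdk.services.storagegateway.model.CreateTapeWithBarcodeRequest;
    import software.amazon.awssdk.services.storagegateway.model.CreateTapeWithBarcodeResponse;

    public class CreateTapeWithBarcodeExample {
        public static void main(String[] args) {
            try (StorageGatewayClient sgw = StorageGatewayClient.create()) {
                CreateTapeWithBarcodeResponse created = sgw.createTapeWithBarcode(CreateTapeWithBarcodeRequest.builder()
                        .gatewayARN("arn:aws:storagegateway:us-east-2:111122223333:gateway/sgw-12A3456B") // hypothetical gateway
                        .tapeSizeInBytes(107_374_182_400L) // 100 GiB
                        .tapeBarcode("TEST00001")          // unique; cannot be reused later
                        .build());
                System.out.println("Created " + created.tapeARN());
            }
        }
    }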

" }, "CreateTapes":{ "name":"CreateTapes", @@ -669,7 +669,7 @@ {"shape":"InvalidGatewayRequestException"}, {"shape":"InternalServerError"} ], - "documentation":"

Refreshes the cache for the specified file share. This operation finds objects in the Amazon S3 bucket that were added or removed since the gateway last listed the bucket's contents and cached the results.

" + "documentation":"

Refreshes the cache for the specified file share. This operation finds objects in the Amazon S3 bucket that were added, removed or replaced since the gateway last listed the bucket's contents and cached the results.
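A minimal RefreshCache sketch for a single file share; the client class, the fileShareARN accessor, and the ARN itself are assumptions.

    // RefreshCache sketch (assumed names): re-lists the backing S3 bucket so
    // objects added, removed, or replaced outside the gateway become visible.
    import software.amazon.awssdk.services.storagegateway.StorageGatewayClient;
    import software.amazon.awssdk.services.storagegateway.model.RefreshCacheRequest;

    public class RefreshCacheExample {
        public static void main(String[] args) {
            try (StorageGatewayClient sgw = StorageGatewayClient.create()) {
                sgw.refreshCache(RefreshCacheRequest.builder()
                        .fileShareARN("arn:aws:storagegateway:us-east-2:111122223333:share/share-12345678") // hypothetical share
                        .build());
            }
        }
    }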

" }, "RemoveTagsFromResource":{ "name":"RemoveTagsFromResource", @@ -1057,7 +1057,7 @@ }, "VolumeSizeInBytes":{ "shape":"long", - "documentation":"

The size of the volume in bytes.

" + "documentation":"

The size, in bytes, of the volume capacity.

" }, "VolumeProgress":{ "shape":"DoubleObject", @@ -1339,7 +1339,7 @@ }, "TargetName":{ "shape":"TargetName", - "documentation":"

The name of the iSCSI target used by initiators to connect to the target and as a suffix for the target ARN. For example, specifying TargetName as myvolume results in the target ARN of arn:aws:storagegateway:us-east-1:111122223333:gateway/sgw-12A3456B/target/iqn.1997-05.com.amazon:myvolume. The target name must be unique across all volumes of a gateway.

" + "documentation":"

The name of the iSCSI target used by initiators to connect to the target and as a suffix for the target ARN. For example, specifying TargetName as myvolume results in the target ARN of arn:aws:storagegateway:us-east-2:111122223333:gateway/sgw-12A3456B/target/iqn.1997-05.com.amazon:myvolume. The target name must be unique across all volumes of a gateway.

" }, "NetworkInterfaceId":{ "shape":"NetworkInterfaceId", @@ -1384,7 +1384,7 @@ }, "TapeBarcode":{ "shape":"TapeBarcode", - "documentation":"

The barcode that you want to assign to the tape.

" + "documentation":"

The barcode that you want to assign to the tape.

Barcodes cannot be reused. This includes barcodes used for tapes that have been deleted.

" } }, "documentation":"

CreateTapeWithBarcodeInput

" @@ -1509,6 +1509,10 @@ "FileShareARN":{ "shape":"FileShareARN", "documentation":"

The Amazon Resource Name (ARN) of the file share to be deleted.

" + }, + "ForceDelete":{ + "shape":"boolean", + "documentation":"

If set to true, deletes a file share immediately and aborts all data uploads to AWS. Otherwise, the file share is not deleted until all data is uploaded to AWS. A forced delete aborts the data upload process, and the file share enters the FORCE_DELETING status.
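A sketch of the new ForceDelete flag in use; class and member names are assumed from the codegen conventions and the share ARN is hypothetical.

    // DeleteFileShare sketch (assumed names): forceDelete(true) aborts pending
    // uploads and the share moves to FORCE_DELETING; false waits for uploads.
    import software.amazon.awssdk.services.storagegateway.StorageGatewayClient;
    import software.amazon.awssdk.services.storagegateway.model.DeleteFileShareRequest;

    public class ForceDeleteFileShareExample {
        public static void main(String[] args) {
            try (StorageGatewayClient sgw = StorageGatewayClient.create()) {
                sgw.deleteFileShare(DeleteFileShareRequest.builder()
                        .fileShareARN("arn:aws:storagegateway:us-east-2:111122223333:share/share-12345678") // hypothetical share
                        .forceDelete(true)
                        .build());
            }
        }
    }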

" } }, "documentation":"

DeleteFileShareInput

" @@ -3008,7 +3012,7 @@ }, "TapeUsedInBytes":{ "shape":"TapeUsage", - "documentation":"

The size, in bytes, of data written to the virtual tape.

This value is not available for tapes created prior to May,13 2015.

" + "documentation":"

The size, in bytes, of data written to the virtual tape.

This value is not available for tapes created prior to May 13, 2015.

" } }, "documentation":"

Describes a virtual tape object.

" @@ -3054,7 +3058,7 @@ }, "TapeUsedInBytes":{ "shape":"TapeUsage", - "documentation":"

The size, in bytes, of data written to the virtual tape.

This value is not available for tapes created prior to May,13 2015.

" + "documentation":"

The size, in bytes, of data written to the virtual tape.

This value is not available for tapes created prior to May 13, 2015.

" } }, "documentation":"

Represents a virtual tape that is archived in the virtual tape shelf (VTS).

" @@ -3317,7 +3321,7 @@ }, "ReadOnly":{ "shape":"Boolean", - "documentation":"

Sets the write status of a file share: \"true\" if the write status is read-only, and otherwise \"false\".

" + "documentation":"

Sets the write status of a file share: \"true\" if the write status is read-only, otherwise \"false\".

" } }, "documentation":"

UpdateNFSFileShareInput

" @@ -3448,7 +3452,7 @@ "members":{ "VolumeARN":{ "shape":"VolumeARN", - "documentation":"

The Amazon Resource Name (ARN) for the storage volume. For example, the following is a valid ARN:

arn:aws:storagegateway:us-east-1:111122223333:gateway/sgw-12A3456B/volume/vol-1122AABB

Valid Values: 50 to 500 lowercase letters, numbers, periods (.), and hyphens (-).

" + "documentation":"

The Amazon Resource Name (ARN) for the storage volume. For example, the following is a valid ARN:

arn:aws:storagegateway:us-east-2:111122223333:gateway/sgw-12A3456B/volume/vol-1122AABB

Valid Values: 50 to 500 lowercase letters, numbers, periods (.), and hyphens (-).

" }, "VolumeId":{ "shape":"VolumeId", @@ -3531,5 +3535,5 @@ "long":{"type":"long"}, "string":{"type":"string"} }, - "documentation":"AWS Storage Gateway Service

AWS Storage Gateway is the service that connects an on-premises software appliance with cloud-based storage to provide seamless and secure integration between an organization's on-premises IT environment and AWS's storage infrastructure. The service enables you to securely upload data to the AWS cloud for cost effective backup and rapid disaster recovery.

Use the following links to get started using the AWS Storage Gateway Service API Reference:

AWS Storage Gateway resource IDs are in uppercase. When you use these resource IDs with the Amazon EC2 API, EC2 expects resource IDs in lowercase. You must change your resource ID to lowercase to use it with the EC2 API. For example, in Storage Gateway the ID for a volume might be vol-AA22BB012345DAF670. When you use this ID with the EC2 API, you must change it to vol-aa22bb012345daf670. Otherwise, the EC2 API might not behave as expected.

IDs for Storage Gateway volumes and Amazon EBS snapshots created from gateway volumes are changing to a longer format. Starting in December 2016, all new volumes and snapshots will be created with a 17-character string. Starting in April 2016, you will be able to use these longer IDs so you can test your systems with the new format. For more information, see Longer EC2 and EBS Resource IDs.

For example, a volume Amazon Resource Name (ARN) with the longer volume ID format looks like the following:

arn:aws:storagegateway:us-west-2:111122223333:gateway/sgw-12A3456B/volume/vol-1122AABBCCDDEEFFG.

A snapshot ID with the longer ID format looks like the following: snap-78e226633445566ee.

For more information, see Announcement: Heads-up – Longer AWS Storage Gateway volume and snapshot IDs coming in 2016.

" + "documentation":"AWS Storage Gateway Service

AWS Storage Gateway is the service that connects an on-premises software appliance with cloud-based storage to provide seamless and secure integration between an organization's on-premises IT environment and AWS's storage infrastructure. The service enables you to securely upload data to the AWS cloud for cost effective backup and rapid disaster recovery.

Use the following links to get started using the AWS Storage Gateway Service API Reference:

AWS Storage Gateway resource IDs are in uppercase. When you use these resource IDs with the Amazon EC2 API, EC2 expects resource IDs in lowercase. You must change your resource ID to lowercase to use it with the EC2 API. For example, in Storage Gateway the ID for a volume might be vol-AA22BB012345DAF670. When you use this ID with the EC2 API, you must change it to vol-aa22bb012345daf670. Otherwise, the EC2 API might not behave as expected.

IDs for Storage Gateway volumes and Amazon EBS snapshots created from gateway volumes are changing to a longer format. Starting in December 2016, all new volumes and snapshots will be created with a 17-character string. Starting in April 2016, you will be able to use these longer IDs so you can test your systems with the new format. For more information, see Longer EC2 and EBS Resource IDs.

For example, a volume Amazon Resource Name (ARN) with the longer volume ID format looks like the following:

arn:aws:storagegateway:us-west-2:111122223333:gateway/sgw-12A3456B/volume/vol-1122AABBCCDDEEFFG.

A snapshot ID with the longer ID format looks like the following: snap-78e226633445566ee.

For more information, see Announcement: Heads-up – Longer AWS Storage Gateway volume and snapshot IDs coming in 2016.

" } diff --git a/services/waf/src/main/resources/codegen-resources/waf-regional/service-2.json b/services/waf/src/main/resources/codegen-resources/waf-regional/service-2.json index 432c8a3694a6..24f0ae1491c5 100755 --- a/services/waf/src/main/resources/codegen-resources/waf-regional/service-2.json +++ b/services/waf/src/main/resources/codegen-resources/waf-regional/service-2.json @@ -47,6 +47,24 @@ ], "documentation":"

Creates a ByteMatchSet. You then use UpdateByteMatchSet to identify the part of a web request that you want AWS WAF to inspect, such as the values of the User-Agent header or the query string. For example, you can create a ByteMatchSet that matches any requests with User-Agent headers that contain the string BadBot. You can then configure AWS WAF to reject those requests.

To create and configure a ByteMatchSet, perform the following steps:

  1. Use GetChangeToken to get the change token that you provide in the ChangeToken parameter of a CreateByteMatchSet request.

  2. Submit a CreateByteMatchSet request.

  3. Use GetChangeToken to get the change token that you provide in the ChangeToken parameter of an UpdateByteMatchSet request.

  4. Submit an UpdateByteMatchSet request to specify the part of the request that you want AWS WAF to inspect (for example, the header or the URI) and the value that you want AWS WAF to watch for.

For more information about how to use the AWS WAF API to allow or block HTTP requests, see the AWS WAF Developer Guide.

" }, + "CreateGeoMatchSet":{ + "name":"CreateGeoMatchSet", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"CreateGeoMatchSetRequest"}, + "output":{"shape":"CreateGeoMatchSetResponse"}, + "errors":[ + {"shape":"WAFStaleDataException"}, + {"shape":"WAFInternalErrorException"}, + {"shape":"WAFInvalidAccountException"}, + {"shape":"WAFDisallowedNameException"}, + {"shape":"WAFInvalidParameterException"}, + {"shape":"WAFLimitsExceededException"} + ], + "documentation":"

Creates a GeoMatchSet, which you use to specify which web requests you want to allow or block based on the country that the requests originate from. For example, if you're receiving a lot of requests from one or more countries and you want to block the requests, you can create a GeoMatchSet that contains those countries and then configure AWS WAF to block the requests.

To create and configure a GeoMatchSet, perform the following steps:

  1. Use GetChangeToken to get the change token that you provide in the ChangeToken parameter of a CreateGeoMatchSet request.

  2. Submit a CreateGeoMatchSet request.

  3. Use GetChangeToken to get the change token that you provide in the ChangeToken parameter of an UpdateGeoMatchSet request.

  4. Submit an UpdateGeoMatchSet request to specify the countries that you want AWS WAF to watch for.

For more information about how to use the AWS WAF API to allow or block HTTP requests, see the AWS WAF Developer Guide.
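Steps 1 and 2 above amount to the usual change-token workflow; a hedged sketch with the regional client follows, assuming the generated class is named WafRegionalClient and the model types live in the shared waf model package.

    // CreateGeoMatchSet sketch (assumed class/package names): fetch a change
    // token, then create an empty GeoMatchSet that UpdateGeoMatchSet can fill.
    import software.amazon.awssdk.services.waf.model.CreateGeoMatchSetRequest;
    import software.amazon.awssdk.services.waf.model.CreateGeoMatchSetResponse;
    import software.amazon.awssdk.services.waf.model.GetChangeTokenRequest;
    import software.amazon.awssdk.services.waf.regional.WafRegionalClient;

    public class CreateGeoMatchSetExample {
        public static void main(String[] args) {
            try (WafRegionalClient waf = WafRegionalClient.create()) {
                String token = waf.getChangeToken(GetChangeTokenRequest.builder().build()).changeToken();
                CreateGeoMatchSetResponse created = waf.createGeoMatchSet(CreateGeoMatchSetRequest.builder()
                        .name("BlockedCountries") // hypothetical name
                        .changeToken(token)
                        .build());
                System.out.println("GeoMatchSetId: " + created.geoMatchSet().geoMatchSetId());
            }
        }
    }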

" + }, "CreateIPSet":{ "name":"CreateIPSet", "http":{ @@ -82,6 +100,38 @@ ], "documentation":"

Creates a RateBasedRule. The RateBasedRule contains a RateLimit, which specifies the maximum number of requests that AWS WAF allows from a specified IP address in a five-minute period. The RateBasedRule also contains the IPSet objects, ByteMatchSet objects, and other predicates that identify the requests that you want to count or block if these requests exceed the RateLimit.

If you add more than one predicate to a RateBasedRule, a request not only must exceed the RateLimit, but it also must match all the specifications to be counted or blocked. For example, suppose you add the following to a RateBasedRule:

Further, you specify a RateLimit of 15,000.

You then add the RateBasedRule to a WebACL and specify that you want to block requests that meet the conditions in the rule. For a request to be blocked, it must come from the IP address 192.0.2.44 and the User-Agent header in the request must contain the value BadBot. Further, requests that match these two conditions must be received at a rate of more than 15,000 requests every five minutes. If both conditions are met and the rate is exceeded, AWS WAF blocks the requests. If the rate drops below 15,000 for a five-minute period, AWS WAF no longer blocks the requests.

As a second example, suppose you want to limit requests to a particular page on your site. To do this, you could add the following to a RateBasedRule:

Further, you specify a RateLimit of 15,000.

By adding this RateBasedRule to a WebACL, you could limit requests to your login page without affecting the rest of your site.

To create and configure a RateBasedRule, perform the following steps:

  1. Create and update the predicates that you want to include in the rule. For more information, see CreateByteMatchSet, CreateIPSet, and CreateSqlInjectionMatchSet.

  2. Use GetChangeToken to get the change token that you provide in the ChangeToken parameter of a CreateRateBasedRule request.

  3. Submit a CreateRateBasedRule request.

  4. Use GetChangeToken to get the change token that you provide in the ChangeToken parameter of an UpdateRateBasedRule request.

  5. Submit an UpdateRateBasedRule request to specify the predicates that you want to include in the rule.

  6. Create and update a WebACL that contains the RateBasedRule. For more information, see CreateWebACL.

For more information about how to use the AWS WAF API to allow or block HTTP requests, see the AWS WAF Developer Guide.

" }, + "CreateRegexMatchSet":{ + "name":"CreateRegexMatchSet", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"CreateRegexMatchSetRequest"}, + "output":{"shape":"CreateRegexMatchSetResponse"}, + "errors":[ + {"shape":"WAFStaleDataException"}, + {"shape":"WAFInternalErrorException"}, + {"shape":"WAFDisallowedNameException"}, + {"shape":"WAFLimitsExceededException"} + ], + "documentation":"

Creates a RegexMatchSet. You then use UpdateRegexMatchSet to identify the part of a web request that you want AWS WAF to inspect, such as the values of the User-Agent header or the query string. For example, you can create a RegexMatchSet that contains a RegexMatchTuple that looks for any requests with User-Agent headers that match a RegexPatternSet with pattern B[a@]dB[o0]t. You can then configure AWS WAF to reject those requests.

To create and configure a RegexMatchSet, perform the following steps:

  1. Use GetChangeToken to get the change token that you provide in the ChangeToken parameter of a CreateRegexMatchSet request.

  2. Submit a CreateRegexMatchSet request.

  3. Use GetChangeToken to get the change token that you provide in the ChangeToken parameter of an UpdateRegexMatchSet request.

  4. Submit an UpdateRegexMatchSet request to specify the part of the request that you want AWS WAF to inspect (for example, the header or the URI) and the value, using a RegexPatternSet, that you want AWS WAF to watch for.

For more information about how to use the AWS WAF API to allow or block HTTP requests, see the AWS WAF Developer Guide.

" + }, + "CreateRegexPatternSet":{ + "name":"CreateRegexPatternSet", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"CreateRegexPatternSetRequest"}, + "output":{"shape":"CreateRegexPatternSetResponse"}, + "errors":[ + {"shape":"WAFStaleDataException"}, + {"shape":"WAFInternalErrorException"}, + {"shape":"WAFDisallowedNameException"}, + {"shape":"WAFLimitsExceededException"} + ], + "documentation":"

Creates a RegexPatternSet. You then use UpdateRegexPatternSet to specify the regular expression (regex) pattern that you want AWS WAF to search for, such as B[a@]dB[o0]t. You can then configure AWS WAF to reject those requests.

To create and configure a RegexPatternSet, perform the following steps:

  1. Use GetChangeToken to get the change token that you provide in the ChangeToken parameter of a CreateRegexPatternSet request.

  2. Submit a CreateRegexPatternSet request.

  3. Use GetChangeToken to get the change token that you provide in the ChangeToken parameter of an UpdateRegexPatternSet request.

  4. Submit an UpdateRegexPatternSet request to specify the string that you want AWS WAF to watch for.

For more information about how to use the AWS WAF API to allow or block HTTP requests, see the AWS WAF Developer Guide.

" + }, "CreateRule":{ "name":"CreateRule", "http":{ @@ -189,6 +239,24 @@ ], "documentation":"

Permanently deletes a ByteMatchSet. You can't delete a ByteMatchSet if it's still used in any Rules or if it still includes any ByteMatchTuple objects (any filters).

If you just want to remove a ByteMatchSet from a Rule, use UpdateRule.

To permanently delete a ByteMatchSet, perform the following steps:

  1. Update the ByteMatchSet to remove filters, if any. For more information, see UpdateByteMatchSet.

  2. Use GetChangeToken to get the change token that you provide in the ChangeToken parameter of a DeleteByteMatchSet request.

  3. Submit a DeleteByteMatchSet request.

" }, + "DeleteGeoMatchSet":{ + "name":"DeleteGeoMatchSet", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"DeleteGeoMatchSetRequest"}, + "output":{"shape":"DeleteGeoMatchSetResponse"}, + "errors":[ + {"shape":"WAFStaleDataException"}, + {"shape":"WAFInternalErrorException"}, + {"shape":"WAFInvalidAccountException"}, + {"shape":"WAFNonexistentItemException"}, + {"shape":"WAFReferencedItemException"}, + {"shape":"WAFNonEmptyEntityException"} + ], + "documentation":"

Permanently deletes a GeoMatchSet. You can't delete a GeoMatchSet if it's still used in any Rules or if it still includes any countries.

If you just want to remove a GeoMatchSet from a Rule, use UpdateRule.

To permanently delete a GeoMatchSet from AWS WAF, perform the following steps:

  1. Update the GeoMatchSet to remove any countries. For more information, see UpdateGeoMatchSet.

  2. Use GetChangeToken to get the change token that you provide in the ChangeToken parameter of a DeleteGeoMatchSet request.

  3. Submit a DeleteGeoMatchSet request.

" + }, "DeleteIPSet":{ "name":"DeleteIPSet", "http":{ @@ -225,6 +293,42 @@ ], "documentation":"

Permanently deletes a RateBasedRule. You can't delete a rule if it's still used in any WebACL objects or if it still includes any predicates, such as ByteMatchSet objects.

If you just want to remove a rule from a WebACL, use UpdateWebACL.

To permanently delete a RateBasedRule from AWS WAF, perform the following steps:

  1. Update the RateBasedRule to remove predicates, if any. For more information, see UpdateRateBasedRule.

  2. Use GetChangeToken to get the change token that you provide in the ChangeToken parameter of a DeleteRateBasedRule request.

  3. Submit a DeleteRateBasedRule request.

" }, + "DeleteRegexMatchSet":{ + "name":"DeleteRegexMatchSet", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"DeleteRegexMatchSetRequest"}, + "output":{"shape":"DeleteRegexMatchSetResponse"}, + "errors":[ + {"shape":"WAFInternalErrorException"}, + {"shape":"WAFInvalidAccountException"}, + {"shape":"WAFNonexistentItemException"}, + {"shape":"WAFReferencedItemException"}, + {"shape":"WAFStaleDataException"}, + {"shape":"WAFNonEmptyEntityException"} + ], + "documentation":"

Permanently deletes a RegexMatchSet. You can't delete a RegexMatchSet if it's still used in any Rules or if it still includes any RegexMatchTuples objects (any filters).

If you just want to remove a RegexMatchSet from a Rule, use UpdateRule.

To permanently delete a RegexMatchSet, perform the following steps:

  1. Update the RegexMatchSet to remove filters, if any. For more information, see UpdateRegexMatchSet.

  2. Use GetChangeToken to get the change token that you provide in the ChangeToken parameter of a DeleteRegexMatchSet request.

  3. Submit a DeleteRegexMatchSet request.

" + }, + "DeleteRegexPatternSet":{ + "name":"DeleteRegexPatternSet", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"DeleteRegexPatternSetRequest"}, + "output":{"shape":"DeleteRegexPatternSetResponse"}, + "errors":[ + {"shape":"WAFInternalErrorException"}, + {"shape":"WAFInvalidAccountException"}, + {"shape":"WAFNonexistentItemException"}, + {"shape":"WAFReferencedItemException"}, + {"shape":"WAFStaleDataException"}, + {"shape":"WAFNonEmptyEntityException"} + ], + "documentation":"

Permanently deletes a RegexPatternSet. You can't delete a RegexPatternSet if it's still used in any RegexMatchSet or if the RegexPatternSet is not empty.

" + }, "DeleteRule":{ "name":"DeleteRule", "http":{ @@ -373,6 +477,21 @@ ], "documentation":"

Returns the status of a ChangeToken that you got by calling GetChangeToken. ChangeTokenStatus is one of the following values:

" }, + "GetGeoMatchSet":{ + "name":"GetGeoMatchSet", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"GetGeoMatchSetRequest"}, + "output":{"shape":"GetGeoMatchSetResponse"}, + "errors":[ + {"shape":"WAFInternalErrorException"}, + {"shape":"WAFInvalidAccountException"}, + {"shape":"WAFNonexistentItemException"} + ], + "documentation":"

Returns the GeoMatchSet that is specified by GeoMatchSetId.

" + }, "GetIPSet":{ "name":"GetIPSet", "http":{ @@ -419,6 +538,36 @@ ], "documentation":"

Returns an array of IP addresses currently being blocked by the RateBasedRule that is specified by the RuleId. The maximum number of managed keys that will be blocked is 10,000. If more than 10,000 addresses exceed the rate limit, the 10,000 addresses with the highest rates will be blocked.

" }, + "GetRegexMatchSet":{ + "name":"GetRegexMatchSet", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"GetRegexMatchSetRequest"}, + "output":{"shape":"GetRegexMatchSetResponse"}, + "errors":[ + {"shape":"WAFInternalErrorException"}, + {"shape":"WAFInvalidAccountException"}, + {"shape":"WAFNonexistentItemException"} + ], + "documentation":"

Returns the RegexMatchSet specified by RegexMatchSetId.

" + }, + "GetRegexPatternSet":{ + "name":"GetRegexPatternSet", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"GetRegexPatternSetRequest"}, + "output":{"shape":"GetRegexPatternSetResponse"}, + "errors":[ + {"shape":"WAFInternalErrorException"}, + {"shape":"WAFInvalidAccountException"}, + {"shape":"WAFNonexistentItemException"} + ], + "documentation":"

Returns the RegexPatternSet specified by RegexPatternSetId.

" + }, "GetRule":{ "name":"GetRule", "http":{ @@ -539,6 +688,20 @@ ], "documentation":"

Returns an array of ByteMatchSetSummary objects.

" }, + "ListGeoMatchSets":{ + "name":"ListGeoMatchSets", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"ListGeoMatchSetsRequest"}, + "output":{"shape":"ListGeoMatchSetsResponse"}, + "errors":[ + {"shape":"WAFInternalErrorException"}, + {"shape":"WAFInvalidAccountException"} + ], + "documentation":"

Returns an array of GeoMatchSetSummary objects in the response.

" + }, "ListIPSets":{ "name":"ListIPSets", "http":{ @@ -567,6 +730,34 @@ ], "documentation":"

Returns an array of RuleSummary objects.

" }, + "ListRegexMatchSets":{ + "name":"ListRegexMatchSets", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"ListRegexMatchSetsRequest"}, + "output":{"shape":"ListRegexMatchSetsResponse"}, + "errors":[ + {"shape":"WAFInternalErrorException"}, + {"shape":"WAFInvalidAccountException"} + ], + "documentation":"

Returns an array of RegexMatchSetSummary objects.

" + }, + "ListRegexPatternSets":{ + "name":"ListRegexPatternSets", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"ListRegexPatternSetsRequest"}, + "output":{"shape":"ListRegexPatternSetsResponse"}, + "errors":[ + {"shape":"WAFInternalErrorException"}, + {"shape":"WAFInvalidAccountException"} + ], + "documentation":"

Returns an array of RegexPatternSetSummary objects.

" + }, "ListResourcesForWebACL":{ "name":"ListResourcesForWebACL", "http":{ @@ -672,6 +863,27 @@ ], "documentation":"

Inserts or deletes ByteMatchTuple objects (filters) in a ByteMatchSet. For each ByteMatchTuple object, you specify the following values:

For example, you can add a ByteMatchSetUpdate object that matches web requests in which User-Agent headers contain the string BadBot. You can then configure AWS WAF to block those requests.

To create and configure a ByteMatchSet, perform the following steps:

  1. Create a ByteMatchSet. For more information, see CreateByteMatchSet.

  2. Use GetChangeToken to get the change token that you provide in the ChangeToken parameter of an UpdateByteMatchSet request.

  3. Submit an UpdateByteMatchSet request to specify the part of the request that you want AWS WAF to inspect (for example, the header or the URI) and the value that you want AWS WAF to watch for.

For more information about how to use the AWS WAF API to allow or block HTTP requests, see the AWS WAF Developer Guide.

" }, + "UpdateGeoMatchSet":{ + "name":"UpdateGeoMatchSet", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"UpdateGeoMatchSetRequest"}, + "output":{"shape":"UpdateGeoMatchSetResponse"}, + "errors":[ + {"shape":"WAFStaleDataException"}, + {"shape":"WAFInternalErrorException"}, + {"shape":"WAFInvalidAccountException"}, + {"shape":"WAFInvalidOperationException"}, + {"shape":"WAFInvalidParameterException"}, + {"shape":"WAFNonexistentContainerException"}, + {"shape":"WAFNonexistentItemException"}, + {"shape":"WAFReferencedItemException"}, + {"shape":"WAFLimitsExceededException"} + ], + "documentation":"

Inserts or deletes GeoMatchConstraint objects in a GeoMatchSet. For each GeoMatchConstraint object, you specify the following values:

To create and configure a GeoMatchSet, perform the following steps:

  1. Submit a CreateGeoMatchSet request.

  2. Use GetChangeToken to get the change token that you provide in the ChangeToken parameter of an UpdateGeoMatchSet request.

  3. Submit an UpdateGeoMatchSet request to specify the country that you want AWS WAF to watch for.

When you update a GeoMatchSet, you specify the country that you want to add and/or the country that you want to delete. If you want to change a country, you delete the existing country and add the new one.

For more information about how to use the AWS WAF API to allow or block HTTP requests, see the AWS WAF Developer Guide.
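Step 3 above, sketched as a single INSERT of a GeoMatchConstraint with Type Country and Value US; the Updates member, the enum constants, and the identifier value are assumptions based on the shapes defined later in this file.

    // UpdateGeoMatchSet sketch (assumed names): each update wraps an action plus
    // a GeoMatchConstraint; deleting a country uses ChangeAction.DELETE instead.
    import software.amazon.awssdk.services.waf.model.ChangeAction;
    import software.amazon.awssdk.services.waf.model.GeoMatchConstraint;
    import software.amazon.awssdk.services.waf.model.GeoMatchConstraintType;
    import software.amazon.awssdk.services.waf.model.GeoMatchConstraintValue;
    import software.amazon.awssdk.services.waf.model.GeoMatchSetUpdate;
    import software.amazon.awssdk.services.waf.model.GetChangeTokenRequest;
    import software.amazon.awssdk.services.waf.model.UpdateGeoMatchSetRequest;
    import software.amazon.awssdk.services.waf.regional.WafRegionalClient;

    public class UpdateGeoMatchSetExample {
        public static void main(String[] args) {
            try (WafRegionalClient waf = WafRegionalClient.create()) {
                String token = waf.getChangeToken(GetChangeTokenRequest.builder().build()).changeToken();
                waf.updateGeoMatchSet(UpdateGeoMatchSetRequest.builder()
                        .geoMatchSetId("example1ds3t-46da-4fdb-b8d5-abc321j569j5") // hypothetical id
                        .changeToken(token)
                        .updates(GeoMatchSetUpdate.builder()
                                .action(ChangeAction.INSERT)
                                .geoMatchConstraint(GeoMatchConstraint.builder()
                                        .type(GeoMatchConstraintType.COUNTRY)
                                        .value(GeoMatchConstraintValue.US)
                                        .build())
                                .build())
                        .build());
            }
        }
    }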

" + }, "UpdateIPSet":{ "name":"UpdateIPSet", "http":{ @@ -714,6 +926,45 @@ ], "documentation":"

Inserts or deletes Predicate objects in a rule and updates the RateLimit in the rule.

Each Predicate object identifies a predicate, such as a ByteMatchSet or an IPSet, that specifies the web requests that you want to block or count. The RateLimit specifies the number of requests every five minutes that triggers the rule.

If you add more than one predicate to a RateBasedRule, a request must match all the predicates and exceed the RateLimit to be counted or blocked. For example, suppose you add the following to a RateBasedRule:

Further, you specify a RateLimit of 15,000.

You then add the RateBasedRule to a WebACL and specify that you want to block requests that satisfy the rule. For a request to be blocked, it must come from the IP address 192.0.2.44 and the User-Agent header in the request must contain the value BadBot. Further, requests that match these two conditions must be received at a rate of more than 15,000 every five minutes. If the rate drops below this limit, AWS WAF no longer blocks the requests.

As a second example, suppose you want to limit requests to a particular page on your site. To do this, you could add the following to a RateBasedRule:

Further, you specify a RateLimit of 15,000.

By adding this RateBasedRule to a WebACL, you could limit requests to your login page without affecting the rest of your site.

" }, + "UpdateRegexMatchSet":{ + "name":"UpdateRegexMatchSet", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"UpdateRegexMatchSetRequest"}, + "output":{"shape":"UpdateRegexMatchSetResponse"}, + "errors":[ + {"shape":"WAFStaleDataException"}, + {"shape":"WAFInternalErrorException"}, + {"shape":"WAFDisallowedNameException"}, + {"shape":"WAFLimitsExceededException"}, + {"shape":"WAFNonexistentItemException"}, + {"shape":"WAFNonexistentContainerException"}, + {"shape":"WAFInvalidOperationException"}, + {"shape":"WAFInvalidAccountException"} + ], + "documentation":"

Inserts or deletes RegexMatchSetUpdate objects (filters) in a RegexMatchSet. For each RegexMatchSetUpdate object, you specify the following values:

For example, you can create a RegexPatternSet that matches any requests with User-Agent headers that contain the string B[a@]dB[o0]t. You can then configure AWS WAF to reject those requests.

To create and configure a RegexMatchSet, perform the following steps:

  1. Create a RegexMatchSet. For more information, see CreateRegexMatchSet.

  2. Use GetChangeToken to get the change token that you provide in the ChangeToken parameter of an UpdateRegexMatchSet request.

  3. Submit an UpdateRegexMatchSet request to specify the part of the request that you want AWS WAF to inspect (for example, the header or the URI) and the identifier of the RegexPatternSet that contains the regular expression patterns you want AWS WAF to watch for.

For more information about how to use the AWS WAF API to allow or block HTTP requests, see the AWS WAF Developer Guide.

" + }, + "UpdateRegexPatternSet":{ + "name":"UpdateRegexPatternSet", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"UpdateRegexPatternSetRequest"}, + "output":{"shape":"UpdateRegexPatternSetResponse"}, + "errors":[ + {"shape":"WAFStaleDataException"}, + {"shape":"WAFInternalErrorException"}, + {"shape":"WAFLimitsExceededException"}, + {"shape":"WAFNonexistentContainerException"}, + {"shape":"WAFInvalidOperationException"}, + {"shape":"WAFInvalidAccountException"}, + {"shape":"WAFInvalidRegexPatternException"} + ], + "documentation":"

Inserts or deletes RegexPatternString objects in a RegexPatternSet. For each RegexPatternString object, you specify the following values:

For example, you can create a RegexPatternString such as B[a@]dB[o0]t. AWS WAF will match this RegexPatternString to:

To create and configure a RegexPatternSet, perform the following steps:

  1. Create a RegexPatternSet. For more information, see CreateRegexPatternSet.

  2. Use GetChangeToken to get the change token that you provide in the ChangeToken parameter of an UpdateRegexPatternSet request.

  3. Submit an UpdateRegexPatternSet request to specify the regular expression pattern that you want AWS WAF to watch for.

For more information about how to use the AWS WAF API to allow or block HTTP requests, see the AWS WAF Developer Guide.
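Step 3 above for a pattern set, sketched by inserting the B[a@]dB[o0]t pattern quoted in this documentation; the RegexPatternSetUpdate shape, its regexPatternString member, and the identifier are assumptions not spelled out in this excerpt.

    // UpdateRegexPatternSet sketch (assumed names): add one regex pattern string
    // to an existing RegexPatternSet under a fresh change token.
    import software.amazon.awssdk.services.waf.model.ChangeAction;
    import software.amazon.awssdk.services.waf.model.GetChangeTokenRequest;
    import software.amazon.awssdk.services.waf.model.RegexPatternSetUpdate;
    import software.amazon.awssdk.services.waf.model.UpdateRegexPatternSetRequest;
    import software.amazon.awssdk.services.waf.regional.WafRegionalClient;

    public class UpdateRegexPatternSetExample {
        public static void main(String[] args) {
            try (WafRegionalClient waf = WafRegionalClient.create()) {
                String token = waf.getChangeToken(GetChangeTokenRequest.builder().build()).changeToken();
                waf.updateRegexPatternSet(UpdateRegexPatternSetRequest.builder()
                        .regexPatternSetId("example1ds3t-46da-4fdb-b8d5-abc321j569j5") // hypothetical id
                        .changeToken(token)
                        .updates(RegexPatternSetUpdate.builder()
                                .action(ChangeAction.INSERT)
                                .regexPatternString("B[a@]dB[o0]t") // pattern from the documentation above
                                .build())
                        .build());
            }
        }
    }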

" + }, "UpdateRule":{ "name":"UpdateRule", "http":{ @@ -937,7 +1188,8 @@ }, "ByteMatchSetUpdates":{ "type":"list", - "member":{"shape":"ByteMatchSetUpdate"} + "member":{"shape":"ByteMatchSetUpdate"}, + "min":1 }, "ByteMatchTargetString":{"type":"blob"}, "ByteMatchTuple":{ @@ -1033,6 +1285,36 @@ } } }, + "CreateGeoMatchSetRequest":{ + "type":"structure", + "required":[ + "Name", + "ChangeToken" + ], + "members":{ + "Name":{ + "shape":"ResourceName", + "documentation":"

A friendly name or description of the GeoMatchSet. You can't change Name after you create the GeoMatchSet.

" + }, + "ChangeToken":{ + "shape":"ChangeToken", + "documentation":"

The value returned by the most recent call to GetChangeToken.

" + } + } + }, + "CreateGeoMatchSetResponse":{ + "type":"structure", + "members":{ + "GeoMatchSet":{ + "shape":"GeoMatchSet", + "documentation":"

The GeoMatchSet returned in the CreateGeoMatchSet response. The GeoMatchSet contains no GeoMatchConstraints.

" + }, + "ChangeToken":{ + "shape":"ChangeToken", + "documentation":"

The ChangeToken that you used to submit the CreateGeoMatchSet request. You can also use this value to query the status of the request. For more information, see GetChangeTokenStatus.

" + } + } + }, "CreateIPSetRequest":{ "type":"structure", "required":[ @@ -1108,6 +1390,66 @@ } } }, + "CreateRegexMatchSetRequest":{ + "type":"structure", + "required":[ + "Name", + "ChangeToken" + ], + "members":{ + "Name":{ + "shape":"ResourceName", + "documentation":"

A friendly name or description of the RegexMatchSet. You can't change Name after you create a RegexMatchSet.

" + }, + "ChangeToken":{ + "shape":"ChangeToken", + "documentation":"

The value returned by the most recent call to GetChangeToken.

" + } + } + }, + "CreateRegexMatchSetResponse":{ + "type":"structure", + "members":{ + "RegexMatchSet":{ + "shape":"RegexMatchSet", + "documentation":"

A RegexMatchSet that contains no RegexMatchTuple objects.

" + }, + "ChangeToken":{ + "shape":"ChangeToken", + "documentation":"

The ChangeToken that you used to submit the CreateRegexMatchSet request. You can also use this value to query the status of the request. For more information, see GetChangeTokenStatus.

" + } + } + }, + "CreateRegexPatternSetRequest":{ + "type":"structure", + "required":[ + "Name", + "ChangeToken" + ], + "members":{ + "Name":{ + "shape":"ResourceName", + "documentation":"

A friendly name or description of the RegexPatternSet. You can't change Name after you create a RegexPatternSet.

" + }, + "ChangeToken":{ + "shape":"ChangeToken", + "documentation":"

The value returned by the most recent call to GetChangeToken.

" + } + } + }, + "CreateRegexPatternSetResponse":{ + "type":"structure", + "members":{ + "RegexPatternSet":{ + "shape":"RegexPatternSet", + "documentation":"

A RegexPatternSet that contains no objects.

" + }, + "ChangeToken":{ + "shape":"ChangeToken", + "documentation":"

The ChangeToken that you used to submit the CreateRegexPatternSet request. You can also use this value to query the status of the request. For more information, see GetChangeTokenStatus.

" + } + } + }, "CreateRuleRequest":{ "type":"structure", "required":[ @@ -1303,6 +1645,32 @@ } } }, + "DeleteGeoMatchSetRequest":{ + "type":"structure", + "required":[ + "GeoMatchSetId", + "ChangeToken" + ], + "members":{ + "GeoMatchSetId":{ + "shape":"ResourceId", + "documentation":"

The GeoMatchSetID of the GeoMatchSet that you want to delete. GeoMatchSetId is returned by CreateGeoMatchSet and by ListGeoMatchSets.

" + }, + "ChangeToken":{ + "shape":"ChangeToken", + "documentation":"

The value returned by the most recent call to GetChangeToken.

" + } + } + }, + "DeleteGeoMatchSetResponse":{ + "type":"structure", + "members":{ + "ChangeToken":{ + "shape":"ChangeToken", + "documentation":"

The ChangeToken that you used to submit the DeleteGeoMatchSet request. You can also use this value to query the status of the request. For more information, see GetChangeTokenStatus.

" + } + } + }, "DeleteIPSetRequest":{ "type":"structure", "required":[ @@ -1355,6 +1723,58 @@ } } }, + "DeleteRegexMatchSetRequest":{ + "type":"structure", + "required":[ + "RegexMatchSetId", + "ChangeToken" + ], + "members":{ + "RegexMatchSetId":{ + "shape":"ResourceId", + "documentation":"

The RegexMatchSetId of the RegexMatchSet that you want to delete. RegexMatchSetId is returned by CreateRegexMatchSet and by ListRegexMatchSets.

" + }, + "ChangeToken":{ + "shape":"ChangeToken", + "documentation":"

The value returned by the most recent call to GetChangeToken.

" + } + } + }, + "DeleteRegexMatchSetResponse":{ + "type":"structure", + "members":{ + "ChangeToken":{ + "shape":"ChangeToken", + "documentation":"

The ChangeToken that you used to submit the DeleteRegexMatchSet request. You can also use this value to query the status of the request. For more information, see GetChangeTokenStatus.

" + } + } + }, + "DeleteRegexPatternSetRequest":{ + "type":"structure", + "required":[ + "RegexPatternSetId", + "ChangeToken" + ], + "members":{ + "RegexPatternSetId":{ + "shape":"ResourceId", + "documentation":"

The RegexPatternSetId of the RegexPatternSet that you want to delete. RegexPatternSetId is returned by CreateRegexPatternSet and by ListRegexPatternSets.

" + }, + "ChangeToken":{ + "shape":"ChangeToken", + "documentation":"

The value returned by the most recent call to GetChangeToken.

" + } + } + }, + "DeleteRegexPatternSetResponse":{ + "type":"structure", + "members":{ + "ChangeToken":{ + "shape":"ChangeToken", + "documentation":"

The ChangeToken that you used to submit the DeleteRegexPatternSet request. You can also use this value to query the status of the request. For more information, see GetChangeTokenStatus.

" + } + } + }, "DeleteRuleRequest":{ "type":"structure", "required":[ @@ -1519,6 +1939,353 @@ }, "documentation":"

Specifies where in a web request to look for TargetString.

" }, + "GeoMatchConstraint":{ + "type":"structure", + "required":[ + "Type", + "Value" + ], + "members":{ + "Type":{ + "shape":"GeoMatchConstraintType", + "documentation":"

The type of geographical area you want AWS WAF to search for. Currently Country is the only valid value.

" + }, + "Value":{ + "shape":"GeoMatchConstraintValue", + "documentation":"

The country that you want AWS WAF to search for.

" + } + }, + "documentation":"

The country from which web requests originate that you want AWS WAF to search for.

" + }, + "GeoMatchConstraintType":{ + "type":"string", + "enum":["Country"] + }, + "GeoMatchConstraintValue":{ + "type":"string", + "enum":[ + "AF", + "AX", + "AL", + "DZ", + "AS", + "AD", + "AO", + "AI", + "AQ", + "AG", + "AR", + "AM", + "AW", + "AU", + "AT", + "AZ", + "BS", + "BH", + "BD", + "BB", + "BY", + "BE", + "BZ", + "BJ", + "BM", + "BT", + "BO", + "BQ", + "BA", + "BW", + "BV", + "BR", + "IO", + "BN", + "BG", + "BF", + "BI", + "KH", + "CM", + "CA", + "CV", + "KY", + "CF", + "TD", + "CL", + "CN", + "CX", + "CC", + "CO", + "KM", + "CG", + "CD", + "CK", + "CR", + "CI", + "HR", + "CU", + "CW", + "CY", + "CZ", + "DK", + "DJ", + "DM", + "DO", + "EC", + "EG", + "SV", + "GQ", + "ER", + "EE", + "ET", + "FK", + "FO", + "FJ", + "FI", + "FR", + "GF", + "PF", + "TF", + "GA", + "GM", + "GE", + "DE", + "GH", + "GI", + "GR", + "GL", + "GD", + "GP", + "GU", + "GT", + "GG", + "GN", + "GW", + "GY", + "HT", + "HM", + "VA", + "HN", + "HK", + "HU", + "IS", + "IN", + "ID", + "IR", + "IQ", + "IE", + "IM", + "IL", + "IT", + "JM", + "JP", + "JE", + "JO", + "KZ", + "KE", + "KI", + "KP", + "KR", + "KW", + "KG", + "LA", + "LV", + "LB", + "LS", + "LR", + "LY", + "LI", + "LT", + "LU", + "MO", + "MK", + "MG", + "MW", + "MY", + "MV", + "ML", + "MT", + "MH", + "MQ", + "MR", + "MU", + "YT", + "MX", + "FM", + "MD", + "MC", + "MN", + "ME", + "MS", + "MA", + "MZ", + "MM", + "NA", + "NR", + "NP", + "NL", + "NC", + "NZ", + "NI", + "NE", + "NG", + "NU", + "NF", + "MP", + "NO", + "OM", + "PK", + "PW", + "PS", + "PA", + "PG", + "PY", + "PE", + "PH", + "PN", + "PL", + "PT", + "PR", + "QA", + "RE", + "RO", + "RU", + "RW", + "BL", + "SH", + "KN", + "LC", + "MF", + "PM", + "VC", + "WS", + "SM", + "ST", + "SA", + "SN", + "RS", + "SC", + "SL", + "SG", + "SX", + "SK", + "SI", + "SB", + "SO", + "ZA", + "GS", + "SS", + "ES", + "LK", + "SD", + "SR", + "SJ", + "SZ", + "SE", + "CH", + "SY", + "TW", + "TJ", + "TZ", + "TH", + "TL", + "TG", + "TK", + "TO", + "TT", + "TN", + "TR", + "TM", + "TC", + "TV", + "UG", + "UA", + "AE", + "GB", + "US", + "UM", + "UY", + "UZ", + "VU", + "VE", + "VN", + "VG", + "VI", + "WF", + "EH", + "YE", + "ZM", + "ZW" + ] + }, + "GeoMatchConstraints":{ + "type":"list", + "member":{"shape":"GeoMatchConstraint"} + }, + "GeoMatchSet":{ + "type":"structure", + "required":[ + "GeoMatchSetId", + "GeoMatchConstraints" + ], + "members":{ + "GeoMatchSetId":{ + "shape":"ResourceId", + "documentation":"

The GeoMatchSetId for a GeoMatchSet. You use GeoMatchSetId to get information about a GeoMatchSet (see GeoMatchSet), update a GeoMatchSet (see UpdateGeoMatchSet), insert a GeoMatchSet into a Rule or delete one from a Rule (see UpdateRule), and delete a GeoMatchSet from AWS WAF (see DeleteGeoMatchSet).

GeoMatchSetId is returned by CreateGeoMatchSet and by ListGeoMatchSets.

" + }, + "Name":{ + "shape":"ResourceName", + "documentation":"

A friendly name or description of the GeoMatchSet. You can't change the name of a GeoMatchSet after you create it.

" + }, + "GeoMatchConstraints":{ + "shape":"GeoMatchConstraints", + "documentation":"

An array of GeoMatchConstraint objects, which contain the country that you want AWS WAF to search for.

" + } + }, + "documentation":"

Contains one or more countries that AWS WAF will search for.

" + }, + "GeoMatchSetSummaries":{ + "type":"list", + "member":{"shape":"GeoMatchSetSummary"} + }, + "GeoMatchSetSummary":{ + "type":"structure", + "required":[ + "GeoMatchSetId", + "Name" + ], + "members":{ + "GeoMatchSetId":{ + "shape":"ResourceId", + "documentation":"

The GeoMatchSetId for a GeoMatchSet. You can use GeoMatchSetId in a GetGeoMatchSet request to get detailed information about a GeoMatchSet.

" + }, + "Name":{ + "shape":"ResourceName", + "documentation":"

A friendly name or description of the GeoMatchSet. You can't change the name of a GeoMatchSet after you create it.

" + } + }, + "documentation":"

Contains the identifier and the name of the GeoMatchSet.

" + }, + "GeoMatchSetUpdate":{ + "type":"structure", + "required":[ + "Action", + "GeoMatchConstraint" + ], + "members":{ + "Action":{ + "shape":"ChangeAction", + "documentation":"

Specifies whether to insert or delete a country with UpdateGeoMatchSet.

" + }, + "GeoMatchConstraint":{ + "shape":"GeoMatchConstraint", + "documentation":"

The country from which web requests originate that you want AWS WAF to search for.

" + } + }, + "documentation":"

Specifies the type of update to perform to a GeoMatchSet with UpdateGeoMatchSet.

" + }, + "GeoMatchSetUpdates":{ + "type":"list", + "member":{"shape":"GeoMatchSetUpdate"}, + "min":1 + }, "GetByteMatchSetRequest":{ "type":"structure", "required":["ByteMatchSetId"], @@ -1571,6 +2338,25 @@ } } }, + "GetGeoMatchSetRequest":{ + "type":"structure", + "required":["GeoMatchSetId"], + "members":{ + "GeoMatchSetId":{ + "shape":"ResourceId", + "documentation":"

The GeoMatchSetId of the GeoMatchSet that you want to get. GeoMatchSetId is returned by CreateGeoMatchSet and by ListGeoMatchSets.

" + } + } + }, + "GetGeoMatchSetResponse":{ + "type":"structure", + "members":{ + "GeoMatchSet":{ + "shape":"GeoMatchSet", + "documentation":"

Information about the GeoMatchSet that you specified in the GetGeoMatchSet request. This includes the Type, which for a GeoMatchConstraint is always Country, as well as the Value, which is the identifier for a specific country.

" + } + } + }, "GetIPSetRequest":{ "type":"structure", "required":["IPSetId"], @@ -1636,6 +2422,44 @@ } } }, + "GetRegexMatchSetRequest":{ + "type":"structure", + "required":["RegexMatchSetId"], + "members":{ + "RegexMatchSetId":{ + "shape":"ResourceId", + "documentation":"

The RegexMatchSetId of the RegexMatchSet that you want to get. RegexMatchSetId is returned by CreateRegexMatchSet and by ListRegexMatchSets.

" + } + } + }, + "GetRegexMatchSetResponse":{ + "type":"structure", + "members":{ + "RegexMatchSet":{ + "shape":"RegexMatchSet", + "documentation":"

Information about the RegexMatchSet that you specified in the GetRegexMatchSet request. For more information, see RegexMatchTuple.

" + } + } + }, + "GetRegexPatternSetRequest":{ + "type":"structure", + "required":["RegexPatternSetId"], + "members":{ + "RegexPatternSetId":{ + "shape":"ResourceId", + "documentation":"

The RegexPatternSetId of the RegexPatternSet that you want to get. RegexPatternSetId is returned by CreateRegexPatternSet and by ListRegexPatternSets.

" + } + } + }, + "GetRegexPatternSetResponse":{ + "type":"structure", + "members":{ + "RegexPatternSet":{ + "shape":"RegexPatternSet", + "documentation":"

Information about the RegexPatternSet that you specified in the GetRegexPatternSet request, including the identifier of the pattern set and the regular expression patterns you want AWS WAF to search for.

" + } + } + }, "GetRuleRequest":{ "type":"structure", "required":["RuleId"], @@ -1949,7 +2773,8 @@ }, "IPSetUpdates":{ "type":"list", - "member":{"shape":"IPSetUpdate"} + "member":{"shape":"IPSetUpdate"}, + "min":1 }, "IPString":{"type":"string"}, "ListByteMatchSetsRequest":{ @@ -1978,12 +2803,38 @@ } } }, + "ListGeoMatchSetsRequest":{ + "type":"structure", + "members":{ + "NextMarker":{ + "shape":"NextMarker", + "documentation":"

If you specify a value for Limit and you have more GeoMatchSets than the value of Limit, AWS WAF returns a NextMarker value in the response that allows you to list another group of GeoMatchSet objects. For the second and subsequent ListGeoMatchSets requests, specify the value of NextMarker from the previous response to get information about another batch of GeoMatchSet objects.

" + }, + "Limit":{ + "shape":"PaginationLimit", + "documentation":"

Specifies the number of GeoMatchSet objects that you want AWS WAF to return for this request. If you have more GeoMatchSet objects than the number you specify for Limit, the response includes a NextMarker value that you can use to get another batch of GeoMatchSet objects.

" + } + } + }, + "ListGeoMatchSetsResponse":{ + "type":"structure", + "members":{ + "NextMarker":{ + "shape":"NextMarker", + "documentation":"

If you have more GeoMatchSet objects than the number that you specified for Limit in the request, the response includes a NextMarker value. To list more GeoMatchSet objects, submit another ListGeoMatchSets request, and specify the NextMarker value from the response as the NextMarker value in the next request.

" + }, + "GeoMatchSets":{ + "shape":"GeoMatchSetSummaries", + "documentation":"

An array of GeoMatchSetSummary objects.

" + } + } + }, "ListIPSetsRequest":{ "type":"structure", "members":{ "NextMarker":{ "shape":"NextMarker", - "documentation":"

If you specify a value for Limit and you have more IPSets than the value of Limit, AWS WAF returns a NextMarker value in the response that allows you to list another group of IPSets. For the second and subsequent ListIPSets requests, specify the value of NextMarker from the previous response to get information about another batch of ByteMatchSets.

" + "documentation":"

If you specify a value for Limit and you have more IPSets than the value of Limit, AWS WAF returns a NextMarker value in the response that allows you to list another group of IPSets. For the second and subsequent ListIPSets requests, specify the value of NextMarker from the previous response to get information about another batch of IPSets.

" }, "Limit":{ "shape":"PaginationLimit", @@ -2030,6 +2881,58 @@ } } }, + "ListRegexMatchSetsRequest":{ + "type":"structure", + "members":{ + "NextMarker":{ + "shape":"NextMarker", + "documentation":"

If you specify a value for Limit and you have more RegexMatchSet objects than the value of Limit, AWS WAF returns a NextMarker value in the response that allows you to list another group of RegexMatchSet objects. For the second and subsequent ListRegexMatchSets requests, specify the value of NextMarker from the previous response to get information about another batch of RegexMatchSet objects.

" + }, + "Limit":{ + "shape":"PaginationLimit", + "documentation":"

Specifies the number of RegexMatchSet objects that you want AWS WAF to return for this request. If you have more RegexMatchSet objects than the number you specify for Limit, the response includes a NextMarker value that you can use to get another batch of RegexMatchSet objects.

" + } + } + }, + "ListRegexMatchSetsResponse":{ + "type":"structure", + "members":{ + "NextMarker":{ + "shape":"NextMarker", + "documentation":"

If you have more RegexMatchSet objects than the number that you specified for Limit in the request, the response includes a NextMarker value. To list more RegexMatchSet objects, submit another ListRegexMatchSets request, and specify the NextMarker value from the response as the NextMarker value in the next request.

" + }, + "RegexMatchSets":{ + "shape":"RegexMatchSetSummaries", + "documentation":"

An array of RegexMatchSetSummary objects.

" + } + } + }, + "ListRegexPatternSetsRequest":{ + "type":"structure", + "members":{ + "NextMarker":{ + "shape":"NextMarker", + "documentation":"

If you specify a value for Limit and you have more RegexPatternSet objects than the value of Limit, AWS WAF returns a NextMarker value in the response that allows you to list another group of RegexPatternSet objects. For the second and subsequent ListRegexPatternSets requests, specify the value of NextMarker from the previous response to get information about another batch of RegexPatternSet objects.

" + }, + "Limit":{ + "shape":"PaginationLimit", + "documentation":"

Specifies the number of RegexPatternSet objects that you want AWS WAF to return for this request. If you have more RegexPatternSet objects than the number you specify for Limit, the response includes a NextMarker value that you can use to get another batch of RegexPatternSet objects.

" + } + } + }, + "ListRegexPatternSetsResponse":{ + "type":"structure", + "members":{ + "NextMarker":{ + "shape":"NextMarker", + "documentation":"

If you have more RegexPatternSet objects than the number that you specified for Limit in the request, the response includes a NextMarker value. To list more RegexPatternSet objects, submit another ListRegexPatternSets request, and specify the NextMarker value from the response as the NextMarker value in the next request.

" + }, + "RegexPatternSets":{ + "shape":"RegexPatternSetSummaries", + "documentation":"

An array of RegexPatternSetSummary objects.

" + } + } + }, "ListResourcesForWebACLRequest":{ "type":"structure", "required":["WebACLId"], @@ -2222,6 +3125,8 @@ "BYTE_MATCH_TEXT_TRANSFORMATION", "BYTE_MATCH_POSITIONAL_CONSTRAINT", "SIZE_CONSTRAINT_COMPARISON_OPERATOR", + "GEO_MATCH_LOCATION_TYPE", + "GEO_MATCH_LOCATION_VALUE", "RATE_KEY", "RULE_TYPE", "NEXT_MARKER" @@ -2259,7 +3164,7 @@ "members":{ "Negated":{ "shape":"Negated", - "documentation":"

Set Negated to False if you want AWS WAF to allow, block, or count requests based on the settings in the specified ByteMatchSet, IPSet, SqlInjectionMatchSet, XssMatchSet, or SizeConstraintSet. For example, if an IPSet includes the IP address 192.0.2.44, AWS WAF will allow or block requests based on that IP address.

Set Negated to True if you want AWS WAF to allow or block a request based on the negation of the settings in the ByteMatchSet, IPSet, SqlInjectionMatchSet, XssMatchSet, or SizeConstraintSet. For example, if an IPSet includes the IP address 192.0.2.44, AWS WAF will allow, block, or count requests based on all IP addresses except 192.0.2.44.

" + "documentation":"

Set Negated to False if you want AWS WAF to allow, block, or count requests based on the settings in the specified ByteMatchSet, IPSet, SqlInjectionMatchSet, XssMatchSet, RegexMatchSet, GeoMatchSet, or SizeConstraintSet. For example, if an IPSet includes the IP address 192.0.2.44, AWS WAF will allow or block requests based on that IP address.

Set Negated to True if you want AWS WAF to allow or block a request based on the negation of the settings in the ByteMatchSet, IPSet, SqlInjectionMatchSet, XssMatchSet, RegexMatchSet, GeoMatchSet, or SizeConstraintSet. For example, if an IPSet includes the IP address 192.0.2.44, AWS WAF will allow, block, or count requests based on all IP addresses except 192.0.2.44.

" }, "Type":{ "shape":"PredicateType", @@ -2270,7 +3175,7 @@ "documentation":"

A unique identifier for a predicate in a Rule, such as ByteMatchSetId or IPSetId. The ID is returned by the corresponding Create or List command.

" } }, - "documentation":"

Specifies the ByteMatchSet, IPSet, SqlInjectionMatchSet, XssMatchSet, and SizeConstraintSet objects that you want to add to a Rule and, for each object, indicates whether you want to negate the settings, for example, requests that do NOT originate from the IP address 192.0.2.44.

" + "documentation":"

Specifies the ByteMatchSet, IPSet, SqlInjectionMatchSet, XssMatchSet, RegexMatchSet, GeoMatchSet, and SizeConstraintSet objects that you want to add to a Rule and, for each object, indicates whether you want to negate the settings, for example, requests that do NOT originate from the IP address 192.0.2.44.

" }, "PredicateType":{ "type":"string", @@ -2278,8 +3183,10 @@ "IPMatch", "ByteMatch", "SqlInjectionMatch", + "GeoMatch", "SizeConstraint", - "XssMatch" + "XssMatch", + "RegexMatch" ] }, "Predicates":{ @@ -2330,6 +3237,172 @@ "type":"long", "min":2000 }, + "RegexMatchSet":{ + "type":"structure", + "members":{ + "RegexMatchSetId":{ + "shape":"ResourceId", + "documentation":"

The RegexMatchSetId for a RegexMatchSet. You use RegexMatchSetId to get information about a RegexMatchSet (see GetRegexMatchSet), update a RegexMatchSet (see UpdateRegexMatchSet), insert a RegexMatchSet into a Rule or delete one from a Rule (see UpdateRule), and delete a RegexMatchSet from AWS WAF (see DeleteRegexMatchSet).

RegexMatchSetId is returned by CreateRegexMatchSet and by ListRegexMatchSets.

" + }, + "Name":{ + "shape":"ResourceName", + "documentation":"

A friendly name or description of the RegexMatchSet. You can't change Name after you create a RegexMatchSet.

" + }, + "RegexMatchTuples":{ + "shape":"RegexMatchTuples", + "documentation":"

Contains an array of RegexMatchTuple objects. Each RegexMatchTuple object contains the part of a web request that you want AWS WAF to inspect (FieldToMatch), the text transformation to apply before inspection (TextTransformation), and the RegexPatternSetId of the RegexPatternSet that contains the patterns to search for.

" + } + }, + "documentation":"

In a GetRegexMatchSet request, RegexMatchSet is a complex type that contains the RegexMatchSetId and Name of a RegexMatchSet, and the values that you specified when you updated the RegexMatchSet.

The values are contained in a RegexMatchTuple object, which specifies the parts of web requests that you want AWS WAF to inspect and the values that you want AWS WAF to search for. If a RegexMatchSet contains more than one RegexMatchTuple object, a request needs to match the settings in only one RegexMatchTuple to be considered a match.

" + }, + "RegexMatchSetSummaries":{ + "type":"list", + "member":{"shape":"RegexMatchSetSummary"} + }, + "RegexMatchSetSummary":{ + "type":"structure", + "required":[ + "RegexMatchSetId", + "Name" + ], + "members":{ + "RegexMatchSetId":{ + "shape":"ResourceId", + "documentation":"

The RegexMatchSetId for a RegexMatchSet. You use RegexMatchSetId to get information about a RegexMatchSet, update a RegexMatchSet, remove a RegexMatchSet from a Rule, and delete a RegexMatchSet from AWS WAF.

RegexMatchSetId is returned by CreateRegexMatchSet and by ListRegexMatchSets.

" + }, + "Name":{ + "shape":"ResourceName", + "documentation":"

A friendly name or description of the RegexMatchSet. You can't change Name after you create a RegexMatchSet.

" + } + }, + "documentation":"

Returned by ListRegexMatchSets. Each RegexMatchSetSummary object includes the Name and RegexMatchSetId for one RegexMatchSet.

" + }, + "RegexMatchSetUpdate":{ + "type":"structure", + "required":[ + "Action", + "RegexMatchTuple" + ], + "members":{ + "Action":{ + "shape":"ChangeAction", + "documentation":"

Specifies whether to insert or delete a RegexMatchTuple.

" + }, + "RegexMatchTuple":{ + "shape":"RegexMatchTuple", + "documentation":"

Information about the part of a web request that you want AWS WAF to inspect and the identifier of the regular expression (regex) pattern that you want AWS WAF to search for. If you specify DELETE for the value of Action, the RegexMatchTuple values must exactly match the values in the RegexMatchTuple that you want to delete from the RegexMatchSet.

" + } + }, + "documentation":"

In an UpdateRegexMatchSet request, RegexMatchSetUpdate specifies whether to insert or delete a RegexMatchTuple and includes the settings for the RegexMatchTuple.

" + }, + "RegexMatchSetUpdates":{ + "type":"list", + "member":{"shape":"RegexMatchSetUpdate"}, + "min":1 + }, + "RegexMatchTuple":{ + "type":"structure", + "required":[ + "FieldToMatch", + "TextTransformation", + "RegexPatternSetId" + ], + "members":{ + "FieldToMatch":{ + "shape":"FieldToMatch", + "documentation":"

Specifies where in a web request to look for the RegexPatternSet.

" + }, + "TextTransformation":{ + "shape":"TextTransformation", + "documentation":"

Text transformations eliminate some of the unusual formatting that attackers use in web requests in an effort to bypass AWS WAF. If you specify a transformation, AWS WAF performs the transformation on RegexPatternSet before inspecting a request for a match.

CMD_LINE

When you're concerned that attackers are injecting an operating system command line command and using unusual formatting to disguise some or all of the command, use this option to perform the following transformations:

COMPRESS_WHITE_SPACE

Use this option to replace the following characters with a space character (decimal 32):

COMPRESS_WHITE_SPACE also replaces multiple spaces with one space.

HTML_ENTITY_DECODE

Use this option to replace HTML-encoded characters with unencoded characters. HTML_ENTITY_DECODE performs the following operations:

LOWERCASE

Use this option to convert uppercase letters (A-Z) to lowercase (a-z).

URL_DECODE

Use this option to decode a URL-encoded value.

NONE

Specify NONE if you don't want to perform any text transformations.

" + }, + "RegexPatternSetId":{ + "shape":"ResourceId", + "documentation":"

The RegexPatternSetId for a RegexPatternSet. You use RegexPatternSetId to get information about a RegexPatternSet (see GetRegexPatternSet), update a RegexPatternSet (see UpdateRegexPatternSet), insert a RegexPatternSet into a RegexMatchSet or delete one from a RegexMatchSet (see UpdateRegexMatchSet), and delete a RegexPatternSet from AWS WAF (see DeleteRegexPatternSet).

RegexPatternSetId is returned by CreateRegexPatternSet and by ListRegexPatternSets.

" + } + }, + "documentation":"

The regular expression pattern that you want AWS WAF to search for in web requests, the location in requests that you want AWS WAF to search, and other settings. Each RegexMatchTuple object contains the part of a web request that you want AWS WAF to inspect (FieldToMatch), the text transformation to apply before inspection (TextTransformation), and the RegexPatternSetId of the RegexPatternSet that contains the patterns to search for.

" + }, + "RegexMatchTuples":{ + "type":"list", + "member":{"shape":"RegexMatchTuple"} + }, + "RegexPatternSet":{ + "type":"structure", + "required":[ + "RegexPatternSetId", + "RegexPatternStrings" + ], + "members":{ + "RegexPatternSetId":{ + "shape":"ResourceId", + "documentation":"

The identifier for the RegexPatternSet. You use RegexPatternSetId to get information about a RegexPatternSet, update a RegexPatternSet, remove a RegexPatternSet from a RegexMatchSet, and delete a RegexPatternSet from AWS WAF.

RegexPatternSetId is returned by CreateRegexPatternSet and by ListRegexPatternSets.

" + }, + "Name":{ + "shape":"ResourceName", + "documentation":"

A friendly name or description of the RegexPatternSet. You can't change Name after you create a RegexPatternSet.

" + }, + "RegexPatternStrings":{ + "shape":"RegexPatternStrings", + "documentation":"

Specifies the regular expression (regex) patterns that you want AWS WAF to search for, such as B[a@]dB[o0]t.

" + } + }, + "documentation":"

The RegexPatternSet specifies the regular expression (regex) pattern that you want AWS WAF to search for, such as B[a@]dB[o0]t. You can then configure AWS WAF to reject those requests.

" + }, + "RegexPatternSetSummaries":{ + "type":"list", + "member":{"shape":"RegexPatternSetSummary"} + }, + "RegexPatternSetSummary":{ + "type":"structure", + "required":[ + "RegexPatternSetId", + "Name" + ], + "members":{ + "RegexPatternSetId":{ + "shape":"ResourceId", + "documentation":"

The RegexPatternSetId for a RegexPatternSet. You use RegexPatternSetId to get information about a RegexPatternSet, update a RegexPatternSet, remove a RegexPatternSet from a RegexMatchSet, and delete a RegexPatternSet from AWS WAF.

RegexPatternSetId is returned by CreateRegexPatternSet and by ListRegexPatternSets.

" + }, + "Name":{ + "shape":"ResourceName", + "documentation":"

A friendly name or description of the RegexPatternSet. You can't change Name after you create a RegexPatternSet.

" + } + }, + "documentation":"

Returned by ListRegexPatternSets. Each RegexPatternSetSummary object includes the Name and RegexPatternSetId for one RegexPatternSet.

" + }, + "RegexPatternSetUpdate":{ + "type":"structure", + "required":[ + "Action", + "RegexPatternString" + ], + "members":{ + "Action":{ + "shape":"ChangeAction", + "documentation":"

Specifies whether to insert or delete a RegexPatternString.

" + }, + "RegexPatternString":{ + "shape":"RegexPatternString", + "documentation":"

Specifies the regular expression (regex) pattern that you want AWS WAF to search for, such as B[a@]dB[o0]t.

" + } + }, + "documentation":"

In an UpdateRegexPatternSet request, RegexPatternSetUpdate specifies whether to insert or delete a RegexPatternString and includes the settings for the RegexPatternString.

" + }, + "RegexPatternSetUpdates":{ + "type":"list", + "member":{"shape":"RegexPatternSetUpdate"}, + "min":1 + }, + "RegexPatternString":{ + "type":"string", + "min":1 + }, + "RegexPatternStrings":{ + "type":"list", + "member":{"shape":"RegexPatternString"}, + "max":10 + }, "ResourceArn":{ "type":"string", "max":1224, @@ -2551,7 +3624,8 @@ }, "SizeConstraintSetUpdates":{ "type":"list", - "member":{"shape":"SizeConstraintSetUpdate"} + "member":{"shape":"SizeConstraintSetUpdate"}, + "min":1 }, "SizeConstraints":{ "type":"list", @@ -2621,7 +3695,8 @@ }, "SqlInjectionMatchSetUpdates":{ "type":"list", - "member":{"shape":"SqlInjectionMatchSetUpdate"} + "member":{"shape":"SqlInjectionMatchSetUpdate"}, + "min":1 }, "SqlInjectionMatchTuple":{ "type":"structure", @@ -2707,6 +3782,37 @@ } } }, + "UpdateGeoMatchSetRequest":{ + "type":"structure", + "required":[ + "GeoMatchSetId", + "ChangeToken", + "Updates" + ], + "members":{ + "GeoMatchSetId":{ + "shape":"ResourceId", + "documentation":"

The GeoMatchSetId of the GeoMatchSet that you want to update. GeoMatchSetId is returned by CreateGeoMatchSet and by ListGeoMatchSets.

" + }, + "ChangeToken":{ + "shape":"ChangeToken", + "documentation":"

The value returned by the most recent call to GetChangeToken.

" + }, + "Updates":{ + "shape":"GeoMatchSetUpdates", + "documentation":"

An array of GeoMatchSetUpdate objects that you want to insert into or delete from a GeoMatchSet. For more information, see the applicable data types:

" + } + } + }, + "UpdateGeoMatchSetResponse":{ + "type":"structure", + "members":{ + "ChangeToken":{ + "shape":"ChangeToken", + "documentation":"

The ChangeToken that you used to submit the UpdateGeoMatchSet request. You can also use this value to query the status of the request. For more information, see GetChangeTokenStatus.

" + } + } + }, "UpdateIPSetRequest":{ "type":"structure", "required":[ @@ -2774,6 +3880,68 @@ } } }, + "UpdateRegexMatchSetRequest":{ + "type":"structure", + "required":[ + "RegexMatchSetId", + "Updates", + "ChangeToken" + ], + "members":{ + "RegexMatchSetId":{ + "shape":"ResourceId", + "documentation":"

The RegexMatchSetId of the RegexMatchSet that you want to update. RegexMatchSetId is returned by CreateRegexMatchSet and by ListRegexMatchSets.

" + }, + "Updates":{ + "shape":"RegexMatchSetUpdates", + "documentation":"

An array of RegexMatchSetUpdate objects that you want to insert into or delete from a RegexMatchSet. For more information, see RegexMatchTuple.

" + }, + "ChangeToken":{ + "shape":"ChangeToken", + "documentation":"

The value returned by the most recent call to GetChangeToken.

" + } + } + }, + "UpdateRegexMatchSetResponse":{ + "type":"structure", + "members":{ + "ChangeToken":{ + "shape":"ChangeToken", + "documentation":"

The ChangeToken that you used to submit the UpdateRegexMatchSet request. You can also use this value to query the status of the request. For more information, see GetChangeTokenStatus.

" + } + } + }, + "UpdateRegexPatternSetRequest":{ + "type":"structure", + "required":[ + "RegexPatternSetId", + "Updates", + "ChangeToken" + ], + "members":{ + "RegexPatternSetId":{ + "shape":"ResourceId", + "documentation":"

The RegexPatternSetId of the RegexPatternSet that you want to update. RegexPatternSetId is returned by CreateRegexPatternSet and by ListRegexPatternSets.

" + }, + "Updates":{ + "shape":"RegexPatternSetUpdates", + "documentation":"

An array of RegexPatternSetUpdate objects that you want to insert into or delete from a RegexPatternSet.

" + }, + "ChangeToken":{ + "shape":"ChangeToken", + "documentation":"

The value returned by the most recent call to GetChangeToken.

" + } + } + }, + "UpdateRegexPatternSetResponse":{ + "type":"structure", + "members":{ + "ChangeToken":{ + "shape":"ChangeToken", + "documentation":"

The ChangeToken that you used to submit the UpdateRegexPatternSet request. You can also use this value to query the status of the request. For more information, see GetChangeTokenStatus.

" + } + } + }, "UpdateRuleRequest":{ "type":"structure", "required":[ @@ -2975,7 +4143,15 @@ "parameter":{"shape":"ParameterExceptionParameter"}, "reason":{"shape":"ParameterExceptionReason"} }, - "documentation":"

The operation failed because AWS WAF didn't recognize a parameter in the request. For example:

", + "documentation":"

The operation failed because AWS WAF didn't recognize a parameter in the request. For example:

", + "exception":true + }, + "WAFInvalidRegexPatternException":{ + "type":"structure", + "members":{ + "message":{"shape":"errorMessage"} + }, + "documentation":"

The regular expression (regex) you specified in RegexPatternString is invalid.

", "exception":true }, "WAFLimitsExceededException":{ @@ -3199,7 +4375,8 @@ }, "XssMatchSetUpdates":{ "type":"list", - "member":{"shape":"XssMatchSetUpdate"} + "member":{"shape":"XssMatchSetUpdate"}, + "min":1 }, "XssMatchTuple":{ "type":"structure", diff --git a/services/waf/src/main/resources/codegen-resources/waf/service-2.json b/services/waf/src/main/resources/codegen-resources/waf/service-2.json index 6d06e8226fc2..cb6b7958ae65 100755 --- a/services/waf/src/main/resources/codegen-resources/waf/service-2.json +++ b/services/waf/src/main/resources/codegen-resources/waf/service-2.json @@ -30,6 +30,24 @@ ], "documentation":"

Creates a ByteMatchSet. You then use UpdateByteMatchSet to identify the part of a web request that you want AWS WAF to inspect, such as the values of the User-Agent header or the query string. For example, you can create a ByteMatchSet that matches any requests with User-Agent headers that contain the string BadBot. You can then configure AWS WAF to reject those requests.

To create and configure a ByteMatchSet, perform the following steps:

  1. Use GetChangeToken to get the change token that you provide in the ChangeToken parameter of a CreateByteMatchSet request.

  2. Submit a CreateByteMatchSet request.

  3. Use GetChangeToken to get the change token that you provide in the ChangeToken parameter of an UpdateByteMatchSet request.

  4. Submit an UpdateByteMatchSet request to specify the part of the request that you want AWS WAF to inspect (for example, the header or the URI) and the value that you want AWS WAF to watch for.

For more information about how to use the AWS WAF API to allow or block HTTP requests, see the AWS WAF Developer Guide.

" }, + "CreateGeoMatchSet":{ + "name":"CreateGeoMatchSet", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"CreateGeoMatchSetRequest"}, + "output":{"shape":"CreateGeoMatchSetResponse"}, + "errors":[ + {"shape":"WAFStaleDataException"}, + {"shape":"WAFInternalErrorException"}, + {"shape":"WAFInvalidAccountException"}, + {"shape":"WAFDisallowedNameException"}, + {"shape":"WAFInvalidParameterException"}, + {"shape":"WAFLimitsExceededException"} + ], + "documentation":"

Creates a GeoMatchSet, which you use to specify which web requests you want to allow or block based on the country that the requests originate from. For example, if you're receiving a lot of requests from one or more countries and you want to block the requests, you can create a GeoMatchSet that contains those countries and then configure AWS WAF to block the requests.

To create and configure a GeoMatchSet, perform the following steps:

  1. Use GetChangeToken to get the change token that you provide in the ChangeToken parameter of a CreateGeoMatchSet request.

  2. Submit a CreateGeoMatchSet request.

  3. Use GetChangeToken to get the change token that you provide in the ChangeToken parameter of an UpdateGeoMatchSet request.

  4. Submit an UpdateGeoMatchSet request to specify the countries that you want AWS WAF to watch for.

For more information about how to use the AWS WAF API to allow or block HTTP requests, see the AWS WAF Developer Guide.
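As a rough illustration of steps 1-2, here is a minimal sketch using the AWS SDK for Java 2.x client generated from this model; the package and class names (software.amazon.awssdk.services.waf.WafClient) follow the usual v2 conventions and the set name is made up:

    import software.amazon.awssdk.services.waf.WafClient;
    import software.amazon.awssdk.services.waf.model.CreateGeoMatchSetRequest;
    import software.amazon.awssdk.services.waf.model.CreateGeoMatchSetResponse;
    import software.amazon.awssdk.services.waf.model.GetChangeTokenRequest;

    public class CreateGeoMatchSetSketch {
        public static void main(String[] args) {
            try (WafClient waf = WafClient.create()) {
                // Step 1: get a change token for the create call.
                String token = waf.getChangeToken(GetChangeTokenRequest.builder().build())
                                  .changeToken();
                // Step 2: create an empty GeoMatchSet (no GeoMatchConstraints yet).
                CreateGeoMatchSetResponse created = waf.createGeoMatchSet(
                        CreateGeoMatchSetRequest.builder()
                                .name("BlockedCountries")   // illustrative name
                                .changeToken(token)
                                .build());
                // Steps 3-4 (UpdateGeoMatchSet) add the countries to the new set.
                System.out.println(created.geoMatchSet().geoMatchSetId());
            }
        }
    }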

" + }, "CreateIPSet":{ "name":"CreateIPSet", "http":{ @@ -65,6 +83,38 @@ ], "documentation":"

Creates a RateBasedRule. The RateBasedRule contains a RateLimit, which specifies the maximum number of requests that AWS WAF allows from a specified IP address in a five-minute period. The RateBasedRule also contains the IPSet objects, ByteMatchSet objects, and other predicates that identify the requests that you want to count or block if these requests exceed the RateLimit.

If you add more than one predicate to a RateBasedRule, a request not only must exceed the RateLimit, but it also must match all the specifications to be counted or blocked. For example, suppose you add the following to a RateBasedRule:

Further, you specify a RateLimit of 15,000.

You then add the RateBasedRule to a WebACL and specify that you want to block requests that meet the conditions in the rule. For a request to be blocked, it must come from the IP address 192.0.2.44 and the User-Agent header in the request must contain the value BadBot. Further, requests that match these two conditions must be received at a rate of more than 15,000 requests every five minutes. If both conditions are met and the rate is exceeded, AWS WAF blocks the requests. If the rate drops below 15,000 for a five-minute period, AWS WAF no longer blocks the requests.

As a second example, suppose you want to limit requests to a particular page on your site. To do this, you could add the following to a RateBasedRule:

Further, you specify a RateLimit of 15,000.

By adding this RateBasedRule to a WebACL, you could limit requests to your login page without affecting the rest of your site.

To create and configure a RateBasedRule, perform the following steps:

  1. Create and update the predicates that you want to include in the rule. For more information, see CreateByteMatchSet, CreateIPSet, and CreateSqlInjectionMatchSet.

  2. Use GetChangeToken to get the change token that you provide in the ChangeToken parameter of a CreateRule request.

  3. Submit a CreateRateBasedRule request.

  4. Use GetChangeToken to get the change token that you provide in the ChangeToken parameter of an UpdateRule request.

  5. Submit an UpdateRateBasedRule request to specify the predicates that you want to include in the rule.

  6. Create and update a WebACL that contains the RateBasedRule. For more information, see CreateWebACL.

For more information about how to use the AWS WAF API to allow or block HTTP requests, see the AWS WAF Developer Guide.

" }, + "CreateRegexMatchSet":{ + "name":"CreateRegexMatchSet", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"CreateRegexMatchSetRequest"}, + "output":{"shape":"CreateRegexMatchSetResponse"}, + "errors":[ + {"shape":"WAFStaleDataException"}, + {"shape":"WAFInternalErrorException"}, + {"shape":"WAFDisallowedNameException"}, + {"shape":"WAFLimitsExceededException"} + ], + "documentation":"

Creates a RegexMatchSet. You then use UpdateRegexMatchSet to identify the part of a web request that you want AWS WAF to inspect, such as the values of the User-Agent header or the query string. For example, you can create a RegexMatchSet that contains a RegexMatchTuple that looks for any requests with User-Agent headers that match a RegexPatternSet with pattern B[a@]dB[o0]t. You can then configure AWS WAF to reject those requests.

To create and configure a RegexMatchSet, perform the following steps:

  1. Use GetChangeToken to get the change token that you provide in the ChangeToken parameter of a CreateRegexMatchSet request.

  2. Submit a CreateRegexMatchSet request.

  3. Use GetChangeToken to get the change token that you provide in the ChangeToken parameter of an UpdateRegexMatchSet request.

  4. Submit an UpdateRegexMatchSet request to specify the part of the request that you want AWS WAF to inspect (for example, the header or the URI) and the value, using a RegexPatternSet, that you want AWS WAF to watch for.

For more information about how to use the AWS WAF API to allow or block HTTP requests, see the AWS WAF Developer Guide.

" + }, + "CreateRegexPatternSet":{ + "name":"CreateRegexPatternSet", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"CreateRegexPatternSetRequest"}, + "output":{"shape":"CreateRegexPatternSetResponse"}, + "errors":[ + {"shape":"WAFStaleDataException"}, + {"shape":"WAFInternalErrorException"}, + {"shape":"WAFDisallowedNameException"}, + {"shape":"WAFLimitsExceededException"} + ], + "documentation":"

Creates a RegexPatternSet. You then use UpdateRegexPatternSet to specify the regular expression (regex) pattern that you want AWS WAF to search for, such as B[a@]dB[o0]t. You can then configure AWS WAF to reject those requests.

To create and configure a RegexPatternSet, perform the following steps:

  1. Use GetChangeToken to get the change token that you provide in the ChangeToken parameter of a CreateRegexPatternSet request.

  2. Submit a CreateRegexPatternSet request.

  3. Use GetChangeToken to get the change token that you provide in the ChangeToken parameter of an UpdateRegexPatternSet request.

  4. Submit an UpdateRegexPatternSet request to specify the string that you want AWS WAF to watch for.

For more information about how to use the AWS WAF API to allow or block HTTP requests, see the AWS WAF Developer Guide.
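A minimal sketch of steps 1-2, assuming the AWS SDK for Java 2.x WafClient generated from this model; the pattern-set name is illustrative, and the regex patterns themselves are added later with UpdateRegexPatternSet:

    import software.amazon.awssdk.services.waf.WafClient;
    import software.amazon.awssdk.services.waf.model.CreateRegexPatternSetRequest;
    import software.amazon.awssdk.services.waf.model.GetChangeTokenRequest;
    import software.amazon.awssdk.services.waf.model.RegexPatternSet;

    class CreateRegexPatternSetSketch {
        static RegexPatternSet create(WafClient waf) {
            // Step 1: fetch a change token; step 2: create an empty RegexPatternSet.
            String token = waf.getChangeToken(GetChangeTokenRequest.builder().build())
                              .changeToken();
            return waf.createRegexPatternSet(CreateRegexPatternSetRequest.builder()
                            .name("BadBotPatterns")   // illustrative name
                            .changeToken(token)
                            .build())
                      .regexPatternSet();
        }
    }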

" + }, "CreateRule":{ "name":"CreateRule", "http":{ @@ -172,6 +222,24 @@ ], "documentation":"

Permanently deletes a ByteMatchSet. You can't delete a ByteMatchSet if it's still used in any Rules or if it still includes any ByteMatchTuple objects (any filters).

If you just want to remove a ByteMatchSet from a Rule, use UpdateRule.

To permanently delete a ByteMatchSet, perform the following steps:

  1. Update the ByteMatchSet to remove filters, if any. For more information, see UpdateByteMatchSet.

  2. Use GetChangeToken to get the change token that you provide in the ChangeToken parameter of a DeleteByteMatchSet request.

  3. Submit a DeleteByteMatchSet request.

" }, + "DeleteGeoMatchSet":{ + "name":"DeleteGeoMatchSet", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"DeleteGeoMatchSetRequest"}, + "output":{"shape":"DeleteGeoMatchSetResponse"}, + "errors":[ + {"shape":"WAFStaleDataException"}, + {"shape":"WAFInternalErrorException"}, + {"shape":"WAFInvalidAccountException"}, + {"shape":"WAFNonexistentItemException"}, + {"shape":"WAFReferencedItemException"}, + {"shape":"WAFNonEmptyEntityException"} + ], + "documentation":"

Permanently deletes a GeoMatchSet. You can't delete a GeoMatchSet if it's still used in any Rules or if it still includes any countries.

If you just want to remove a GeoMatchSet from a Rule, use UpdateRule.

To permanently delete a GeoMatchSet from AWS WAF, perform the following steps:

  1. Update the GeoMatchSet to remove any countries. For more information, see UpdateGeoMatchSet.

  2. Use GetChangeToken to get the change token that you provide in the ChangeToken parameter of a DeleteGeoMatchSet request.

  3. Submit a DeleteGeoMatchSet request.
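A minimal sketch of steps 2-3, assuming the AWS SDK for Java 2.x WafClient generated from this model and a GeoMatchSet that has already been emptied:

    import software.amazon.awssdk.services.waf.WafClient;
    import software.amazon.awssdk.services.waf.model.DeleteGeoMatchSetRequest;
    import software.amazon.awssdk.services.waf.model.GetChangeTokenRequest;

    class DeleteGeoMatchSetSketch {
        static void delete(WafClient waf, String geoMatchSetId) {
            // Step 2: get a change token; step 3: submit the delete request.
            String token = waf.getChangeToken(GetChangeTokenRequest.builder().build())
                              .changeToken();
            waf.deleteGeoMatchSet(DeleteGeoMatchSetRequest.builder()
                    .geoMatchSetId(geoMatchSetId)
                    .changeToken(token)
                    .build());
        }
    }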

" + }, "DeleteIPSet":{ "name":"DeleteIPSet", "http":{ @@ -208,6 +276,42 @@ ], "documentation":"

Permanently deletes a RateBasedRule. You can't delete a rule if it's still used in any WebACL objects or if it still includes any predicates, such as ByteMatchSet objects.

If you just want to remove a rule from a WebACL, use UpdateWebACL.

To permanently delete a RateBasedRule from AWS WAF, perform the following steps:

  1. Update the RateBasedRule to remove predicates, if any. For more information, see UpdateRateBasedRule.

  2. Use GetChangeToken to get the change token that you provide in the ChangeToken parameter of a DeleteRateBasedRule request.

  3. Submit a DeleteRateBasedRule request.

" }, + "DeleteRegexMatchSet":{ + "name":"DeleteRegexMatchSet", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"DeleteRegexMatchSetRequest"}, + "output":{"shape":"DeleteRegexMatchSetResponse"}, + "errors":[ + {"shape":"WAFInternalErrorException"}, + {"shape":"WAFInvalidAccountException"}, + {"shape":"WAFNonexistentItemException"}, + {"shape":"WAFReferencedItemException"}, + {"shape":"WAFStaleDataException"}, + {"shape":"WAFNonEmptyEntityException"} + ], + "documentation":"

Permanently deletes a RegexMatchSet. You can't delete a RegexMatchSet if it's still used in any Rules or if it still includes any RegexMatchTuple objects (any filters).

If you just want to remove a RegexMatchSet from a Rule, use UpdateRule.

To permanently delete a RegexMatchSet, perform the following steps:

  1. Update the RegexMatchSet to remove filters, if any. For more information, see UpdateRegexMatchSet.

  2. Use GetChangeToken to get the change token that you provide in the ChangeToken parameter of a DeleteRegexMatchSet request.

  3. Submit a DeleteRegexMatchSet request.

" + }, + "DeleteRegexPatternSet":{ + "name":"DeleteRegexPatternSet", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"DeleteRegexPatternSetRequest"}, + "output":{"shape":"DeleteRegexPatternSetResponse"}, + "errors":[ + {"shape":"WAFInternalErrorException"}, + {"shape":"WAFInvalidAccountException"}, + {"shape":"WAFNonexistentItemException"}, + {"shape":"WAFReferencedItemException"}, + {"shape":"WAFStaleDataException"}, + {"shape":"WAFNonEmptyEntityException"} + ], + "documentation":"

Permanently deletes a RegexPatternSet. You can't delete a RegexPatternSet if it's still used in any RegexMatchSet or if the RegexPatternSet is not empty.

" + }, "DeleteRule":{ "name":"DeleteRule", "http":{ @@ -340,6 +444,21 @@ ], "documentation":"

Returns the status of a ChangeToken that you got by calling GetChangeToken. ChangeTokenStatus is one of the following values:

" }, + "GetGeoMatchSet":{ + "name":"GetGeoMatchSet", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"GetGeoMatchSetRequest"}, + "output":{"shape":"GetGeoMatchSetResponse"}, + "errors":[ + {"shape":"WAFInternalErrorException"}, + {"shape":"WAFInvalidAccountException"}, + {"shape":"WAFNonexistentItemException"} + ], + "documentation":"

Returns the GeoMatchSet that is specified by GeoMatchSetId.

" + }, "GetIPSet":{ "name":"GetIPSet", "http":{ @@ -386,6 +505,36 @@ ], "documentation":"

Returns an array of IP addresses currently being blocked by the RateBasedRule that is specified by the RuleId. The maximum number of managed keys that will be blocked is 10,000. If more than 10,000 addresses exceed the rate limit, the 10,000 addresses with the highest rates will be blocked.

" }, + "GetRegexMatchSet":{ + "name":"GetRegexMatchSet", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"GetRegexMatchSetRequest"}, + "output":{"shape":"GetRegexMatchSetResponse"}, + "errors":[ + {"shape":"WAFInternalErrorException"}, + {"shape":"WAFInvalidAccountException"}, + {"shape":"WAFNonexistentItemException"} + ], + "documentation":"

Returns the RegexMatchSet specified by RegexMatchSetId.

" + }, + "GetRegexPatternSet":{ + "name":"GetRegexPatternSet", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"GetRegexPatternSetRequest"}, + "output":{"shape":"GetRegexPatternSetResponse"}, + "errors":[ + {"shape":"WAFInternalErrorException"}, + {"shape":"WAFInvalidAccountException"}, + {"shape":"WAFNonexistentItemException"} + ], + "documentation":"

Returns the RegexPatternSet specified by RegexPatternSetId.

" + }, "GetRule":{ "name":"GetRule", "http":{ @@ -489,6 +638,20 @@ ], "documentation":"

Returns an array of ByteMatchSetSummary objects.

" }, + "ListGeoMatchSets":{ + "name":"ListGeoMatchSets", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"ListGeoMatchSetsRequest"}, + "output":{"shape":"ListGeoMatchSetsResponse"}, + "errors":[ + {"shape":"WAFInternalErrorException"}, + {"shape":"WAFInvalidAccountException"} + ], + "documentation":"

Returns an array of GeoMatchSetSummary objects in the response.

" + }, "ListIPSets":{ "name":"ListIPSets", "http":{ @@ -517,6 +680,34 @@ ], "documentation":"

Returns an array of RuleSummary objects.

" }, + "ListRegexMatchSets":{ + "name":"ListRegexMatchSets", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"ListRegexMatchSetsRequest"}, + "output":{"shape":"ListRegexMatchSetsResponse"}, + "errors":[ + {"shape":"WAFInternalErrorException"}, + {"shape":"WAFInvalidAccountException"} + ], + "documentation":"

Returns an array of RegexMatchSetSummary objects.

" + }, + "ListRegexPatternSets":{ + "name":"ListRegexPatternSets", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"ListRegexPatternSetsRequest"}, + "output":{"shape":"ListRegexPatternSetsResponse"}, + "errors":[ + {"shape":"WAFInternalErrorException"}, + {"shape":"WAFInvalidAccountException"} + ], + "documentation":"

Returns an array of RegexPatternSetSummary objects.

" + }, "ListRules":{ "name":"ListRules", "http":{ @@ -607,6 +798,27 @@ ], "documentation":"

Inserts or deletes ByteMatchTuple objects (filters) in a ByteMatchSet. For each ByteMatchTuple object, you specify the following values:

For example, you can add a ByteMatchSetUpdate object that matches web requests in which User-Agent headers contain the string BadBot. You can then configure AWS WAF to block those requests.

To create and configure a ByteMatchSet, perform the following steps:

  1. Create a ByteMatchSet. For more information, see CreateByteMatchSet.

  2. Use GetChangeToken to get the change token that you provide in the ChangeToken parameter of an UpdateByteMatchSet request.

  3. Submit an UpdateByteMatchSet request to specify the part of the request that you want AWS WAF to inspect (for example, the header or the URI) and the value that you want AWS WAF to watch for.

For more information about how to use the AWS WAF API to allow or block HTTP requests, see the AWS WAF Developer Guide.

" }, + "UpdateGeoMatchSet":{ + "name":"UpdateGeoMatchSet", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"UpdateGeoMatchSetRequest"}, + "output":{"shape":"UpdateGeoMatchSetResponse"}, + "errors":[ + {"shape":"WAFStaleDataException"}, + {"shape":"WAFInternalErrorException"}, + {"shape":"WAFInvalidAccountException"}, + {"shape":"WAFInvalidOperationException"}, + {"shape":"WAFInvalidParameterException"}, + {"shape":"WAFNonexistentContainerException"}, + {"shape":"WAFNonexistentItemException"}, + {"shape":"WAFReferencedItemException"}, + {"shape":"WAFLimitsExceededException"} + ], + "documentation":"

Inserts or deletes GeoMatchConstraint objects in a GeoMatchSet. For each GeoMatchConstraint object, you specify the following values:

To create and configure a GeoMatchSet, perform the following steps:

  1. Submit a CreateGeoMatchSet request.

  2. Use GetChangeToken to get the change token that you provide in the ChangeToken parameter of an UpdateGeoMatchSet request.

  3. Submit an UpdateGeoMatchSet request to specify the country that you want AWS WAF to watch for.

When you update a GeoMatchSet, you specify the country that you want to add and/or the country that you want to delete. If you want to change a country, you delete the existing country and add the new one.

For more information about how to use the AWS WAF API to allow or block HTTP requests, see the AWS WAF Developer Guide.
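A minimal sketch of inserting one country, assuming the AWS SDK for Java 2.x WafClient generated from this model; US is just an example country code:

    import software.amazon.awssdk.services.waf.WafClient;
    import software.amazon.awssdk.services.waf.model.ChangeAction;
    import software.amazon.awssdk.services.waf.model.GeoMatchConstraint;
    import software.amazon.awssdk.services.waf.model.GeoMatchConstraintType;
    import software.amazon.awssdk.services.waf.model.GeoMatchConstraintValue;
    import software.amazon.awssdk.services.waf.model.GeoMatchSetUpdate;
    import software.amazon.awssdk.services.waf.model.GetChangeTokenRequest;
    import software.amazon.awssdk.services.waf.model.UpdateGeoMatchSetRequest;

    class UpdateGeoMatchSetSketch {
        static void addCountry(WafClient waf, String geoMatchSetId) {
            // Get a change token, then insert a single GeoMatchConstraint.
            String token = waf.getChangeToken(GetChangeTokenRequest.builder().build())
                              .changeToken();
            waf.updateGeoMatchSet(UpdateGeoMatchSetRequest.builder()
                    .geoMatchSetId(geoMatchSetId)
                    .changeToken(token)
                    .updates(GeoMatchSetUpdate.builder()
                            .action(ChangeAction.INSERT)
                            .geoMatchConstraint(GeoMatchConstraint.builder()
                                    .type(GeoMatchConstraintType.COUNTRY)
                                    .value(GeoMatchConstraintValue.US)   // example country code
                                    .build())
                            .build())
                    .build());
        }
    }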

" + }, "UpdateIPSet":{ "name":"UpdateIPSet", "http":{ @@ -649,6 +861,45 @@ ], "documentation":"

Inserts or deletes Predicate objects in a rule and updates the RateLimit in the rule.

Each Predicate object identifies a predicate, such as a ByteMatchSet or an IPSet, that specifies the web requests that you want to block or count. The RateLimit specifies the number of requests every five minutes that triggers the rule.

If you add more than one predicate to a RateBasedRule, a request must match all the predicates and exceed the RateLimit to be counted or blocked. For example, suppose you add the following to a RateBasedRule:

Further, you specify a RateLimit of 15,000.

You then add the RateBasedRule to a WebACL and specify that you want to block requests that satisfy the rule. For a request to be blocked, it must come from the IP address 192.0.2.44 and the User-Agent header in the request must contain the value BadBot. Further, requests that match these two conditions must be received at a rate of more than 15,000 every five minutes. If the rate drops below this limit, AWS WAF no longer blocks the requests.

As a second example, suppose you want to limit requests to a particular page on your site. To do this, you could add the following to a RateBasedRule:

Further, you specify a RateLimit of 15,000.

By adding this RateBasedRule to a WebACL, you could limit requests to your login page without affecting the rest of your site.

" }, + "UpdateRegexMatchSet":{ + "name":"UpdateRegexMatchSet", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"UpdateRegexMatchSetRequest"}, + "output":{"shape":"UpdateRegexMatchSetResponse"}, + "errors":[ + {"shape":"WAFStaleDataException"}, + {"shape":"WAFInternalErrorException"}, + {"shape":"WAFDisallowedNameException"}, + {"shape":"WAFLimitsExceededException"}, + {"shape":"WAFNonexistentItemException"}, + {"shape":"WAFNonexistentContainerException"}, + {"shape":"WAFInvalidOperationException"}, + {"shape":"WAFInvalidAccountException"} + ], + "documentation":"

Inserts or deletes RegexMatchSetUpdate objects (filters) in a RegexMatchSet. For each RegexMatchSetUpdate object, you specify the following values:

For example, you can create a RegexPatternSet that matches any requests with User-Agent headers that contain the string B[a@]dB[o0]t. You can then configure AWS WAF to reject those requests.

To create and configure a RegexMatchSet, perform the following steps:

  1. Create a RegexMatchSet. For more information, see CreateRegexMatchSet.

  2. Use GetChangeToken to get the change token that you provide in the ChangeToken parameter of an UpdateRegexMatchSet request.

  3. Submit an UpdateRegexMatchSet request to specify the part of the request that you want AWS WAF to inspect (for example, the header or the URI) and the identifier of the RegexPatternSet that contains the regular expression patterns that you want AWS WAF to watch for.

For more information about how to use the AWS WAF API to allow or block HTTP requests, see the AWS WAF Developer Guide.
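A minimal sketch of step 3, assuming the AWS SDK for Java 2.x WafClient generated from this model; the header name is the example from this documentation, the pattern-set id is assumed to exist, and the string setters are used for the FieldToMatch type and text transformation:

    import software.amazon.awssdk.services.waf.WafClient;
    import software.amazon.awssdk.services.waf.model.ChangeAction;
    import software.amazon.awssdk.services.waf.model.FieldToMatch;
    import software.amazon.awssdk.services.waf.model.GetChangeTokenRequest;
    import software.amazon.awssdk.services.waf.model.RegexMatchSetUpdate;
    import software.amazon.awssdk.services.waf.model.RegexMatchTuple;
    import software.amazon.awssdk.services.waf.model.UpdateRegexMatchSetRequest;

    class UpdateRegexMatchSetSketch {
        static void inspectUserAgent(WafClient waf, String regexMatchSetId, String regexPatternSetId) {
            String token = waf.getChangeToken(GetChangeTokenRequest.builder().build())
                              .changeToken();
            waf.updateRegexMatchSet(UpdateRegexMatchSetRequest.builder()
                    .regexMatchSetId(regexMatchSetId)
                    .changeToken(token)
                    .updates(RegexMatchSetUpdate.builder()
                            .action(ChangeAction.INSERT)
                            .regexMatchTuple(RegexMatchTuple.builder()
                                    .fieldToMatch(FieldToMatch.builder()
                                            .type("HEADER")       // inspect a header...
                                            .data("User-Agent")   // ...specifically User-Agent
                                            .build())
                                    .textTransformation("NONE")
                                    .regexPatternSetId(regexPatternSetId)
                                    .build())
                            .build())
                    .build());
        }
    }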

" + }, + "UpdateRegexPatternSet":{ + "name":"UpdateRegexPatternSet", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"UpdateRegexPatternSetRequest"}, + "output":{"shape":"UpdateRegexPatternSetResponse"}, + "errors":[ + {"shape":"WAFStaleDataException"}, + {"shape":"WAFInternalErrorException"}, + {"shape":"WAFLimitsExceededException"}, + {"shape":"WAFNonexistentContainerException"}, + {"shape":"WAFInvalidOperationException"}, + {"shape":"WAFInvalidAccountException"}, + {"shape":"WAFInvalidRegexPatternException"} + ], + "documentation":"

Inserts or deletes RegexPatternSetUpdate objects (filters) in a RegexPatternSet. For each RegexPatternSetUpdate object, you specify the following values:

For example, you can create a RegexPatternString such as B[a@]dB[o0]t. AWS WAF will match this RegexPatternString to:

To create and configure a RegexPatternSet, perform the following steps:

  1. Create a RegexPatternSet. For more information, see CreateRegexPatternSet.

  2. Use GetChangeToken to get the change token that you provide in the ChangeToken parameter of an UpdateRegexPatternSet request.

  3. Submit an UpdateRegexPatternSet request to specify the regular expression pattern that you want AWS WAF to watch for.

For more information about how to use the AWS WAF API to allow or block HTTP requests, see the AWS WAF Developer Guide.
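A minimal sketch of step 3, assuming the AWS SDK for Java 2.x WafClient generated from this model; the pattern string is the example used above:

    import software.amazon.awssdk.services.waf.WafClient;
    import software.amazon.awssdk.services.waf.model.ChangeAction;
    import software.amazon.awssdk.services.waf.model.GetChangeTokenRequest;
    import software.amazon.awssdk.services.waf.model.RegexPatternSetUpdate;
    import software.amazon.awssdk.services.waf.model.UpdateRegexPatternSetRequest;

    class UpdateRegexPatternSetSketch {
        static void addPattern(WafClient waf, String regexPatternSetId) {
            // Get a change token, then insert a single RegexPatternString.
            String token = waf.getChangeToken(GetChangeTokenRequest.builder().build())
                              .changeToken();
            waf.updateRegexPatternSet(UpdateRegexPatternSetRequest.builder()
                    .regexPatternSetId(regexPatternSetId)
                    .changeToken(token)
                    .updates(RegexPatternSetUpdate.builder()
                            .action(ChangeAction.INSERT)
                            .regexPatternString("B[a@]dB[o0]t")   // pattern from the example above
                            .build())
                    .build());
        }
    }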

" + }, "UpdateRule":{ "name":"UpdateRule", "http":{ @@ -850,7 +1101,8 @@ }, "ByteMatchSetUpdates":{ "type":"list", - "member":{"shape":"ByteMatchSetUpdate"} + "member":{"shape":"ByteMatchSetUpdate"}, + "min":1 }, "ByteMatchTargetString":{"type":"blob"}, "ByteMatchTuple":{ @@ -946,6 +1198,36 @@ } } }, + "CreateGeoMatchSetRequest":{ + "type":"structure", + "required":[ + "Name", + "ChangeToken" + ], + "members":{ + "Name":{ + "shape":"ResourceName", + "documentation":"

A friendly name or description of the GeoMatchSet. You can't change Name after you create the GeoMatchSet.

" + }, + "ChangeToken":{ + "shape":"ChangeToken", + "documentation":"

The value returned by the most recent call to GetChangeToken.

" + } + } + }, + "CreateGeoMatchSetResponse":{ + "type":"structure", + "members":{ + "GeoMatchSet":{ + "shape":"GeoMatchSet", + "documentation":"

The GeoMatchSet returned in the CreateGeoMatchSet response. The GeoMatchSet contains no GeoMatchConstraints.

" + }, + "ChangeToken":{ + "shape":"ChangeToken", + "documentation":"

The ChangeToken that you used to submit the CreateGeoMatchSet request. You can also use this value to query the status of the request. For more information, see GetChangeTokenStatus.

" + } + } + }, "CreateIPSetRequest":{ "type":"structure", "required":[ @@ -1021,6 +1303,66 @@ } } }, + "CreateRegexMatchSetRequest":{ + "type":"structure", + "required":[ + "Name", + "ChangeToken" + ], + "members":{ + "Name":{ + "shape":"ResourceName", + "documentation":"

A friendly name or description of the RegexMatchSet. You can't change Name after you create a RegexMatchSet.

" + }, + "ChangeToken":{ + "shape":"ChangeToken", + "documentation":"

The value returned by the most recent call to GetChangeToken.

" + } + } + }, + "CreateRegexMatchSetResponse":{ + "type":"structure", + "members":{ + "RegexMatchSet":{ + "shape":"RegexMatchSet", + "documentation":"

A RegexMatchSet that contains no RegexMatchTuple objects.

" + }, + "ChangeToken":{ + "shape":"ChangeToken", + "documentation":"

The ChangeToken that you used to submit the CreateRegexMatchSet request. You can also use this value to query the status of the request. For more information, see GetChangeTokenStatus.

" + } + } + }, + "CreateRegexPatternSetRequest":{ + "type":"structure", + "required":[ + "Name", + "ChangeToken" + ], + "members":{ + "Name":{ + "shape":"ResourceName", + "documentation":"

A friendly name or description of the RegexPatternSet. You can't change Name after you create a RegexPatternSet.

" + }, + "ChangeToken":{ + "shape":"ChangeToken", + "documentation":"

The value returned by the most recent call to GetChangeToken.

" + } + } + }, + "CreateRegexPatternSetResponse":{ + "type":"structure", + "members":{ + "RegexPatternSet":{ + "shape":"RegexPatternSet", + "documentation":"

A RegexPatternSet that contains no objects.

" + }, + "ChangeToken":{ + "shape":"ChangeToken", + "documentation":"

The ChangeToken that you used to submit the CreateRegexPatternSet request. You can also use this value to query the status of the request. For more information, see GetChangeTokenStatus.

" + } + } + }, "CreateRuleRequest":{ "type":"structure", "required":[ @@ -1216,6 +1558,32 @@ } } }, + "DeleteGeoMatchSetRequest":{ + "type":"structure", + "required":[ + "GeoMatchSetId", + "ChangeToken" + ], + "members":{ + "GeoMatchSetId":{ + "shape":"ResourceId", + "documentation":"

The GeoMatchSetId of the GeoMatchSet that you want to delete. GeoMatchSetId is returned by CreateGeoMatchSet and by ListGeoMatchSets.

" + }, + "ChangeToken":{ + "shape":"ChangeToken", + "documentation":"

The value returned by the most recent call to GetChangeToken.

" + } + } + }, + "DeleteGeoMatchSetResponse":{ + "type":"structure", + "members":{ + "ChangeToken":{ + "shape":"ChangeToken", + "documentation":"

The ChangeToken that you used to submit the DeleteGeoMatchSet request. You can also use this value to query the status of the request. For more information, see GetChangeTokenStatus.

" + } + } + }, "DeleteIPSetRequest":{ "type":"structure", "required":[ @@ -1268,6 +1636,58 @@ } } }, + "DeleteRegexMatchSetRequest":{ + "type":"structure", + "required":[ + "RegexMatchSetId", + "ChangeToken" + ], + "members":{ + "RegexMatchSetId":{ + "shape":"ResourceId", + "documentation":"

The RegexMatchSetId of the RegexMatchSet that you want to delete. RegexMatchSetId is returned by CreateRegexMatchSet and by ListRegexMatchSets.

" + }, + "ChangeToken":{ + "shape":"ChangeToken", + "documentation":"

The value returned by the most recent call to GetChangeToken.

" + } + } + }, + "DeleteRegexMatchSetResponse":{ + "type":"structure", + "members":{ + "ChangeToken":{ + "shape":"ChangeToken", + "documentation":"

The ChangeToken that you used to submit the DeleteRegexMatchSet request. You can also use this value to query the status of the request. For more information, see GetChangeTokenStatus.

" + } + } + }, + "DeleteRegexPatternSetRequest":{ + "type":"structure", + "required":[ + "RegexPatternSetId", + "ChangeToken" + ], + "members":{ + "RegexPatternSetId":{ + "shape":"ResourceId", + "documentation":"

The RegexPatternSetId of the RegexPatternSet that you want to delete. RegexPatternSetId is returned by CreateRegexPatternSet and by ListRegexPatternSets.

" + }, + "ChangeToken":{ + "shape":"ChangeToken", + "documentation":"

The value returned by the most recent call to GetChangeToken.

" + } + } + }, + "DeleteRegexPatternSetResponse":{ + "type":"structure", + "members":{ + "ChangeToken":{ + "shape":"ChangeToken", + "documentation":"

The ChangeToken that you used to submit the DeleteRegexPatternSet request. You can also use this value to query the status of the request. For more information, see GetChangeTokenStatus.

" + } + } + }, "DeleteRuleRequest":{ "type":"structure", "required":[ @@ -1417,6 +1837,353 @@ }, "documentation":"

Specifies where in a web request to look for TargetString.

" }, + "GeoMatchConstraint":{ + "type":"structure", + "required":[ + "Type", + "Value" + ], + "members":{ + "Type":{ + "shape":"GeoMatchConstraintType", + "documentation":"

The type of geographical area you want AWS WAF to search for. Currently Country is the only valid value.

" + }, + "Value":{ + "shape":"GeoMatchConstraintValue", + "documentation":"

The country that you want AWS WAF to search for.

" + } + }, + "documentation":"

The country from which web requests originate and that you want AWS WAF to search for.

" + }, + "GeoMatchConstraintType":{ + "type":"string", + "enum":["Country"] + }, + "GeoMatchConstraintValue":{ + "type":"string", + "enum":[ + "AF", + "AX", + "AL", + "DZ", + "AS", + "AD", + "AO", + "AI", + "AQ", + "AG", + "AR", + "AM", + "AW", + "AU", + "AT", + "AZ", + "BS", + "BH", + "BD", + "BB", + "BY", + "BE", + "BZ", + "BJ", + "BM", + "BT", + "BO", + "BQ", + "BA", + "BW", + "BV", + "BR", + "IO", + "BN", + "BG", + "BF", + "BI", + "KH", + "CM", + "CA", + "CV", + "KY", + "CF", + "TD", + "CL", + "CN", + "CX", + "CC", + "CO", + "KM", + "CG", + "CD", + "CK", + "CR", + "CI", + "HR", + "CU", + "CW", + "CY", + "CZ", + "DK", + "DJ", + "DM", + "DO", + "EC", + "EG", + "SV", + "GQ", + "ER", + "EE", + "ET", + "FK", + "FO", + "FJ", + "FI", + "FR", + "GF", + "PF", + "TF", + "GA", + "GM", + "GE", + "DE", + "GH", + "GI", + "GR", + "GL", + "GD", + "GP", + "GU", + "GT", + "GG", + "GN", + "GW", + "GY", + "HT", + "HM", + "VA", + "HN", + "HK", + "HU", + "IS", + "IN", + "ID", + "IR", + "IQ", + "IE", + "IM", + "IL", + "IT", + "JM", + "JP", + "JE", + "JO", + "KZ", + "KE", + "KI", + "KP", + "KR", + "KW", + "KG", + "LA", + "LV", + "LB", + "LS", + "LR", + "LY", + "LI", + "LT", + "LU", + "MO", + "MK", + "MG", + "MW", + "MY", + "MV", + "ML", + "MT", + "MH", + "MQ", + "MR", + "MU", + "YT", + "MX", + "FM", + "MD", + "MC", + "MN", + "ME", + "MS", + "MA", + "MZ", + "MM", + "NA", + "NR", + "NP", + "NL", + "NC", + "NZ", + "NI", + "NE", + "NG", + "NU", + "NF", + "MP", + "NO", + "OM", + "PK", + "PW", + "PS", + "PA", + "PG", + "PY", + "PE", + "PH", + "PN", + "PL", + "PT", + "PR", + "QA", + "RE", + "RO", + "RU", + "RW", + "BL", + "SH", + "KN", + "LC", + "MF", + "PM", + "VC", + "WS", + "SM", + "ST", + "SA", + "SN", + "RS", + "SC", + "SL", + "SG", + "SX", + "SK", + "SI", + "SB", + "SO", + "ZA", + "GS", + "SS", + "ES", + "LK", + "SD", + "SR", + "SJ", + "SZ", + "SE", + "CH", + "SY", + "TW", + "TJ", + "TZ", + "TH", + "TL", + "TG", + "TK", + "TO", + "TT", + "TN", + "TR", + "TM", + "TC", + "TV", + "UG", + "UA", + "AE", + "GB", + "US", + "UM", + "UY", + "UZ", + "VU", + "VE", + "VN", + "VG", + "VI", + "WF", + "EH", + "YE", + "ZM", + "ZW" + ] + }, + "GeoMatchConstraints":{ + "type":"list", + "member":{"shape":"GeoMatchConstraint"} + }, + "GeoMatchSet":{ + "type":"structure", + "required":[ + "GeoMatchSetId", + "GeoMatchConstraints" + ], + "members":{ + "GeoMatchSetId":{ + "shape":"ResourceId", + "documentation":"

The GeoMatchSetId for a GeoMatchSet. You use GeoMatchSetId to get information about a GeoMatchSet (see GeoMatchSet), update a GeoMatchSet (see UpdateGeoMatchSet), insert a GeoMatchSet into a Rule or delete one from a Rule (see UpdateRule), and delete a GeoMatchSet from AWS WAF (see DeleteGeoMatchSet).

GeoMatchSetId is returned by CreateGeoMatchSet and by ListGeoMatchSets.

" + }, + "Name":{ + "shape":"ResourceName", + "documentation":"

A friendly name or description of the GeoMatchSet. You can't change the name of a GeoMatchSet after you create it.

" + }, + "GeoMatchConstraints":{ + "shape":"GeoMatchConstraints", + "documentation":"

An array of GeoMatchConstraint objects, each of which contains a country that you want AWS WAF to search for.

" + } + }, + "documentation":"

Contains one or more countries that AWS WAF will search for.

" + }, + "GeoMatchSetSummaries":{ + "type":"list", + "member":{"shape":"GeoMatchSetSummary"} + }, + "GeoMatchSetSummary":{ + "type":"structure", + "required":[ + "GeoMatchSetId", + "Name" + ], + "members":{ + "GeoMatchSetId":{ + "shape":"ResourceId", + "documentation":"

The GeoMatchSetId for a GeoMatchSet. You can use GeoMatchSetId in a GetGeoMatchSet request to get detailed information about a GeoMatchSet.

" + }, + "Name":{ + "shape":"ResourceName", + "documentation":"

A friendly name or description of the GeoMatchSet. You can't change the name of a GeoMatchSet after you create it.

" + } + }, + "documentation":"

Contains the identifier and the name of the GeoMatchSet.

" + }, + "GeoMatchSetUpdate":{ + "type":"structure", + "required":[ + "Action", + "GeoMatchConstraint" + ], + "members":{ + "Action":{ + "shape":"ChangeAction", + "documentation":"

Specifies whether to insert or delete a country with UpdateGeoMatchSet.

" + }, + "GeoMatchConstraint":{ + "shape":"GeoMatchConstraint", + "documentation":"

The country from which web requests originate and that you want AWS WAF to search for.

" + } + }, + "documentation":"

Specifies the type of update to perform to a GeoMatchSet with UpdateGeoMatchSet.

" + }, + "GeoMatchSetUpdates":{ + "type":"list", + "member":{"shape":"GeoMatchSetUpdate"}, + "min":1 + }, "GetByteMatchSetRequest":{ "type":"structure", "required":["ByteMatchSetId"], @@ -1469,6 +2236,25 @@ } } }, + "GetGeoMatchSetRequest":{ + "type":"structure", + "required":["GeoMatchSetId"], + "members":{ + "GeoMatchSetId":{ + "shape":"ResourceId", + "documentation":"

The GeoMatchSetId of the GeoMatchSet that you want to get. GeoMatchSetId is returned by CreateGeoMatchSet and by ListGeoMatchSets.

" + } + } + }, + "GetGeoMatchSetResponse":{ + "type":"structure", + "members":{ + "GeoMatchSet":{ + "shape":"GeoMatchSet", + "documentation":"

Information about the GeoMatchSet that you specified in the GetGeoMatchSet request. This includes the Type, which for a GeoMatchConstraint is always Country, as well as the Value, which is the identifier for a specific country.

" + } + } + }, "GetIPSetRequest":{ "type":"structure", "required":["IPSetId"], @@ -1534,6 +2320,44 @@ } } }, + "GetRegexMatchSetRequest":{ + "type":"structure", + "required":["RegexMatchSetId"], + "members":{ + "RegexMatchSetId":{ + "shape":"ResourceId", + "documentation":"

The RegexMatchSetId of the RegexMatchSet that you want to get. RegexMatchSetId is returned by CreateRegexMatchSet and by ListRegexMatchSets.

" + } + } + }, + "GetRegexMatchSetResponse":{ + "type":"structure", + "members":{ + "RegexMatchSet":{ + "shape":"RegexMatchSet", + "documentation":"

Information about the RegexMatchSet that you specified in the GetRegexMatchSet request. For more information, see RegexMatchTuple.

" + } + } + }, + "GetRegexPatternSetRequest":{ + "type":"structure", + "required":["RegexPatternSetId"], + "members":{ + "RegexPatternSetId":{ + "shape":"ResourceId", + "documentation":"

The RegexPatternSetId of the RegexPatternSet that you want to get. RegexPatternSetId is returned by CreateRegexPatternSet and by ListRegexPatternSets.

" + } + } + }, + "GetRegexPatternSetResponse":{ + "type":"structure", + "members":{ + "RegexPatternSet":{ + "shape":"RegexPatternSet", + "documentation":"

Information about the RegexPatternSet that you specified in the GetRegexPatternSet request, including the identifier of the pattern set and the regular expression patterns you want AWS WAF to search for.

" + } + } + }, "GetRuleRequest":{ "type":"structure", "required":["RuleId"], @@ -1828,7 +2652,8 @@ }, "IPSetUpdates":{ "type":"list", - "member":{"shape":"IPSetUpdate"} + "member":{"shape":"IPSetUpdate"}, + "min":1 }, "IPString":{"type":"string"}, "ListByteMatchSetsRequest":{ @@ -1857,12 +2682,38 @@ } } }, + "ListGeoMatchSetsRequest":{ + "type":"structure", + "members":{ + "NextMarker":{ + "shape":"NextMarker", + "documentation":"

If you specify a value for Limit and you have more GeoMatchSets than the value of Limit, AWS WAF returns a NextMarker value in the response that allows you to list another group of GeoMatchSet objects. For the second and subsequent ListGeoMatchSets requests, specify the value of NextMarker from the previous response to get information about another batch of GeoMatchSet objects.

" + }, + "Limit":{ + "shape":"PaginationLimit", + "documentation":"

Specifies the number of GeoMatchSet objects that you want AWS WAF to return for this request. If you have more GeoMatchSet objects than the number you specify for Limit, the response includes a NextMarker value that you can use to get another batch of GeoMatchSet objects.

" + } + } + }, + "ListGeoMatchSetsResponse":{ + "type":"structure", + "members":{ + "NextMarker":{ + "shape":"NextMarker", + "documentation":"

If you have more GeoMatchSet objects than the number that you specified for Limit in the request, the response includes a NextMarker value. To list more GeoMatchSet objects, submit another ListGeoMatchSets request, and specify the NextMarker value from the response in the NextMarker value in the next request.

" + }, + "GeoMatchSets":{ + "shape":"GeoMatchSetSummaries", + "documentation":"

An array of GeoMatchSetSummary objects.

" + } + } + }, "ListIPSetsRequest":{ "type":"structure", "members":{ "NextMarker":{ "shape":"NextMarker", - "documentation":"

If you specify a value for Limit and you have more IPSets than the value of Limit, AWS WAF returns a NextMarker value in the response that allows you to list another group of IPSets. For the second and subsequent ListIPSets requests, specify the value of NextMarker from the previous response to get information about another batch of ByteMatchSets.

" + "documentation":"

If you specify a value for Limit and you have more IPSets than the value of Limit, AWS WAF returns a NextMarker value in the response that allows you to list another group of IPSets. For the second and subsequent ListIPSets requests, specify the value of NextMarker from the previous response to get information about another batch of IPSets.

" }, "Limit":{ "shape":"PaginationLimit", @@ -1909,6 +2760,58 @@ } } }, + "ListRegexMatchSetsRequest":{ + "type":"structure", + "members":{ + "NextMarker":{ + "shape":"NextMarker", + "documentation":"

If you specify a value for Limit and you have more RegexMatchSet objects than the value of Limit, AWS WAF returns a NextMarker value in the response that allows you to list another group of RegexMatchSet objects. For the second and subsequent ListRegexMatchSets requests, specify the value of NextMarker from the previous response to get information about another batch of RegexMatchSet objects.

" + }, + "Limit":{ + "shape":"PaginationLimit", + "documentation":"

Specifies the number of RegexMatchSet objects that you want AWS WAF to return for this request. If you have more RegexMatchSet objects than the number you specify for Limit, the response includes a NextMarker value that you can use to get another batch of RegexMatchSet objects.

" + } + } + }, + "ListRegexMatchSetsResponse":{ + "type":"structure", + "members":{ + "NextMarker":{ + "shape":"NextMarker", + "documentation":"

If you have more RegexMatchSet objects than the number that you specified for Limit in the request, the response includes a NextMarker value. To list more RegexMatchSet objects, submit another ListRegexMatchSets request, and specify the NextMarker value from the response in the NextMarker value in the next request.

" + }, + "RegexMatchSets":{ + "shape":"RegexMatchSetSummaries", + "documentation":"

An array of RegexMatchSetSummary objects.

" + } + } + }, + "ListRegexPatternSetsRequest":{ + "type":"structure", + "members":{ + "NextMarker":{ + "shape":"NextMarker", + "documentation":"

If you specify a value for Limit and you have more RegexPatternSet objects than the value of Limit, AWS WAF returns a NextMarker value in the response that allows you to list another group of RegexPatternSet objects. For the second and subsequent ListRegexPatternSets requests, specify the value of NextMarker from the previous response to get information about another batch of RegexPatternSet objects.

" + }, + "Limit":{ + "shape":"PaginationLimit", + "documentation":"

Specifies the number of RegexPatternSet objects that you want AWS WAF to return for this request. If you have more RegexPatternSet objects than the number you specify for Limit, the response includes a NextMarker value that you can use to get another batch of RegexPatternSet objects.

" + } + } + }, + "ListRegexPatternSetsResponse":{ + "type":"structure", + "members":{ + "NextMarker":{ + "shape":"NextMarker", + "documentation":"

If you have more RegexPatternSet objects than the number that you specified for Limit in the request, the response includes a NextMarker value. To list more RegexPatternSet objects, submit another ListRegexPatternSets request, and specify the NextMarker value from the response in the NextMarker value in the next request.

" + }, + "RegexPatternSets":{ + "shape":"RegexPatternSetSummaries", + "documentation":"

An array of RegexPatternSetSummary objects.

" + } + } + }, "ListRulesRequest":{ "type":"structure", "members":{ @@ -2082,6 +2985,8 @@ "BYTE_MATCH_TEXT_TRANSFORMATION", "BYTE_MATCH_POSITIONAL_CONSTRAINT", "SIZE_CONSTRAINT_COMPARISON_OPERATOR", + "GEO_MATCH_LOCATION_TYPE", + "GEO_MATCH_LOCATION_VALUE", "RATE_KEY", "RULE_TYPE", "NEXT_MARKER" @@ -2119,7 +3024,7 @@ "members":{ "Negated":{ "shape":"Negated", - "documentation":"

Set Negated to False if you want AWS WAF to allow, block, or count requests based on the settings in the specified ByteMatchSet, IPSet, SqlInjectionMatchSet, XssMatchSet, or SizeConstraintSet. For example, if an IPSet includes the IP address 192.0.2.44, AWS WAF will allow or block requests based on that IP address.

Set Negated to True if you want AWS WAF to allow or block a request based on the negation of the settings in the ByteMatchSet, IPSet, SqlInjectionMatchSet, XssMatchSet, or SizeConstraintSet. For example, if an IPSet includes the IP address 192.0.2.44, AWS WAF will allow, block, or count requests based on all IP addresses except 192.0.2.44.

" + "documentation":"

Set Negated to False if you want AWS WAF to allow, block, or count requests based on the settings in the specified ByteMatchSet, IPSet, SqlInjectionMatchSet, XssMatchSet, RegexMatchSet, GeoMatchSet, or SizeConstraintSet. For example, if an IPSet includes the IP address 192.0.2.44, AWS WAF will allow or block requests based on that IP address.

Set Negated to True if you want AWS WAF to allow or block a request based on the negation of the settings in the ByteMatchSet, IPSet, SqlInjectionMatchSet, XssMatchSet, RegexMatchSet, GeoMatchSet, or SizeConstraintSet. For example, if an IPSet includes the IP address 192.0.2.44, AWS WAF will allow, block, or count requests based on all IP addresses except 192.0.2.44.

" }, "Type":{ "shape":"PredicateType", @@ -2130,7 +3035,7 @@ "documentation":"

A unique identifier for a predicate in a Rule, such as ByteMatchSetId or IPSetId. The ID is returned by the corresponding Create or List command.

" } }, - "documentation":"

Specifies the ByteMatchSet, IPSet, SqlInjectionMatchSet, XssMatchSet, and SizeConstraintSet objects that you want to add to a Rule and, for each object, indicates whether you want to negate the settings, for example, requests that do NOT originate from the IP address 192.0.2.44.

" + "documentation":"

Specifies the ByteMatchSet, IPSet, SqlInjectionMatchSet, XssMatchSet, RegexMatchSet, GeoMatchSet, and SizeConstraintSet objects that you want to add to a Rule and, for each object, indicates whether you want to negate the settings, for example, requests that do NOT originate from the IP address 192.0.2.44.

" }, "PredicateType":{ "type":"string", @@ -2138,8 +3043,10 @@ "IPMatch", "ByteMatch", "SqlInjectionMatch", + "GeoMatch", "SizeConstraint", - "XssMatch" + "XssMatch", + "RegexMatch" ] }, "Predicates":{ @@ -2190,6 +3097,172 @@ "type":"long", "min":2000 }, + "RegexMatchSet":{ + "type":"structure", + "members":{ + "RegexMatchSetId":{ + "shape":"ResourceId", + "documentation":"

The RegexMatchSetId for a RegexMatchSet. You use RegexMatchSetId to get information about a RegexMatchSet (see GetRegexMatchSet), update a RegexMatchSet (see UpdateRegexMatchSet), insert a RegexMatchSet into a Rule or delete one from a Rule (see UpdateRule), and delete a RegexMatchSet from AWS WAF (see DeleteRegexMatchSet).

RegexMatchSetId is returned by CreateRegexMatchSet and by ListRegexMatchSets.

" + }, + "Name":{ + "shape":"ResourceName", + "documentation":"

A friendly name or description of the RegexMatchSet. You can't change Name after you create a RegexMatchSet.

" + }, + "RegexMatchTuples":{ + "shape":"RegexMatchTuples", + "documentation":"

Contains an array of RegexMatchTuple objects. Each RegexMatchTuple object contains the FieldToMatch, TextTransformation, and RegexPatternSetId settings.

" + } + }, + "documentation":"

In a GetRegexMatchSet request, RegexMatchSet is a complex type that contains the RegexMatchSetId and Name of a RegexMatchSet, and the values that you specified when you updated the RegexMatchSet.

The values are contained in a RegexMatchTuple object, which specifies the parts of web requests that you want AWS WAF to inspect and the values that you want AWS WAF to search for. If a RegexMatchSet contains more than one RegexMatchTuple object, a request needs to match the settings in only one RegexMatchTuple to be considered a match.

" + }, + "RegexMatchSetSummaries":{ + "type":"list", + "member":{"shape":"RegexMatchSetSummary"} + }, + "RegexMatchSetSummary":{ + "type":"structure", + "required":[ + "RegexMatchSetId", + "Name" + ], + "members":{ + "RegexMatchSetId":{ + "shape":"ResourceId", + "documentation":"

The RegexMatchSetId for a RegexMatchSet. You use RegexMatchSetId to get information about a RegexMatchSet, update a RegexMatchSet, remove a RegexMatchSet from a Rule, and delete a RegexMatchSet from AWS WAF.

RegexMatchSetId is returned by CreateRegexMatchSet and by ListRegexMatchSets.

" + }, + "Name":{ + "shape":"ResourceName", + "documentation":"

A friendly name or description of the RegexMatchSet. You can't change Name after you create a RegexMatchSet.

" + } + }, + "documentation":"

Returned by ListRegexMatchSets. Each RegexMatchSetSummary object includes the Name and RegexMatchSetId for one RegexMatchSet.

" + }, + "RegexMatchSetUpdate":{ + "type":"structure", + "required":[ + "Action", + "RegexMatchTuple" + ], + "members":{ + "Action":{ + "shape":"ChangeAction", + "documentation":"

Specifies whether to insert or delete a RegexMatchTuple.

" + }, + "RegexMatchTuple":{ + "shape":"RegexMatchTuple", + "documentation":"

Information about the part of a web request that you want AWS WAF to inspect and the identifier of the regular expression (regex) pattern that you want AWS WAF to search for. If you specify DELETE for the value of Action, the RegexMatchTuple values must exactly match the values in the RegexMatchTuple that you want to delete from the RegexMatchSet.

" + } + }, + "documentation":"

In an UpdateRegexMatchSet request, RegexMatchSetUpdate specifies whether to insert or delete a RegexMatchTuple and includes the settings for the RegexMatchTuple.

" + }, + "RegexMatchSetUpdates":{ + "type":"list", + "member":{"shape":"RegexMatchSetUpdate"}, + "min":1 + }, + "RegexMatchTuple":{ + "type":"structure", + "required":[ + "FieldToMatch", + "TextTransformation", + "RegexPatternSetId" + ], + "members":{ + "FieldToMatch":{ + "shape":"FieldToMatch", + "documentation":"

Specifies where in a web request to look for the RegexPatternSet.

" + }, + "TextTransformation":{ + "shape":"TextTransformation", + "documentation":"

Text transformations eliminate some of the unusual formatting that attackers use in web requests in an effort to bypass AWS WAF. If you specify a transformation, AWS WAF performs the transformation on RegexPatternSet before inspecting a request for a match.

CMD_LINE

When you're concerned that attackers are injecting an operating system command line command and using unusual formatting to disguise some or all of the command, use this option to perform the following transformations:

COMPRESS_WHITE_SPACE

Use this option to replace the following characters with a space character (decimal 32):

COMPRESS_WHITE_SPACE also replaces multiple spaces with one space.

HTML_ENTITY_DECODE

Use this option to replace HTML-encoded characters with unencoded characters. HTML_ENTITY_DECODE performs the following operations:

LOWERCASE

Use this option to convert uppercase letters (A-Z) to lowercase (a-z).

URL_DECODE

Use this option to decode a URL-encoded value.

NONE

Specify NONE if you don't want to perform any text transformations.

" + }, + "RegexPatternSetId":{ + "shape":"ResourceId", + "documentation":"

The RegexPatternSetId for a RegexPatternSet. You use RegexPatternSetId to get information about a RegexPatternSet (see GetRegexPatternSet), update a RegexPatternSet (see UpdateRegexPatternSet), insert a RegexPatternSet into a RegexMatchSet or delete one from a RegexMatchSet (see UpdateRegexMatchSet), and delete a RegexPatternSet from AWS WAF (see DeleteRegexPatternSet).

RegexPatternSetId is returned by CreateRegexPatternSet and by ListRegexPatternSets.

" + } + }, + "documentation":"

The regular expression pattern that you want AWS WAF to search for in web requests, the location in requests that you want AWS WAF to search, and other settings. Each RegexMatchTuple object contains the FieldToMatch, TextTransformation, and RegexPatternSetId settings.

" + }, + "RegexMatchTuples":{ + "type":"list", + "member":{"shape":"RegexMatchTuple"} + }, + "RegexPatternSet":{ + "type":"structure", + "required":[ + "RegexPatternSetId", + "RegexPatternStrings" + ], + "members":{ + "RegexPatternSetId":{ + "shape":"ResourceId", + "documentation":"

The identifier for the RegexPatternSet. You use RegexPatternSetId to get information about a RegexPatternSet, update a RegexPatternSet, remove a RegexPatternSet from a RegexMatchSet, and delete a RegexPatternSet from AWS WAF.

RegexPatternSetId is returned by CreateRegexPatternSet and by ListRegexPatternSets.

" + }, + "Name":{ + "shape":"ResourceName", + "documentation":"

A friendly name or description of the RegexPatternSet. You can't change Name after you create a RegexPatternSet.

" + }, + "RegexPatternStrings":{ + "shape":"RegexPatternStrings", + "documentation":"

Specifies the regular expression (regex) patterns that you want AWS WAF to search for, such as B[a@]dB[o0]t.

" + } + }, + "documentation":"

The RegexPatternSet specifies the regular expression (regex) pattern that you want AWS WAF to search for, such as B[a@]dB[o0]t. You can then configure AWS WAF to reject those requests.

" + }, + "RegexPatternSetSummaries":{ + "type":"list", + "member":{"shape":"RegexPatternSetSummary"} + }, + "RegexPatternSetSummary":{ + "type":"structure", + "required":[ + "RegexPatternSetId", + "Name" + ], + "members":{ + "RegexPatternSetId":{ + "shape":"ResourceId", + "documentation":"

The RegexPatternSetId for a RegexPatternSet. You use RegexPatternSetId to get information about a RegexPatternSet, update a RegexPatternSet, remove a RegexPatternSet from a RegexMatchSet, and delete a RegexPatternSet from AWS WAF.

RegexPatternSetId is returned by CreateRegexPatternSet and by ListRegexPatternSets.

" + }, + "Name":{ + "shape":"ResourceName", + "documentation":"

A friendly name or description of the RegexPatternSet. You can't change Name after you create a RegexPatternSet.

" + } + }, + "documentation":"

Returned by ListRegexPatternSets. Each RegexPatternSetSummary object includes the Name and RegexPatternSetId for one RegexPatternSet.

" + }, + "RegexPatternSetUpdate":{ + "type":"structure", + "required":[ + "Action", + "RegexPatternString" + ], + "members":{ + "Action":{ + "shape":"ChangeAction", + "documentation":"

Specifies whether to insert or delete a RegexPatternString.

" + }, + "RegexPatternString":{ + "shape":"RegexPatternString", + "documentation":"

Specifies the regular expression (regex) pattern that you want AWS WAF to search for, such as B[a@]dB[o0]t.

" + } + }, + "documentation":"

In an UpdateRegexPatternSet request, RegexPatternSetUpdate specifies whether to insert or delete a RegexPatternString and includes the settings for the RegexPatternString.

" + }, + "RegexPatternSetUpdates":{ + "type":"list", + "member":{"shape":"RegexPatternSetUpdate"}, + "min":1 + }, + "RegexPatternString":{ + "type":"string", + "min":1 + }, + "RegexPatternStrings":{ + "type":"list", + "member":{"shape":"RegexPatternString"}, + "max":10 + }, "ResourceId":{ "type":"string", "max":128, @@ -2402,7 +3475,8 @@ }, "SizeConstraintSetUpdates":{ "type":"list", - "member":{"shape":"SizeConstraintSetUpdate"} + "member":{"shape":"SizeConstraintSetUpdate"}, + "min":1 }, "SizeConstraints":{ "type":"list", @@ -2472,7 +3546,8 @@ }, "SqlInjectionMatchSetUpdates":{ "type":"list", - "member":{"shape":"SqlInjectionMatchSetUpdate"} + "member":{"shape":"SqlInjectionMatchSetUpdate"}, + "min":1 }, "SqlInjectionMatchTuple":{ "type":"structure", @@ -2558,6 +3633,37 @@ } } }, + "UpdateGeoMatchSetRequest":{ + "type":"structure", + "required":[ + "GeoMatchSetId", + "ChangeToken", + "Updates" + ], + "members":{ + "GeoMatchSetId":{ + "shape":"ResourceId", + "documentation":"

The GeoMatchSetId of the GeoMatchSet that you want to update. GeoMatchSetId is returned by CreateGeoMatchSet and by ListGeoMatchSets.

" + }, + "ChangeToken":{ + "shape":"ChangeToken", + "documentation":"

The value returned by the most recent call to GetChangeToken.

" + }, + "Updates":{ + "shape":"GeoMatchSetUpdates", + "documentation":"

An array of GeoMatchSetUpdate objects that you want to insert into or delete from a GeoMatchSet. For more information, see the GeoMatchSetUpdate and GeoMatchConstraint data types.

" + } + } + }, + "UpdateGeoMatchSetResponse":{ + "type":"structure", + "members":{ + "ChangeToken":{ + "shape":"ChangeToken", + "documentation":"

The ChangeToken that you used to submit the UpdateGeoMatchSet request. You can also use this value to query the status of the request. For more information, see GetChangeTokenStatus.

" + } + } + }, "UpdateIPSetRequest":{ "type":"structure", "required":[ @@ -2625,6 +3731,68 @@ } } }, + "UpdateRegexMatchSetRequest":{ + "type":"structure", + "required":[ + "RegexMatchSetId", + "Updates", + "ChangeToken" + ], + "members":{ + "RegexMatchSetId":{ + "shape":"ResourceId", + "documentation":"

The RegexMatchSetId of the RegexMatchSet that you want to update. RegexMatchSetId is returned by CreateRegexMatchSet and by ListRegexMatchSets.

" + }, + "Updates":{ + "shape":"RegexMatchSetUpdates", + "documentation":"

An array of RegexMatchSetUpdate objects that you want to insert into or delete from a RegexMatchSet. For more information, see RegexMatchTuple.

" + }, + "ChangeToken":{ + "shape":"ChangeToken", + "documentation":"

The value returned by the most recent call to GetChangeToken.

" + } + } + }, + "UpdateRegexMatchSetResponse":{ + "type":"structure", + "members":{ + "ChangeToken":{ + "shape":"ChangeToken", + "documentation":"

The ChangeToken that you used to submit the UpdateRegexMatchSet request. You can also use this value to query the status of the request. For more information, see GetChangeTokenStatus.

" + } + } + }, + "UpdateRegexPatternSetRequest":{ + "type":"structure", + "required":[ + "RegexPatternSetId", + "Updates", + "ChangeToken" + ], + "members":{ + "RegexPatternSetId":{ + "shape":"ResourceId", + "documentation":"

The RegexPatternSetId of the RegexPatternSet that you want to update. RegexPatternSetId is returned by CreateRegexPatternSet and by ListRegexPatternSets.

" + }, + "Updates":{ + "shape":"RegexPatternSetUpdates", + "documentation":"

An array of RegexPatternSetUpdate objects that you want to insert into or delete from a RegexPatternSet.

" + }, + "ChangeToken":{ + "shape":"ChangeToken", + "documentation":"

The value returned by the most recent call to GetChangeToken.

" + } + } + }, + "UpdateRegexPatternSetResponse":{ + "type":"structure", + "members":{ + "ChangeToken":{ + "shape":"ChangeToken", + "documentation":"

The ChangeToken that you used to submit the UpdateRegexPatternSet request. You can also use this value to query the status of the request. For more information, see GetChangeTokenStatus.

" + } + } + }, "UpdateRuleRequest":{ "type":"structure", "required":[ @@ -2826,7 +3994,15 @@ "parameter":{"shape":"ParameterExceptionParameter"}, "reason":{"shape":"ParameterExceptionReason"} }, - "documentation":"

The operation failed because AWS WAF didn't recognize a parameter in the request. For example:

", + "documentation":"

The operation failed because AWS WAF didn't recognize a parameter in the request. For example:

", + "exception":true + }, + "WAFInvalidRegexPatternException":{ + "type":"structure", + "members":{ + "message":{"shape":"errorMessage"} + }, + "documentation":"

The regular expression (regex) you specified in RegexPatternString is invalid.

", "exception":true }, "WAFLimitsExceededException":{ @@ -3042,7 +4218,8 @@ }, "XssMatchSetUpdates":{ "type":"list", - "member":{"shape":"XssMatchSetUpdate"} + "member":{"shape":"XssMatchSetUpdate"}, + "min":1 }, "XssMatchTuple":{ "type":"structure", diff --git a/services/workdocs/src/main/resources/codegen-resources/service-2.json b/services/workdocs/src/main/resources/codegen-resources/service-2.json index 7ae84dbc2197..1775b8d77397 100644 --- a/services/workdocs/src/main/resources/codegen-resources/service-2.json +++ b/services/workdocs/src/main/resources/codegen-resources/service-2.json @@ -414,6 +414,23 @@ ], "documentation":"

Describes the contents of the specified folder, including its documents and subfolders.

By default, Amazon WorkDocs returns the first 100 active document and folder metadata items. If there are more results, the response includes a marker that you can use to request the next set of results. You can also request initialized documents.

" }, + "DescribeGroups":{ + "name":"DescribeGroups", + "http":{ + "method":"GET", + "requestUri":"/api/v1/groups", + "responseCode":200 + }, + "input":{"shape":"DescribeGroupsRequest"}, + "output":{"shape":"DescribeGroupsResponse"}, + "errors":[ + {"shape":"UnauthorizedOperationException"}, + {"shape":"UnauthorizedResourceAccessException"}, + {"shape":"FailedDependencyException"}, + {"shape":"ServiceUnavailableException"} + ], + "documentation":"

Describes the groups specified by the query.

" + }, "DescribeNotificationSubscriptions":{ "name":"DescribeNotificationSubscriptions", "http":{ @@ -463,7 +480,7 @@ {"shape":"FailedDependencyException"}, {"shape":"ServiceUnavailableException"} ], - "documentation":"

Describes the current user's special folders; the RootFolder and the RecyleBin. RootFolder is the root of user's files and folders and RecyleBin is the root of recycled items. This is not a valid action for SigV4 (administrative API) clients.

" + "documentation":"

Describes the current user's special folders: the RootFolder and the RecycleBin. RootFolder is the root of the user's files and folders, and RecycleBin is the root of recycled items. This is not a valid action for SigV4 (administrative API) clients.

" }, "DescribeUsers":{ "name":"DescribeUsers", @@ -516,7 +533,8 @@ {"shape":"UnauthorizedResourceAccessException"}, {"shape":"InvalidArgumentException"}, {"shape":"FailedDependencyException"}, - {"shape":"ServiceUnavailableException"} + {"shape":"ServiceUnavailableException"}, + {"shape":"InvalidPasswordException"} ], "documentation":"

Retrieves details of a document.

" }, @@ -553,7 +571,8 @@ {"shape":"UnauthorizedResourceAccessException"}, {"shape":"FailedDependencyException"}, {"shape":"ServiceUnavailableException"}, - {"shape":"ProhibitedStateException"} + {"shape":"ProhibitedStateException"}, + {"shape":"InvalidPasswordException"} ], "documentation":"

Retrieves version metadata for the specified document.

" }, @@ -729,7 +748,8 @@ {"shape":"IllegalUserStateException"}, {"shape":"FailedDependencyException"}, {"shape":"ServiceUnavailableException"}, - {"shape":"DeactivatingLastSystemUserException"} + {"shape":"DeactivatingLastSystemUserException"}, + {"shape":"InvalidArgumentException"} ], "documentation":"

Updates the specified attributes of the specified user, and grants or revokes administrative privileges to the Amazon WorkDocs site.

" } @@ -744,7 +764,7 @@ "members":{ "AuthenticationToken":{ "shape":"AuthenticationHeaderType", - "documentation":"

Amazon WorkDocs authentication token. This field should not be set when using administrative API actions, as in accessing the API using AWS credentials.

", + "documentation":"

Amazon WorkDocs authentication token. Do not set this field when using administrative API actions, as in accessing the API using AWS credentials.

", "location":"header", "locationName":"Authentication" }, @@ -774,7 +794,7 @@ }, "AuthenticationToken":{ "shape":"AuthenticationHeaderType", - "documentation":"

Amazon WorkDocs authentication token. This field should not be set when using administrative API actions, as in accessing the API using AWS credentials.

", + "documentation":"

Amazon WorkDocs authentication token. Do not set this field when using administrative API actions, as in accessing the API using AWS credentials.

", "location":"header", "locationName":"Authentication" } @@ -872,7 +892,7 @@ "members":{ "AuthenticationToken":{ "shape":"AuthenticationHeaderType", - "documentation":"

Amazon WorkDocs authentication token. This field should not be set when using administrative API actions, as in accessing the API using AWS credentials.

", + "documentation":"

Amazon WorkDocs authentication token. Do not set this field when using administrative API actions, as in accessing the API using AWS credentials.

", "location":"header", "locationName":"Authentication" }, @@ -885,6 +905,10 @@ "Principals":{ "shape":"SharePrincipalList", "documentation":"

The users, groups, or organization being granted permission.

" + }, + "NotificationOptions":{ + "shape":"NotificationOptions", + "documentation":"

The notification options.

" } } }, @@ -903,6 +927,13 @@ "min":1, "sensitive":true }, + "BooleanEnumType":{ + "type":"string", + "enum":[ + "TRUE", + "FALSE" + ] + }, "BooleanType":{"type":"boolean"}, "Comment":{ "type":"structure", @@ -968,8 +999,14 @@ "shape":"User", "documentation":"

The user who made the comment.

" }, - "CreatedTimestamp":{"shape":"TimestampType"}, - "CommentStatus":{"shape":"CommentStatusType"}, + "CreatedTimestamp":{ + "shape":"TimestampType", + "documentation":"

The timestamp when the comment was created.

" + }, + "CommentStatus":{ + "shape":"CommentStatusType", + "documentation":"

The status of the comment.

" + }, "RecipientId":{ "shape":"IdType", "documentation":"

The ID of the user being replied to.

" @@ -1017,7 +1054,7 @@ "members":{ "AuthenticationToken":{ "shape":"AuthenticationHeaderType", - "documentation":"

Amazon WorkDocs authentication token. This field should not be set when using administrative API actions, as in accessing the API using AWS credentials.

", + "documentation":"

Amazon WorkDocs authentication token. Do not set this field when using administrative API actions, as in accessing the API using AWS credentials.

", "location":"header", "locationName":"Authentication" }, @@ -1073,7 +1110,7 @@ "members":{ "AuthenticationToken":{ "shape":"AuthenticationHeaderType", - "documentation":"

Amazon WorkDocs authentication token. This field should not be set when using administrative API actions, as in accessing the API using AWS credentials.

", + "documentation":"

Amazon WorkDocs authentication token. Do not set this field when using administrative API actions, as in accessing the API using AWS credentials.

", "location":"header", "locationName":"Authentication" }, @@ -1106,7 +1143,7 @@ "members":{ "AuthenticationToken":{ "shape":"AuthenticationHeaderType", - "documentation":"

Amazon WorkDocs authentication token. This field should not be set when using administrative API actions, as in accessing the API using AWS credentials.

", + "documentation":"

Amazon WorkDocs authentication token. Do not set this field when using administrative API actions, as in accessing the API using AWS credentials.

", "location":"header", "locationName":"Authentication" }, @@ -1143,12 +1180,12 @@ "locationName":"ResourceId" }, "Labels":{ - "shape":"Labels", + "shape":"SharedLabels", "documentation":"

List of labels to add to the resource.

" }, "AuthenticationToken":{ "shape":"AuthenticationHeaderType", - "documentation":"

Amazon WorkDocs authentication token. This field should not be set when using administrative API actions, as in accessing the API using AWS credentials.

", + "documentation":"

Amazon WorkDocs authentication token. Do not set this field when using administrative API actions, as in accessing the API using AWS credentials.

", "location":"header", "locationName":"Authentication" } @@ -1180,7 +1217,7 @@ }, "Protocol":{ "shape":"SubscriptionProtocolType", - "documentation":"

The protocol to use. The supported value is https, which delivers JSON-encoded messasges using HTTPS POST.

" + "documentation":"

The protocol to use. The supported value is https, which delivers JSON-encoded messages using HTTPS POST.

" }, "SubscriptionType":{ "shape":"SubscriptionType", @@ -1240,7 +1277,7 @@ }, "AuthenticationToken":{ "shape":"AuthenticationHeaderType", - "documentation":"

Amazon WorkDocs authentication token. This field should not be set when using administrative API actions, as in accessing the API using AWS credentials.

", + "documentation":"

Amazon WorkDocs authentication token. Do not set this field when using administrative API actions, as in accessing the API using AWS credentials.

", "location":"header", "locationName":"Authentication" } @@ -1300,7 +1337,7 @@ }, "AuthenticationToken":{ "shape":"AuthenticationHeaderType", - "documentation":"

Amazon WorkDocs authentication token. This field should not be set when using administrative API actions, as in accessing the API using AWS credentials.

", + "documentation":"

Amazon WorkDocs authentication token. Do not set this field when using administrative API actions, as in accessing the API using AWS credentials.

", "location":"header", "locationName":"Authentication" } @@ -1324,7 +1361,7 @@ "members":{ "AuthenticationToken":{ "shape":"AuthenticationHeaderType", - "documentation":"

Amazon WorkDocs authentication token. This field should not be set when using administrative API actions, as in accessing the API using AWS credentials.

", + "documentation":"

Amazon WorkDocs authentication token. Do not set this field when using administrative API actions, as in accessing the API using AWS credentials.

", "location":"header", "locationName":"Authentication" }, @@ -1354,7 +1391,7 @@ "members":{ "AuthenticationToken":{ "shape":"AuthenticationHeaderType", - "documentation":"

Amazon WorkDocs authentication token. This field should not be set when using administrative API actions, as in accessing the API using AWS credentials.

", + "documentation":"

Amazon WorkDocs authentication token. Do not set this field when using administrative API actions, as in accessing the API using AWS credentials.

", "location":"header", "locationName":"Authentication" }, @@ -1395,7 +1432,7 @@ "members":{ "AuthenticationToken":{ "shape":"AuthenticationHeaderType", - "documentation":"

Amazon WorkDocs authentication token. This field should not be set when using administrative API actions, as in accessing the API using AWS credentials.

", + "documentation":"

Amazon WorkDocs authentication token. Do not set this field when using administrative API actions, as in accessing the API using AWS credentials.

", "location":"header", "locationName":"Authentication" }, @@ -1413,7 +1450,7 @@ "members":{ "AuthenticationToken":{ "shape":"AuthenticationHeaderType", - "documentation":"

Amazon WorkDocs authentication token. This field should not be set when using administrative API actions, as in accessing the API using AWS credentials.

", + "documentation":"

Amazon WorkDocs authentication token. Do not set this field when using administrative API actions, as in accessing the API using AWS credentials.

", "location":"header", "locationName":"Authentication" }, @@ -1431,7 +1468,7 @@ "members":{ "AuthenticationToken":{ "shape":"AuthenticationHeaderType", - "documentation":"

Amazon WorkDocs authentication token. This field should not be set when using administrative API actions, as in accessing the API using AWS credentials.

", + "documentation":"

Amazon WorkDocs authentication token. Do not set this field when using administrative API actions, as in accessing the API using AWS credentials.

", "location":"header", "locationName":"Authentication" }, @@ -1455,12 +1492,12 @@ }, "AuthenticationToken":{ "shape":"AuthenticationHeaderType", - "documentation":"

Amazon WorkDocs authentication token. This field should not be set when using administrative API actions, as in accessing the API using AWS credentials.

", + "documentation":"

Amazon WorkDocs authentication token. Do not set this field when using administrative API actions, as in accessing the API using AWS credentials.

", "location":"header", "locationName":"Authentication" }, "Labels":{ - "shape":"Labels", + "shape":"SharedLabels", "documentation":"

List of labels to delete from the resource.

", "location":"querystring", "locationName":"labels" @@ -1505,7 +1542,7 @@ "members":{ "AuthenticationToken":{ "shape":"AuthenticationHeaderType", - "documentation":"

Amazon WorkDocs authentication token. This field should not be set when using administrative API actions, as in accessing the API using AWS credentials.

", + "documentation":"

Amazon WorkDocs authentication token. Do not set this field when using administrative API actions, as in accessing the API using AWS credentials.

", "location":"header", "locationName":"Authentication" }, @@ -1522,19 +1559,19 @@ "members":{ "AuthenticationToken":{ "shape":"AuthenticationHeaderType", - "documentation":"

Amazon WorkDocs authentication token. This field should not be set when using administrative API actions, as in accessing the API using AWS credentials.

", + "documentation":"

Amazon WorkDocs authentication token. Do not set this field when using administrative API actions, as in accessing the API using AWS credentials.

", "location":"header", "locationName":"Authentication" }, "StartTime":{ "shape":"TimestampType", - "documentation":"

The timestamp that determines the starting time of the activities; the response includes the activities performed after the specified timestamp.

", + "documentation":"

The timestamp that determines the starting time of the activities. The response includes the activities performed after the specified timestamp.

", "location":"querystring", "locationName":"startTime" }, "EndTime":{ "shape":"TimestampType", - "documentation":"

The timestamp that determines the end time of the activities; the response includes the activities performed before the specified timestamp.

", + "documentation":"

The timestamp that determines the end time of the activities. The response includes the activities performed before the specified timestamp.

", "location":"querystring", "locationName":"endTime" }, @@ -1558,7 +1595,7 @@ }, "Marker":{ "shape":"MarkerType", - "documentation":"

The marker for the next set of results. (You received this marker from a previous call.)

", + "documentation":"

The marker for the next set of results.

", "location":"querystring", "locationName":"marker" } @@ -1586,7 +1623,7 @@ "members":{ "AuthenticationToken":{ "shape":"AuthenticationHeaderType", - "documentation":"

Amazon WorkDocs authentication token. This field should not be set when using administrative API actions, as in accessing the API using AWS credentials.

", + "documentation":"

Amazon WorkDocs authentication token. Do not set this field when using administrative API actions, as in accessing the API using AWS credentials.

", "location":"header", "locationName":"Authentication" }, @@ -1635,7 +1672,7 @@ "members":{ "AuthenticationToken":{ "shape":"AuthenticationHeaderType", - "documentation":"

Amazon WorkDocs authentication token. This field should not be set when using administrative API actions, as in accessing the API using AWS credentials.

", + "documentation":"

Amazon WorkDocs authentication token. Do not set this field when using administrative API actions, as in accessing the API using AWS credentials.

", "location":"header", "locationName":"Authentication" }, @@ -1690,7 +1727,7 @@ "members":{ "AuthenticationToken":{ "shape":"AuthenticationHeaderType", - "documentation":"

Amazon WorkDocs authentication token. This field should not be set when using administrative API actions, as in accessing the API using AWS credentials.

", + "documentation":"

Amazon WorkDocs authentication token. Do not set this field when using administrative API actions, as in accessing the API using AWS credentials.

", "location":"header", "locationName":"Authentication" }, @@ -1755,6 +1792,55 @@ } } }, + "DescribeGroupsRequest":{ + "type":"structure", + "required":["SearchQuery"], + "members":{ + "AuthenticationToken":{ + "shape":"AuthenticationHeaderType", + "documentation":"

Amazon WorkDocs authentication token. Do not set this field when using administrative API actions, as in accessing the API using AWS credentials.

", + "location":"header", + "locationName":"Authentication" + }, + "SearchQuery":{ + "shape":"SearchQueryType", + "documentation":"

A query to describe groups by group name.

", + "location":"querystring", + "locationName":"searchQuery" + }, + "OrganizationId":{ + "shape":"IdType", + "documentation":"

The ID of the organization.

", + "location":"querystring", + "locationName":"organizationId" + }, + "Marker":{ + "shape":"MarkerType", + "documentation":"

The marker for the next set of results. (You received this marker from a previous call.)

", + "location":"querystring", + "locationName":"marker" + }, + "Limit":{ + "shape":"PositiveIntegerType", + "documentation":"

The maximum number of items to return with this call.

", + "location":"querystring", + "locationName":"limit" + } + } + }, + "DescribeGroupsResponse":{ + "type":"structure", + "members":{ + "Groups":{ + "shape":"GroupMetadataList", + "documentation":"

The list of groups.

" + }, + "Marker":{ + "shape":"MarkerType", + "documentation":"

The marker to use when requesting the next set of results. If there are no additional results, the string is empty.

" + } + } + }, "DescribeNotificationSubscriptionsRequest":{ "type":"structure", "required":["OrganizationId"], @@ -1798,7 +1884,7 @@ "members":{ "AuthenticationToken":{ "shape":"AuthenticationHeaderType", - "documentation":"

Amazon WorkDocs authentication token. This field should not be set when using administrative API actions, as in accessing the API using AWS credentials.

", + "documentation":"

Amazon WorkDocs authentication token. Do not set this field when using administrative API actions, as in accessing the API using AWS credentials.

", "location":"header", "locationName":"Authentication" }, @@ -1808,6 +1894,12 @@ "location":"uri", "locationName":"ResourceId" }, + "PrincipalId":{ + "shape":"IdType", + "documentation":"

The ID of the principal to filter permissions by.

", + "location":"querystring", + "locationName":"principalId" + }, "Limit":{ "shape":"LimitType", "documentation":"

The maximum number of items to return with this call.

", @@ -1841,7 +1933,7 @@ "members":{ "AuthenticationToken":{ "shape":"AuthenticationHeaderType", - "documentation":"

Amazon WorkDocs authentication token. This field should not be set when using administrative API actions, as in accessing the API using AWS credentials.

", + "documentation":"

Amazon WorkDocs authentication token. Do not set this field when using administrative API actions, as in accessing the API using AWS credentials.

", "location":"header", "locationName":"Authentication" }, @@ -1877,7 +1969,7 @@ "members":{ "AuthenticationToken":{ "shape":"AuthenticationHeaderType", - "documentation":"

Amazon WorkDocs authentication token. This field should not be set when using administrative API actions, as in accessing the API using AWS credentials.

", + "documentation":"

Amazon WorkDocs authentication token. Do not set this field when using administrative API actions, as in accessing the API using AWS credentials.

", "location":"header", "locationName":"Authentication" }, @@ -1946,7 +2038,8 @@ }, "TotalNumberOfUsers":{ "shape":"SizeType", - "documentation":"

The total number of users included in the results.

" + "documentation":"

The total number of users included in the results.

", + "deprecated":true }, "Marker":{ "shape":"PageMarkerType", @@ -2000,7 +2093,7 @@ "documentation":"

The resource state.

" }, "Labels":{ - "shape":"Labels", + "shape":"SharedLabels", "documentation":"

List of labels on the document.

" } }, @@ -2077,19 +2170,19 @@ }, "CreatedTimestamp":{ "shape":"TimestampType", - "documentation":"

The time stamp when the document was first uploaded.

" + "documentation":"

The timestamp when the document was first uploaded.

" }, "ModifiedTimestamp":{ "shape":"TimestampType", - "documentation":"

The time stamp when the document was last uploaded.

" + "documentation":"

The timestamp when the document was last uploaded.

" }, "ContentCreatedTimestamp":{ "shape":"TimestampType", - "documentation":"

The time stamp when the content of the document was originally created.

" + "documentation":"

The timestamp when the content of the document was originally created.

" }, "ContentModifiedTimestamp":{ "shape":"TimestampType", - "documentation":"

The time stamp when the content of the document was modified.

" + "documentation":"

The timestamp when the content of the document was modified.

" }, "CreatorId":{ "shape":"IdType", @@ -2158,7 +2251,7 @@ "members":{ "Message":{"shape":"ErrorMessageType"} }, - "documentation":"

The AWS Directory Service cannot reach an on-premises instance. Or a dependency under the control of the organization is failing, such as a connected active directory.

", + "documentation":"

The AWS Directory Service cannot reach an on-premises instance, or a dependency under the control of the organization is failing, such as a connected Active Directory.

", "error":{"httpStatusCode":424}, "exception":true }, @@ -2212,7 +2305,7 @@ "documentation":"

The unique identifier created from the subfolders and documents of the folder.

" }, "Labels":{ - "shape":"Labels", + "shape":"SharedLabels", "documentation":"

List of labels on the folder.

" }, "Size":{ @@ -2236,7 +2329,7 @@ "members":{ "AuthenticationToken":{ "shape":"AuthenticationHeaderType", - "documentation":"

Amazon WorkDocs authentication token.

", + "documentation":"

Amazon WorkDocs authentication token. Do not set this field when using administrative API actions, as in accessing the API using AWS credentials.

", "location":"header", "locationName":"Authentication" } @@ -2257,7 +2350,7 @@ "members":{ "AuthenticationToken":{ "shape":"AuthenticationHeaderType", - "documentation":"

Amazon WorkDocs authentication token. This field should not be set when using administrative API actions, as in accessing the API using AWS credentials.

", + "documentation":"

Amazon WorkDocs authentication token. Do not set this field when using administrative API actions, as in accessing the API using AWS credentials.

", "location":"header", "locationName":"Authentication" }, @@ -2302,7 +2395,7 @@ "members":{ "AuthenticationToken":{ "shape":"AuthenticationHeaderType", - "documentation":"

Amazon WorkDocs authentication token. This field should not be set when using administrative API actions, as in accessing the API using AWS credentials.

", + "documentation":"

Amazon WorkDocs authentication token. Do not set this field when using administrative API actions, as in accessing the API using AWS credentials.

", "location":"header", "locationName":"Authentication" }, @@ -2342,7 +2435,7 @@ "members":{ "AuthenticationToken":{ "shape":"AuthenticationHeaderType", - "documentation":"

Amazon WorkDocs authentication token. This field should not be set when using administrative API actions, as in accessing the API using AWS credentials.

", + "documentation":"

Amazon WorkDocs authentication token. Do not set this field when using administrative API actions, as in accessing the API using AWS credentials.

", "location":"header", "locationName":"Authentication" }, @@ -2391,7 +2484,7 @@ "members":{ "AuthenticationToken":{ "shape":"AuthenticationHeaderType", - "documentation":"

Amazon WorkDocs authentication token. This field should not be set when using administrative API actions, as in accessing the API using AWS credentials.

", + "documentation":"

Amazon WorkDocs authentication token. Do not set this field when using administrative API actions, as in accessing the API using AWS credentials.

", "location":"header", "locationName":"Authentication" }, @@ -2436,7 +2529,7 @@ "members":{ "AuthenticationToken":{ "shape":"AuthenticationHeaderType", - "documentation":"

Amazon WorkDocs authentication token. This field should not be set when using administrative API actions, as in accessing the API using AWS credentials.

", + "documentation":"

Amazon WorkDocs authentication token. Do not set this field when using administrative API actions, as in accessing the API using AWS credentials.

", "location":"header", "locationName":"Authentication" }, @@ -2524,7 +2617,7 @@ "members":{ "AuthenticationToken":{ "shape":"AuthenticationHeaderType", - "documentation":"

Amazon WorkDocs authentication token. This field should not be set when using administrative API actions, as in accessing the API using AWS credentials.

", + "documentation":"

Amazon WorkDocs authentication token. Do not set this field when using administrative API actions, as in accessing the API using AWS credentials.

", "location":"header", "locationName":"Authentication" }, @@ -2538,11 +2631,11 @@ }, "ContentCreatedTimestamp":{ "shape":"TimestampType", - "documentation":"

The time stamp when the content of the document was originally created.

" + "documentation":"

The timestamp when the content of the document was originally created.

" }, "ContentModifiedTimestamp":{ "shape":"TimestampType", - "documentation":"

The time stamp when the content of the document was modified.

" + "documentation":"

The timestamp when the content of the document was modified.

" }, "ContentType":{ "shape":"DocumentContentType", @@ -2576,7 +2669,7 @@ "members":{ "Message":{"shape":"ErrorMessageType"} }, - "documentation":"

The pagination marker and/or limit fields are not valid.

", + "documentation":"

The pagination marker or limit fields are not valid.

", "error":{"httpStatusCode":400}, "exception":true }, @@ -2589,16 +2682,14 @@ "error":{"httpStatusCode":405}, "exception":true }, - "Label":{ - "type":"string", - "max":32, - "min":1, - "pattern":"[a-zA-Z0-9._+-/=][a-zA-Z0-9 ._+-/=]*" - }, - "Labels":{ - "type":"list", - "member":{"shape":"Label"}, - "max":20 + "InvalidPasswordException":{ + "type":"structure", + "members":{ + "Message":{"shape":"ErrorMessageType"} + }, + "documentation":"

The password is invalid.

", + "error":{"httpStatusCode":401}, + "exception":true }, "LimitExceededException":{ "type":"structure", @@ -2642,6 +2733,20 @@ "min":0, "sensitive":true }, + "NotificationOptions":{ + "type":"structure", + "members":{ + "SendEmail":{ + "shape":"BooleanType", + "documentation":"

Boolean value to indicate that an email notification should be sent to the recipients.

" + }, + "EmailMessage":{ + "shape":"MessageType", + "documentation":"

Text value to be included in the email body.

" + } + }, + "documentation":"

Set of options that define the notification preferences for a given action.

" + }, "OrderType":{ "type":"string", "enum":[ @@ -2670,7 +2775,7 @@ "documentation":"

The list of user groups.

" } }, - "documentation":"

Describes the users and/or user groups.

" + "documentation":"

Describes the users or user groups.

" }, "PasswordType":{ "type":"string", @@ -2697,6 +2802,10 @@ "type":"list", "member":{"shape":"PermissionInfo"} }, + "PositiveIntegerType":{ + "type":"integer", + "min":1 + }, "PositiveSizeType":{ "type":"long", "min":0 @@ -2748,7 +2857,7 @@ "members":{ "AuthenticationToken":{ "shape":"AuthenticationHeaderType", - "documentation":"

Amazon WorkDocs authentication token. This field should not be set when using administrative API actions, as in accessing the API using AWS credentials.

", + "documentation":"

Amazon WorkDocs authentication token. Do not set this field when using administrative API actions, as in accessing the API using AWS credentials.

", "location":"header", "locationName":"Authentication" }, @@ -2769,7 +2878,7 @@ "members":{ "AuthenticationToken":{ "shape":"AuthenticationHeaderType", - "documentation":"

Amazon WorkDocs authentication token. This field should not be set when using administrative API actions, as in accessing the API using AWS credentials.

", + "documentation":"

Amazon WorkDocs authentication token. Do not set this field when using administrative API actions, as in accessing the API using AWS credentials.

", "location":"header", "locationName":"Authentication" }, @@ -2821,7 +2930,7 @@ }, "OriginalName":{ "shape":"ResourceNameType", - "documentation":"

The original name of the resource prior to a rename operation.

" + "documentation":"

The original name of the resource before a rename operation.

" }, "Id":{ "shape":"ResourceIdType", @@ -2996,6 +3105,17 @@ "FAILURE" ] }, + "SharedLabel":{ + "type":"string", + "max":32, + "min":1, + "pattern":"[a-zA-Z0-9._+-/=][a-zA-Z0-9 ._+-/=]*" + }, + "SharedLabels":{ + "type":"list", + "member":{"shape":"SharedLabel"}, + "max":20 + }, "SignedHeaderMap":{ "type":"map", "key":{"shape":"HeaderNameType"}, @@ -3124,7 +3244,7 @@ "members":{ "AuthenticationToken":{ "shape":"AuthenticationHeaderType", - "documentation":"

Amazon WorkDocs authentication token. This field should not be set when using administrative API actions, as in accessing the API using AWS credentials.

", + "documentation":"

Amazon WorkDocs authentication token. Do not set this field when using administrative API actions, as in accessing the API using AWS credentials.

", "location":"header", "locationName":"Authentication" }, @@ -3144,7 +3264,7 @@ }, "ResourceState":{ "shape":"ResourceStateType", - "documentation":"

The resource state of the document. Note that only ACTIVE and RECYCLED are supported.

" + "documentation":"

The resource state of the document. Only ACTIVE and RECYCLED are supported.

" } } }, @@ -3157,7 +3277,7 @@ "members":{ "AuthenticationToken":{ "shape":"AuthenticationHeaderType", - "documentation":"

Amazon WorkDocs authentication token. This field should not be set when using administrative API actions, as in accessing the API using AWS credentials.

", + "documentation":"

Amazon WorkDocs authentication token. Do not set this field when using administrative API actions, as in accessing the API using AWS credentials.

", "location":"header", "locationName":"Authentication" }, @@ -3185,7 +3305,7 @@ "members":{ "AuthenticationToken":{ "shape":"AuthenticationHeaderType", - "documentation":"

Amazon WorkDocs authentication token. This field should not be set when using administrative API actions, as in accessing the API using AWS credentials.

", + "documentation":"

Amazon WorkDocs authentication token. Do not set this field when using administrative API actions, as in accessing the API using AWS credentials.

", "location":"header", "locationName":"Authentication" }, @@ -3205,7 +3325,7 @@ }, "ResourceState":{ "shape":"ResourceStateType", - "documentation":"

The resource state of the folder. Note that only ACTIVE and RECYCLED are accepted values from the API.

" + "documentation":"

The resource state of the folder. Only ACTIVE and RECYCLED are accepted values from the API.

" } } }, @@ -3215,7 +3335,7 @@ "members":{ "AuthenticationToken":{ "shape":"AuthenticationHeaderType", - "documentation":"

Amazon WorkDocs authentication token. This field should not be set when using administrative API actions, as in accessing the API using AWS credentials.

", + "documentation":"

Amazon WorkDocs authentication token. Do not set this field when using administrative API actions, as in accessing the API using AWS credentials.

", "location":"header", "locationName":"Authentication" }, @@ -3248,6 +3368,10 @@ "Locale":{ "shape":"LocaleType", "documentation":"

The locale of the user.

" + }, + "GrantPoweruserPrivileges":{ + "shape":"BooleanEnumType", + "documentation":"

Boolean value to determine whether the user is granted Poweruser privileges.

" } } }, @@ -3377,7 +3501,7 @@ }, "Username":{ "shape":"UsernameType", - "documentation":"

The username of the user.

" + "documentation":"

The name of the user.

" }, "GivenName":{ "shape":"UserAttributeValueType", @@ -3421,7 +3545,7 @@ "members":{ "StorageUtilizedInBytes":{ "shape":"SizeType", - "documentation":"

The amount of storage utilized, in bytes.

" + "documentation":"

The amount of storage used, in bytes.

" }, "StorageRule":{ "shape":"StorageRuleType", @@ -3434,7 +3558,10 @@ "type":"string", "enum":[ "USER", - "ADMIN" + "ADMIN", + "POWERUSER", + "MINIMALUSER", + "WORKSPACESUSER" ] }, "UsernameType":{ @@ -3444,5 +3571,5 @@ "pattern":"[\\w\\-+.]+(@[a-zA-Z0-9.\\-]+\\.[a-zA-Z]+)?" } }, - "documentation":"

The WorkDocs API is designed for the following use cases:

All Amazon WorkDocs APIs are Amazon authenticated, certificate-signed APIs. They not only require the use of the AWS SDK, but also allow for the exclusive use of IAM users and roles to help facilitate access, trust, and permission policies. By creating a role and allowing an IAM user to access the Amazon WorkDocs site, the IAM user gains full administrative visibility into the entire Amazon WorkDocs site (or as set in the IAM policy). This includes, but is not limited to, the ability to modify file permissions and upload any file to any user. This allows developers to perform the three use cases above, as well as give users the ability to grant access on a selective basis using the IAM model.

" + "documentation":"

The WorkDocs API is designed for the following use cases:

All Amazon WorkDocs API actions are Amazon authenticated and certificate-signed. They not only require the use of the AWS SDK, but also allow for the exclusive use of IAM users and roles to help facilitate access, trust, and permission policies. By creating a role and allowing an IAM user to access the Amazon WorkDocs site, the IAM user gains full administrative visibility into the entire Amazon WorkDocs site (or as set in the IAM policy). This includes, but is not limited to, the ability to modify file permissions and upload any file to any user. This allows developers to perform the three use cases above, as well as give users the ability to grant access on a selective basis using the IAM model.

" }