Prior to uploading to S3, we can gzip, brotli, and deflate (and potentially apply other compressions as well) all assets and upload every version, tacking the compression method on as an extension (e.g. `image.png.br` or `image.png.gz`). Then, in a viewer-request Lambda@Edge (yes, this means it runs on EVERY request), we read the `Accept-Encoding` header, create a custom `x-compression` (or whatever) header set to the best compression option the user will accept, and tell CloudFront to cache based on that custom header's value (and pass it along to the origin). Finally, in an origin-request Lambda@Edge, we read that value and rewrite the URI to include the matching compression extension before forwarding the request to the origin.
The origin-request Lambda@Edge will only get called when an asset of that compression type isn't already cached in CloudFront, but the viewer-request Lambda@Edge will get called every time. It'll likely add about 10-20ms to the request and cost $0.60 per million requests, but being able to serve brotli will reduce transfer size by ~20%… so given transfer costs and depending on the size of assets, it could end up saving money (and it's almost guaranteed to reduce transfer times for the user).
(The extra viewer-request Lambda@Edge effort is because CloudFront strips everything except gzip out of the `Accept-Encoding` header; CloudFront only supports gzip natively. Hopefully one day they'll stop stripping out the other encoding types, and then we could potentially drop that extra lambda.)
Pretty much all evergreen browsers (including mobile) support brotli https://caniuse.com/#feat=brotli