Documentation update on max_concurrency behaviour in download_blob due to urllib3 connection pool limit #38054
Labels: Client, customer-reported, needs-team-attention, question, Service Attention, Storage
Type of issue: Missing information
Description
The Azure Storage SDK's `download_blob` method allows users to set the `max_concurrency` parameter to enable parallel downloads for blobs larger than 64MB. By increasing `max_concurrency`, developers can potentially speed up blob downloads by using multiple connections simultaneously.

However, the underlying implementation of `download_blob` relies on `urllib3`, which has a default connection pool size of 10. When `max_concurrency` is set to a value higher than the default pool size, this triggers a warning:

`Connection pool is full, discarding connection`
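The mechanism behind that warning can be demonstrated with `urllib3` alone, no Azure SDK or network I/O required. The sketch below uses `urllib3`'s private `_get_conn`/`_put_conn` methods purely for illustration (they are internal API and may change between versions): when more connections are returned to a pool than its `maxsize` allows, the extra connection is discarded and the warning is logged.

```python
import logging
import urllib3

# Capture urllib3's pool warnings so we can inspect the message directly.
captured = []

class _Capture(logging.Handler):
    def emit(self, record):
        captured.append(record.getMessage())

logger = logging.getLogger("urllib3.connectionpool")
logger.addHandler(_Capture())
logger.setLevel(logging.DEBUG)

# A pool that keeps at most 1 idle connection (urllib3's default is 10).
pool = urllib3.HTTPConnectionPool("example.com", maxsize=1)

# Simulate two workers each checking out a connection. No sockets are
# opened here: urllib3 creates connection objects lazily.
c1 = pool._get_conn()  # takes the single pooled slot (private API, illustration only)
c2 = pool._get_conn()  # pool is empty, so a fresh connection object is created
pool._put_conn(c1)     # returned to the pool: the slot is filled again
pool._put_conn(c2)     # pool already full: connection is discarded with a warning

print(any("Connection pool is full" in m for m in captured))
```

The same thing happens inside `download_blob` when `max_concurrency` worker threads return their connections to a pool sized for only 10.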
This behaviour can lead to inefficiencies and confusion, as developers may assume `max_concurrency` controls the number of connections directly, without realising that the connection pool size needs to be adjusted accordingly.

Suggested Improvements:
This small clarification can prevent warnings and ensure that users get the expected performance when downloading large blobs with high concurrency.
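One way to align the pool size with the requested concurrency is to configure a `requests.Session` with a larger `HTTPAdapter` pool and hand it to the client. The runnable part below only builds the session; the Azure wiring is shown in comments because it requires `azure-core`/`azure-storage-blob` at runtime, and the exact client setup (connection string, container, blob names) is a hypothetical placeholder.

```python
import requests
from requests.adapters import HTTPAdapter

MAX_CONCURRENCY = 16

# Size the connection pool to match the intended download concurrency,
# instead of relying on urllib3's default pool size of 10.
session = requests.Session()
adapter = HTTPAdapter(pool_connections=MAX_CONCURRENCY, pool_maxsize=MAX_CONCURRENCY)
session.mount("https://", adapter)

# Hypothetical wiring into the Blob client via azure.core's requests-based
# transport (assumes azure-core / azure-storage-blob are installed):
#
# from azure.core.pipeline.transport import RequestsTransport
# from azure.storage.blob import BlobClient
#
# transport = RequestsTransport(session=session)
# client = BlobClient.from_connection_string(
#     conn_str, container_name="mycontainer", blob_name="big.bin",
#     transport=transport)
# data = client.download_blob(max_concurrency=MAX_CONCURRENCY).readall()
```

With the pool sized to `MAX_CONCURRENCY`, each parallel range download can hold its own pooled connection, and the "Connection pool is full" warning should no longer fire.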
Page URL
https://learn.microsoft.com/en-us/python/api/azure-storage-blob/azure.storage.blob.blobclient?view=azure-python#azure-storage-blob-blobclient-download-blob
Content source URL
https://github.com/MicrosoftDocs/azure-docs-sdk-python/blob/main/docs-ref-autogen/azure-storage-blob/azure.storage.blob.BlobClient.yml
Document Version Independent Id
9ee6555a-aaca-243f-409e-1ac5881e3dbc
Article author
@lmazuel