For the use-case of having separate clusters within the same AWS account, there is no need to fall back to pre-signed upload links, since the cluster is still internal to osparc-simcore: the S3 environment variables can be used directly. That would solve the issue of uploading files >5GB.
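As a rough sketch of what that could look like (the env-variable and bucket names below are assumptions, not the actual simcore settings), the dask-sidecar could talk to S3 directly with boto3, whose managed transfer switches to multipart uploads and is therefore not bound by the 5GB single-PUT limit:

```python
import os

import boto3

# Hypothetical env-variable names -- the real settings names in
# osparc-simcore may differ.
s3 = boto3.client(
    "s3",
    endpoint_url=os.environ.get("S3_ENDPOINT"),
    aws_access_key_id=os.environ["S3_ACCESS_KEY"],
    aws_secret_access_key=os.environ["S3_SECRET_KEY"],
)

# upload_file() uses boto3's managed transfer, which automatically switches
# to a multipart upload for large files, so objects >5GB work fine.
s3.upload_file(
    "big_output.zip",
    os.environ["S3_BUCKET"],
    "project_id/node_id/outputs/big_output.zip",
)
```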
BUT:
these links are still created with an expiration time (this is S3 policy, which makes sense). Currently the dv2 therefore creates these upload links with an expiration time; if the computational service fails to complete within that time, the upload link becomes invalid. (NOTE: this also happens on the default cluster)
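For illustration, this is roughly how such a pre-signed upload link is produced with boto3 (bucket and key are placeholders, not the actual dv2 code). Note that with Signature Version 4 the expiration cannot exceed 7 days (604800s), which already bounds the quick fix below:

```python
import boto3

s3 = boto3.client("s3")

# A pre-signed PUT link is only valid for ExpiresIn seconds; afterwards any
# upload against it is rejected with 403. SigV4 caps ExpiresIn at 604800 (7 days).
upload_link = s3.generate_presigned_url(
    "put_object",
    Params={"Bucket": "simcore-bucket", "Key": "project_id/node_id/outputs/result.zip"},
    ExpiresIn=3600,
)
```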
A quick & dirty fix is to set that expiration time to an insanely large value.
A better one is to make the dask-sidecar use the osparc Public API, which now allows uploading files >5GB. That would require passing an API key/secret with upload rights to the dask-sidecar, and then uploading to the correct location (is that actually possible??); /projects_id/node_id/output... should not go through the public API. See the sketch below.
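A sketch of that route, assuming the osparc Python client's documented `FilesApi.upload_file` usage (whether the sidecar can be handed a suitably scoped key/secret is exactly the open question above):

```python
import os

import osparc

# The sidecar would need OSPARC_API_KEY/OSPARC_API_SECRET injected with upload
# rights -- how to scope these per worker is unresolved.
cfg = osparc.Configuration(
    username=os.environ["OSPARC_API_KEY"],
    password=os.environ["OSPARC_API_SECRET"],
)
with osparc.ApiClient(cfg) as api_client:
    files_api = osparc.FilesApi(api_client)
    # upload_file() streams the file through the Public API (>5GB supported),
    # but it lands in the user's files area, not under /projects_id/node_id/output...
    uploaded = files_api.upload_file(file="big_output.zip")
    print(uploaded.id)
```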
After discussion, a more sustainable version of this is to have a separate service (clusters-keeper? or another one) that provides an entrypoint solely for computational workers to obtain upload links. Per-worker authentication would then even become possible.
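A minimal hypothetical sketch of such an entrypoint, e.g. with FastAPI (service name, route, and token scheme are all invented here for illustration):

```python
import boto3
from fastapi import Depends, FastAPI, Header, HTTPException

app = FastAPI()
s3 = boto3.client("s3")

# Placeholder: in a real service these tokens would come from per-worker
# registration, which is what enables per-worker authentication.
KNOWN_WORKER_TOKENS = {"example-worker-token"}


def authenticated_worker(x_worker_token: str = Header(...)) -> str:
    if x_worker_token not in KNOWN_WORKER_TOKENS:
        raise HTTPException(status_code=401, detail="unknown worker")
    return x_worker_token


@app.get("/v0/upload-link")
def get_upload_link(
    project_id: str,
    node_id: str,
    file_name: str,
    worker: str = Depends(authenticated_worker),
):
    # Hand out a short-lived pre-signed link scoped to the node's output folder;
    # a worker can re-request a fresh link if the previous one expires
    # mid-computation, which avoids the expiration problem above.
    key = f"{project_id}/{node_id}/outputs/{file_name}"
    link = s3.generate_presigned_url(
        "put_object",
        Params={"Bucket": "simcore-bucket", "Key": key},
        ExpiresIn=3600,
    )
    return {"link": link}
```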
sanderegg changed the title from "Use S3 Envs for external internal clusters to go over 5Gb" to "Computational backend: Use S3 Envs for AWS clusters that live inside the simcore stack" on Aug 23, 2023.