How to use multiple S3 buckets #3214
Comments
@mifi are multiple buckets supported? If not, should that be possible?
Uploading each file to multiple destinations

I assume what you wanted to do was to upload each file to multiple S3 buckets instead of just one. I'm not sure we want to implement this, because it would change the uppy/companion APIs: each uploaded file would have to return an array of upload-success objects, or trigger multiple success events. Maybe you can add a Lambda trigger on the S3 bucket that clones each object to a different bucket? (A rough sketch follows below.)

Multiple companion instances

I think companion was never designed to be instantiated many times, although judging by companion's API it looks like it should be possible. I think the error you're getting is because of the Object.freeze on companion's singleton config: the second instance ends up trying to overwrite an object that the first instance already froze.
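For illustration, a minimal sketch of the Lambda-based clone suggested above: an S3 "object created" trigger that copies each new object into a second bucket. The destination bucket name, region, and handler wiring are placeholders and not part of uppy/companion.

```js
// Sketch only: an S3-triggered Lambda that clones each newly created object
// into a second bucket. DESTINATION_BUCKET and the region are placeholders.
const { S3Client, CopyObjectCommand } = require('@aws-sdk/client-s3')

const s3 = new S3Client({ region: 'us-east-1' })
const DESTINATION_BUCKET = 'my-second-bucket'

exports.handler = async (event) => {
  for (const record of event.Records) {
    const sourceBucket = record.s3.bucket.name
    // Keys in S3 event notifications are URL-encoded, with spaces as '+'.
    const key = decodeURIComponent(record.s3.object.key.replace(/\+/g, ' '))
    await s3.send(new CopyObjectCommand({
      Bucket: DESTINATION_BUCKET,
      Key: key,
      // Keys containing special characters may need additional URL-encoding here.
      CopySource: `${sourceBucket}/${key}`,
    }))
  }
}
```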
I think that, in order for multiple instances of companion to be supported, the components that use globals/singletons need to be rewritten. A quick search suggests that several modules rely on that global state and would have to be rewritten to support this.
Thanks for your time, @mifi. Yes, I also found that out.
Cool. Out of curiosity, could you share what your use case is for uploading to multiple S3 buckets?
Well, our system includes several features, some of which work with different buckets for organization, permissions, etc. For me, using multiple buckets was nothing special; there are several reasons why a server might need to support multiple buckets or even multiple AWS accounts. So I think it would at least make sense if we could use multiple Uppy instances in the long run, even if they might not run optimally. If my workaround hadn't worked, my only option would have been to set up a second server.
I think it would be great if we could at least fix this error. You could say that it is not officially supported, but for now the workaround above works for us, so I guess people should be using it at their own risk?
I think just removing Object.freeze is not the right way to do it, because then two app instances created with different configs would compete to overwrite this singleton variable, and it would also cause issues with the other singletons/globals that are created. It boils down to whether we want to put in the effort of rewriting all the singleton global state so that it lives inside the companion instance/closure instead (a rough illustration follows below). If not, then I think we should throw a better error message if someone tries to initialize the companion app twice.
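To make the two options concrete, here is a minimal sketch, not Companion's actual source; all names are hypothetical. Option A moves the config into a per-instance closure; Option B keeps the module-level singleton but fails loudly on a second initialization.

```js
// Hypothetical names throughout; this illustrates the pattern, not Companion's code.

// Option A: per-instance state. Each call gets its own frozen config,
// so two apps created with different buckets no longer share a variable.
function createCompanionApp (options) {
  const config = Object.freeze({ ...options })
  return {
    getBucket: () => config.s3.bucket,
  }
}

// Option B: keep the module-level singleton, but refuse a second
// initialization with a clear error instead of the confusing failure
// hit when the frozen config is overwritten.
let globalConfig = null
function initOnce (options) {
  if (globalConfig !== null) {
    throw new Error('companion.app() was already initialized; multiple instances are not supported')
  }
  globalConfig = Object.freeze({ ...options })
  return globalConfig
}

module.exports = { createCompanionApp, initOnce }
```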
It's hard for me to tell whether wanting multiple app instances in this setup is common enough to be worth the rewrite. I'll leave that choice to you and/or @kvz. If not, then I agree a specific error message is a nice quick improvement.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Closing this, but I created a new feature request for allowing people to implement their own uploaders: #4390
Hello, I'm trying to connect a second S3 bucket to my Node.js companion server.
After looking at the configuration, I did not find any option to specify more than one bucket for the same instance, so I thought about adding a second instance on another route (roughly the setup sketched below).
So far so good, but as soon as I start the server, I get an error.
I can temporarily fix this error by commenting out the Object.freeze line.
Now, my question is whether this is a bug, or how else multiple buckets can be supported.
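For reference, a rough sketch of the setup described above: two companion instances mounted on different Express routes. The bucket names, secrets, and option shape are placeholders; depending on your @uppy/companion version, the exact options and return value of companion.app() may differ, and, as this issue discusses, the second instance currently runs into the frozen-config error.

```js
const express = require('express')
const bodyParser = require('body-parser')
const session = require('express-session')
const companion = require('@uppy/companion')

const app = express()
app.use(bodyParser.json())
app.use(session({ secret: 'some-secret', resave: false, saveUninitialized: false }))

// Shared settings; only the bucket and mount path differ per instance.
const makeOptions = (bucket, path) => ({
  providerOptions: {
    s3: {
      getKey: (req, filename) => filename,
      key: process.env.AWS_KEY,
      secret: process.env.AWS_SECRET,
      bucket,
      region: 'us-east-1',
    },
  },
  server: { host: 'localhost:3020', path },
  filePath: '/tmp',
  secret: 'another-secret',
})

// Mounting a second instance like this is what triggers the error discussed
// in this issue; it is a workaround, not a supported setup.
app.use('/companion', companion.app(makeOptions('bucket-a', '/companion')))
app.use('/companion-two', companion.app(makeOptions('bucket-b', '/companion-two')))

app.listen(3020)
```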