Lack of network connectivity between Fargate pods and self-managed workers #1196
Comments
@TBeijen the current security group management is quite messy since we still support the legacy setup. Today we no longer need to create the cluster security group, but it sounds like we're still doing it. There's a need for code cleanup and refactoring. That said, I think there is a variable for your use case.
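For reference, a minimal sketch of what that could look like, assuming the variable in question is `worker_create_cluster_primary_security_group_rules` (present in recent module versions; the exact name is an assumption here and should be checked against the version in use):

```hcl
# Hypothetical usage sketch; the variable name is assumed, verify it
# against your module version before relying on it.
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 13.0"

  cluster_name    = "hybrid-cluster"
  cluster_version = "1.18"
  vpc_id          = var.vpc_id   # assumed to be defined elsewhere
  subnets         = var.subnets

  # Creates ingress/egress rules between the worker security group and
  # the cluster primary security group, which Fargate pods are placed in.
  worker_create_cluster_primary_security_group_rules = true

  worker_groups_launch_template = [
    {
      name          = "workers"
      instance_type = "t3.medium"
    }
  ]
}
```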
@barryib Yup, found that just this morning. Looking at https://github.com/terraform-aws-modules/terraform-aws-eks/pull/858/files#diff-2fdb488192d2afd49fb090fcc8bd32fd3af72bcb789420915e78d6406ef9e2e4L4, the current legacy-compatible security groups are still there. Moving workers into a submodule has great potential for cleanup; a few things spring to mind.
Is there a sort of high-level roadmap for this kind of progress? I'd gladly help out (given time, which varies greatly from week to week).
I actually had this same problem today and eventually found it. I'm happy to write the code to make this simpler if the maintainers want to point me in a high-level direction that will integrate nicely with the current roadmap.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
This issue has been automatically closed because it has not had recent activity since being marked as stale. |
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
I have issues
I'm adding a Fargate profile to a cluster consisting of self-managed launch-template workers. I've noticed there's no network connectivity between the Fargate pods and the pods running on the EC2 nodes. This is because the cluster security group is not attached to the autoscaling EC2 workers.
I'm submitting a...
What is the current behavior?
Hybrid clusters consisting of Fargate pods and self-managed autoscaling groups lack pod-to-pod network connectivity. Pods can interact with their kubelet, and all kubelets can interact with the control plane, so from a Kubernetes perspective all pods appear healthy. However, network connectivity between pods on Fargate and pods on regular EC2 instances is impossible.
If this is a bug, how to reproduce? Please include a code sample if relevant.
Adding the cluster security group to workers: TBeijen@c949473
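Roughly, the idea behind the linked commit looks like the following (a sketch under assumptions, not the commit verbatim; `aws_security_group.workers` and `aws_eks_cluster.this` are placeholder names for resources defined elsewhere):

```hcl
# Sketch: attach the EKS-managed primary cluster security group to the
# worker launch template alongside the existing worker SG. Fargate pods
# are placed in the primary cluster SG automatically, so sharing it
# restores pod-to-pod connectivity.
resource "aws_launch_template" "workers" {
  name_prefix = "eks-workers-"
  # ... AMI, instance type, user data elided ...

  network_interfaces {
    security_groups = [
      aws_security_group.workers.id,
      # Primary cluster security group created by EKS for this cluster.
      aws_eks_cluster.this.vpc_config[0].cluster_security_group_id,
    ]
  }
}
```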
What's the expected behavior?
Full network connectivity between self-managed workers, managed node groups, and Fargate pods.
Are you able to fix this problem and submit a PR? Link here if you have already.
Yes
Environment details
Any other relevant info
Things to consider:
- vpc_config.cluster_security_group output as primary cluster security group id #828 (see the sketch after this list)
- (Totally out of scope of just this issue) What's the status of any (if any) refactor plans for launch-template worker groups? Also considering hard-to-fix problems like #737, which seem to originate from an overuse of random_pet.
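To illustrate the first point, a hedged sketch of what consuming the primary cluster security group id as a module output could look like (the output names are assumptions based on #828; verify them against the module version in use):

```hcl
# Assumed outputs: module.eks.cluster_primary_security_group_id and
# module.eks.worker_security_group_id. With these exposed, the cross-SG
# rules can be managed outside the module. protocol "-1" means all
# protocols; the port range is ignored in that case.
resource "aws_security_group_rule" "workers_from_fargate" {
  description              = "Fargate pods (primary cluster SG) to worker pods"
  type                     = "ingress"
  protocol                 = "-1"
  from_port                = 0
  to_port                  = 0
  security_group_id        = module.eks.worker_security_group_id
  source_security_group_id = module.eks.cluster_primary_security_group_id
}

resource "aws_security_group_rule" "fargate_from_workers" {
  description              = "Worker pods to Fargate pods"
  type                     = "ingress"
  protocol                 = "-1"
  from_port                = 0
  to_port                  = 0
  security_group_id        = module.eks.cluster_primary_security_group_id
  source_security_group_id = module.eks.worker_security_group_id
}
```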