
bsdpot/nomad-pot-driver PR #56: adding aliases as pot driver attribute (refers to #55)

Open · wants to merge 1 commit into base: master

Conversation

einsiedlerkrebs (Contributor)

Refers to #55.

@grembo (Contributor) commented Apr 30, 2024

Hm, this sounds like it won't scale well, unfortunately.

Like, what do you do if you run 5 or 10 instances of the same service?

I owe you an answer to the networking issues you had in that already closed task. Let me address this asap.

@einsiedlerkrebs (Contributor, Author)

> Like, what do you do if you run 5 or 10 instances of the same service?

Then it won't work. But if you have only one, it does. If you have multiple instances of the same service, you probably have them on different machines, and that is where the port-forward approach works. This PR tries to be a more elegant way of doing localhost tunnels.

@einsiedlerkrebs (Contributor, Author)

And since it is done with name resolution, no changes to the pot's internal address requests are needed if the service should later be announced differently, e.g. via DNS round robin.
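
To illustrate the idea (purely a sketch; the attribute name and value format below are assumptions, not the exact syntax introduced by this PR), a task in a Nomad job spec might look roughly like this:

```hcl
job "example" {
  group "app" {
    task "app" {
      driver = "pot"

      config {
        # ... regular pot driver settings (image, command, etc.) ...

        # Hypothetical alias attribute (name and format are assumptions):
        # make "db.internal" resolvable inside the pot, so the application
        # keeps using the same hostname whether it currently points at a
        # localhost tunnel or is later announced via DNS round robin.
        aliases = ["db.internal"]
      }
    }
  }
}
```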

@grembo (Contributor) commented Apr 30, 2024

I finally described most of our setup to you in #53

> Like, what do you do if you run 5 or 10 instances of the same service?
>
> Then it won't work. But if you have only one, it does. If you have multiple instances of the same service, you probably have them on different machines, and that is where the port-forward approach works. This PR tries to be a more elegant way of doing localhost tunnels.

Not necessarily: running multiple instances of the same service also happens on the same host when doing deployments. The way we do this with Nomad is:

  1. Start a canary deployment with the new version
  2. Wait until it becomes stable
  3. Switch traffic over from the old instance to the new one
  4. Wait a while
  5. Stop the old instance

After we start the deployment, all of these steps run automatically, orchestrated by Nomad.

This way we accomplish zero-downtime updates in our cluster, and in case a new version never stabilizes, the payload simply stays on the old version (a typical example is a new version that requires more configuration or new resources).

See here for the general concept: https://developer.hashicorp.com/nomad/tutorials/job-updates/job-rolling-update
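
For reference, a minimal sketch of the kind of update stanza that drives such a canary deployment in a Nomad job spec (values are illustrative, not taken from the actual cluster configuration described above):

```hcl
job "web" {
  group "app" {
    count = 1

    update {
      canary           = 1       # start the new version alongside the old one
      max_parallel     = 1
      health_check     = "checks"
      min_healthy_time = "30s"   # how long the canary must stay healthy
      healthy_deadline = "5m"    # give up if it never stabilizes in time
      auto_promote     = true    # switch traffic once the canary is stable
      auto_revert      = true    # otherwise stay on the old version
    }

    task "web" {
      driver = "pot"
      # ... pot driver config ...
    }
  }
}
```

With auto_promote and auto_revert set, Nomad performs steps 2 through 5 on its own; leaving auto_promote off instead requires an explicit `nomad deployment promote` before the old instance is stopped.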

@einsiedlerkrebs (Contributor, Author)

This sounds mighty and useful! It is not what I am looking for yet, but I will read on.
