
docker: support group allocated ports and host_networks #8623

Merged Aug 11, 2020 (3 commits)

Conversation

@nickethier (Member) commented Aug 10, 2020

This PR correctly plumbs through allocated group ports to tasks and adds a new ports field to the docker driver to support docker native port mapping with group allocated ports.

Since group ports are shared among all tasks, each docker task needs to specify which ports it wants mapped. This differs from the old port_map driver config in two ways:

  1. Task ports were always mapped, and port_map was only used to override individual port destinations. With the new ports field, only the specified labels are mapped.
  2. ports does not include the container port like port_map does, because the container port can be set in the individual port { } stanzas as the to field.
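For contrast, the deprecated task-level style looked roughly like this (a sketch based on the pre-0.12 docker driver behavior described above, not code from this PR):

```hcl
# Deprecated style: every port under resources.network was mapped
# automatically; port_map only overrode the container-side destination.
task "example" {
  driver = "docker"

  config {
    port_map {
      redis = 6379 # container port for the "redis" label
    }
  }

  resources {
    network {
      port "redis" {}
    }
  }
}
```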

Multi-interface host networks are also supported: the host IP is set correctly based on the configured host_network field of the port.

Ex.

group "example" {
  network {
    port "http" { }
    port "redis" {
      to = 6379
    }
  }
  task "example" {
    driver = "docker"
    config {
      ports = ["http", "redis"]
    }
  }
}
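Building on the example above, selecting a specific host network for a port might look like this (a sketch; the host network name "public" is a hypothetical label defined in client configuration):

```hcl
group "example" {
  network {
    port "http" {
      host_network = "public" # bind on the interface tagged "public"
    }
  }

  task "example" {
    driver = "docker"
    config {
      ports = ["http"]
    }
  }
}
```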

fixes #8488

@notnoop (Contributor) left a comment


Looks good - thanks for following up!

return c, fmt.Errorf("Trying to map ports but no network interface is available")
ports := newPublishedPorts(logger)
switch {
case task.Resources.Ports != nil && len(driverConfig.Ports) > 0:
Contributor:

This supports the case where only one source of ports is specified? Should we report an error, if we notice multiple sources as well?

Member Author:

I don't know if we should prevent the task from running, since one of the fields is deprecated. The docs already state that the two fields are incompatible.


Shouldn't a deprecated incompatible spec result in the job spec being invalid? I can see this easily tripping up people. Or at least log a warning?

Member Author:

We could log a warning, but I don't think that would help the user if they have to go digging through client logs. With driver plugins, the driver configuration can't be validated until runtime. There's an open issue somewhere to explore fixing that, but it's an in-depth change.

So our only option is to return an error and fail the allocation, which is totally reasonable. I can argue either way 😅
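The return-an-error option discussed here could be sketched as follows (hypothetical, simplified types for illustration; this is not the PR's actual code):

```go
package main

import (
	"errors"
	"fmt"
)

// DriverConfig is a hypothetical stand-in for the docker driver's
// task config, holding both the new and the deprecated port fields.
type DriverConfig struct {
	Ports   []string       // new: group-port labels to map
	PortMap map[string]int // deprecated: task-level label -> container port
}

// selectPortSource picks exactly one source of port configuration and
// rejects the incompatible combination instead of silently preferring one.
func selectPortSource(cfg DriverConfig) (string, error) {
	switch {
	case len(cfg.Ports) > 0 && len(cfg.PortMap) > 0:
		return "", errors.New("invalid docker config: ports and port_map are mutually exclusive")
	case len(cfg.Ports) > 0:
		return "ports", nil
	case len(cfg.PortMap) > 0:
		return "port_map", nil
	default:
		return "none", nil
	}
}

func main() {
	src, _ := selectPortSource(DriverConfig{Ports: []string{"http", "redis"}})
	fmt.Println(src) // prints "ports"

	_, err := selectPortSource(DriverConfig{
		Ports:   []string{"http"},
		PortMap: map[string]int{"http": 8080},
	})
	fmt.Println(err != nil) // prints "true": both sources set is an error
}
```

Failing the allocation with a message like this surfaces the conflict in `nomad alloc status` rather than only in client logs.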


Makes sense.

This is just my 5c as a user, but I'd much rather have unsupported things fail on the first attempt with an explanatory error message than scratch my head about why it's not working as intended ^^

Contributor:

IIUC it's invalid if both a group-level network and a task-level network are specified? Is that true? If so, we can validate early on job submission, regardless of driver?

For driver configuration warnings, we can also emit a TaskEvent in lieu of a log line. It'll be visible with nomad {alloc|job} status, though I agree it's still not very discoverable.
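The driver-agnostic, submission-time validation suggested here could be sketched as follows (hypothetical minimal types, not Nomad's actual job structs):

```go
package main

import (
	"errors"
	"fmt"
)

// Task and TaskGroup are hypothetical, pared-down job shapes; only the
// count of network stanzas matters for this check.
type Task struct {
	Networks int // task-level network stanzas
}

type TaskGroup struct {
	Networks int // group-level network stanzas
	Tasks    []Task
}

// validateGroup rejects a job at submission time if a group defines a
// network and any of its tasks also defines one, so the conflict is
// reported before any driver runs.
func validateGroup(g TaskGroup) error {
	if g.Networks == 0 {
		return nil
	}
	for _, t := range g.Tasks {
		if t.Networks > 0 {
			return errors.New("task network stanza is not allowed when the group defines a network")
		}
	}
	return nil
}

func main() {
	ok := TaskGroup{Networks: 1, Tasks: []Task{{Networks: 0}}}
	bad := TaskGroup{Networks: 1, Tasks: []Task{{Networks: 1}}}
	fmt.Println(validateGroup(ok) == nil)  // prints "true"
	fmt.Println(validateGroup(bad) != nil) // prints "true"
}
```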

@github-actions (bot):

I'm going to lock this pull request because it has been closed for 120 days ⏳. This helps our maintainers find and focus on the active contributions.
If you have found a problem that seems related to this change, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Dec 21, 2022
Successfully merging this pull request may close these issues: docker and group level network.