
Adding hosts for windows containers (--add-host, extra_hosts) does not work #1455

Closed
Dresel opened this issue Dec 22, 2017 · 37 comments

Comments

@Dresel

Dresel commented Dec 22, 2017

Expected behavior

Either the docker run --add-host argument or the docker-compose extra_hosts option should add a host entry to the container's hosts file for Windows containers.

Actual behavior

Neither the docker run --add-host argument nor the docker-compose extra_hosts option adds a host entry to the hosts file for Windows containers.

Information

  • Windows: Windows 10, 1709
  • Docker: 17.12.0-ce-rc4
  • Compose: 1.18.0

Steps to reproduce the behavior

Non-working example (Windows container)

  1. Run docker run -it --add-host me:127.0.0.1 microsoft/nanoserver
  2. Run ping me

Working example (Linux container), for comparison

  1. Run docker run -it --add-host me:127.0.0.1 ubuntu
  2. Run apt-get update
  3. Run apt-get install iputils-ping
  4. Run ping me
@JackUkleja

I also see this behavior.

@hamish-riahi

I also have the same problem.

@pasiorovuo

Same behavior confirmed.

@artisticcheese

Isn't this supposed to be filed under moby/moby?

@icenold

icenold commented Feb 9, 2018

Does anyone have a workaround for this?

@cnickel

cnickel commented Feb 9, 2018

@icenold, I did the following within the Dockerfile; it has worked for my needs.

RUN $file = $Env:windir+'\System32\drivers\etc\hosts'; `
'10.0.0.1 some.host.com' | Add-Content -PassThru $file;

Note that I use the backtick as my escape character.
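
For context, the snippet above assumes the Dockerfile escape character has been switched to the backtick and that PowerShell is the shell for RUN. A minimal sketch of the surrounding Dockerfile, with the base image tag and the IP/hostname as placeholders:

# escape=`
FROM mcr.microsoft.com/windows/servercore:ltsc2019
SHELL ["powershell", "-Command"]

# Append an entry to the Windows container's hosts file at image build time
RUN $file = $Env:windir + '\System32\drivers\etc\hosts'; `
    Add-Content -Path $file -Value '10.0.0.1 some.host.com'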

@artisticcheese

Probably much easier to just do:

 "10.0.0.1 some.host.com" | Out-File -Append "$($Env:windir)\drivers\etc\hosts"

@icenold

icenold commented Feb 9, 2018

Is there no way to do this via Compose?

@artisticcheese

It's bugged out, as you can see in this thread. It does not do what is expected. I believe this should be filed under moby/moby, not docker/for-win.

@icenold

icenold commented Mar 30, 2018

Any updates?

@ghost

ghost commented Apr 24, 2018

I have the same issue. Any updates?

@zachChilders

I also ran into this issue today. I seem to be unable to map hosts at all on microsoft/servercore, even with the above workarounds.

@cblackuk

cblackuk commented Apr 25, 2018

@zachChilders
The only way I managed to configure it was with echo... PowerShell would not work for some reason...

RUN echo 10.0.0.1 somehost.domain.local >> "C:\Windows\System32\drivers\etc\hosts"

Not sure it helps...

@lkts

lkts commented May 20, 2018

Any updates on this?

@jtaylor100

It's a real pain that this isn't supported. It would be really helpful to be able to dynamically add hosts from the command line for development and staging purposes, where real host names are not practical to add.

@tkoestler

I've run into the same issue. I need to add some host entries purely for development. It works great with Linux containers, but we have cases where we must be able to run Windows containers as well. This seems fairly fundamental...

@starcraft66

starcraft66 commented Aug 28, 2018

I have run into this issue as well, and it is a platform-breaking limitation in my opinion. The only option to work around this seems to be manually adding entries to the hosts file in a "RUN" directive (but this only works when building on Windows!). If I build a Linux image on a Linux host, any modifications I make to the hosts file inside the image using a RUN directive are ignored, which means I won't be able to use it properly when pushing and pulling it to a Windows LCOW host.

EDIT: Just to be clear, I am encountering this problem with both Windows and Linux containers. I tried the reproducer @Dresel posted above with ubuntu and it doesn't work: when I try to ping me, I get ping: me: No address associated with hostname, and the /etc/hosts file is completely empty.

I am running Docker CE Stable 18.06.0-ce-win72 (19098) with experimental features enabled on Windows 10 Professional.

@poliez

poliez commented Nov 20, 2018

> I have run into this issue as well, and it is a platform-breaking limitation in my opinion. The only option to work around this seems to be manually adding entries to the hosts file in a "RUN" directive (but this only works when building on Windows!). If I build a Linux image on a Linux host, any modifications I make to the hosts file inside the image using a RUN directive are ignored, which means I won't be able to use it properly when pushing and pulling it to a Windows LCOW host.
>
> EDIT: Just to be clear, I am encountering this problem with both Windows and Linux containers. I tried the reproducer @Dresel posted above with ubuntu and it doesn't work: when I try to ping me, I get ping: me: No address associated with hostname, and the /etc/hosts file is completely empty.
>
> I am running Docker CE Stable 18.06.0-ce-win72 (19098) with experimental features enabled on Windows 10 Professional.

I'm reporting the same exact issue. Any updates?

@Arash-Sabet

Same issue here. I think it makes sense to expedite addressing this problem.

@rpramodd

rpramodd commented Jan 9, 2019

We are facing the same issue: --add-host is not working, and the --network option is not supported.

@icenold

icenold commented Jan 11, 2019

Still no update?

@imphasing

This is kind of annoying because you end up needing an ENTRYPOINT script if you want to do it in a generic way. For my use case I need some network-level configuration that isn't available at image build time, so the ENTRYPOINT ends up running a script to update the hosts file and then start the process.

This is janky as hell; it would be really nice to do it from another layer up in the stack, like a compose file or a command-line option. Any update on this issue?
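
A rough sketch of that entrypoint approach, assuming a Windows container with PowerShell available; the EXTRA_HOSTS variable, its name:ip;name:ip format, and C:\app\MyApp.exe are hypothetical placeholders:

# entrypoint.ps1 (hypothetical): append hosts entries from EXTRA_HOSTS ("name1:1.2.3.4;name2:5.6.7.8"), then start the app
$hostsFile = Join-Path $Env:windir 'System32\drivers\etc\hosts'
if ($Env:EXTRA_HOSTS) {
    foreach ($entry in $Env:EXTRA_HOSTS -split ';') {
        $name, $ip = $entry -split ':'
        Add-Content -Path $hostsFile -Value "$ip $name"
    }
}
# Hand off to the real workload (placeholder)
& 'C:\app\MyApp.exe'

# Dockerfile lines wiring the script in
COPY entrypoint.ps1 C:/entrypoint.ps1
ENTRYPOINT ["powershell", "-File", "C:/entrypoint.ps1"]

The entries could then be supplied at run time, e.g. docker run -e EXTRA_HOSTS="me:127.0.0.1" <image>, which approximates what --add-host would otherwise do.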

@kamavarapu

Please update.

@Iristyle

Iristyle commented May 3, 2019

I think that this probably needs to be filed in https://github.com/docker/cli/issues to get visibility.

Iristyle added a commit to Iristyle/pupperware that referenced this issue May 3, 2019
 - Remove the domain introspection / setting of AZURE_DOMAIN env var
   as this does not work as originally thought.

   Instead, hardcode the DNS suffix `.internal` to each service in the
   compose stack, and make sure that `dns_search` for `internal` will
   use the Docker DNS resolver when dealing with these hosts. Note that
   these compose file settings only affect the configuration of the
   DNS resolver, *not* resolv.conf. This is different from the
   docker run behavior, which *does* modify resolv.conf. Also note,
   config file locations vary depending on whether or not systemd is
   running in the container.

   It's not "safe" to refer to services in the cluster by only their
   short service names like `puppet`, `puppetdb` or `postgres` as they
   can conflict with hosts on the external network with these names
   when `resolv.conf` appends DNS search suffixes.

   When docker compose creates the user defined network, it copies the
   DNS settings from the host to the `resolv.conf` in each of the
   containers. This often takes search domains from the outside network
   and applies them to containers.

   When network resolutions happen, any default search suffix will be
   applied to short names when the dns option for ndots is not set to 0.
   So for instance, given a `resolv.conf` that contains:

   search delivery.puppetlabs.net

   A DNS request for `puppet` becomes `puppet.delivery.puppetlabs.net`
   which will fail to resolve in the Docker DNS resolver, then be sent
   to the next DNS server in the `nameserver` list, which may resolve it
   to a different host in the external network. This behaves this way
   because `resolv.conf` also sets secondary DNS servers from the host.

   While it is possible to try and service requests for an external
   domain like `delivery.puppetlabs.net` with the embedded Docker DNS
   resolver, it's better to instead choose a domain suffix to use inside
   the cluster.

   There are some good details on how various network types configure:
   docker/for-linux#488 (comment)

 - Note that the .internal domain is typically not recommended for
   production given the only IANA reserved domains are .example, .test,
   .invalid or .localhost. However, given the DNS resolver is set to
   own the resolution of .internal, this is a compromise.

   In production it's recommended to use a subdomain of a domain that
   you own, but that's not yet configurable in this compose file. A
   future commit will make this configurable.

 - Another workaround for this problem would be to set the ndots option
   in resolv.conf to 0 per the documentation at
   http://man7.org/linux/man-pages/man5/resolv.conf.5.html

   However that can't be done for two reasons:

   - docker-compose schema doesn't actually support setting DNS options
     docker/cli#1557

   - k8s sets ndots to 5 by default, so we don't want to be at odds

 - A further, but implausible workaround would be to modify the host DNS
   settings to remove any search suffixes.

 - The original FQDN change being reverted in this commit was introduced
   in 2549f19

   "
   Lastly, the Windows specific docker-compose.windows.yml sets up a
   custom alias in the "default" network so that an extra DNS name for
   puppetserver can be set based on the FQDN that Facter determines.
   Without this additional DNS reservation, the `puppetserver ca`
   command will be unable to connect to the REST endpoint.

   A better long-term solution is making sure puppetserver is setup to
   point to `puppet` as the host instead of an FQDN.
   "

   With the PUPPETSERVER_HOSTNAME value set on the puppetserver
   container, both certname and server are set to puppet.internal,
   preventing a need to synchronize a domain name.

 - Note that at this time there is also a discrepancy in how Facter 3
   behaves vs Facter 2.

   The Facter 2 gem is being used by the `puppetserver ca` gem based
   application, and may return a different value for
   Facter.value('domain') than calling `facter domain` at the command
   line.  Such is the case inside the puppet network, where Facter 2
   returns `ops.puppetlabs.net` while Facter 3 returns the value
   `delivery.puppetlabs.net`

   This discrepancy makes it so that the `puppetserver ca` application
   cannot find the client side cert on disk and fails outright.

   Facter 2 should not be included in the puppetserver packages, and
   changes have been made to packaging for future releases.

   For now, setting PUPPETSERVER_HOSTNAME configuration value in the
   puppetserver container will set the `puppet.conf` values explicitly
   to the desired DNS name to work around this problem.

 - Resolution of `postgres.internal` seems to rely on having the
   `hostname` value explicitly defined in the docker-compose file, even
   though hostname values supposedly don't interact with DNS in docker

 - This PR is also made possible by switching over to using the Ubuntu
   based container from the Alpine container (performed in a prior
   commit), due to DNS resolution problems with Alpine inside LCOW:

   moby/libnetwork#2371
   microsoft/opengcs#303

 - Another avenue that was investigated to resolve the DNS problem in
   Alpine was to feed host:ip mappings in through --add-host, but it
   turns out that Windows doesn't yet support that feature per

   docker/for-win#1455

 - Finally, these changes are also made in preparation of switching the
   pupperware-commercial repo over to a private builder
Iristyle added a commit to Iristyle/pupperware that referenced this issue May 4, 2019
Iristyle added a commit to Iristyle/pupperware that referenced this issue May 4, 2019
Iristyle added a commit to Iristyle/pupperware that referenced this issue May 6, 2019
Iristyle added a commit to Iristyle/pupperware that referenced this issue May 6, 2019
 - Remove the domain introspection / setting of AZURE_DOMAIN env var
   as this does not work as originally thought.

   Instead, hardcode the DNS suffix `.internal` to each service in the
   compose stack, and make sure that `dns_search` for `internal` will
   use the Docker DNS resolver when dealing with these hosts. Note that
   these compose file settings only affect the configuration of the
   DNS resolver, *not* resolv.conf. This is different from the
   docker run behavior, which *does* modify resolv.conf. Also note,
   config file locations vary depending on whether or not systemd is
   running in the container.

   It's not "safe" to refer to services in the cluster by only their
   short service names like `puppet`, `puppetdb` or `postgres` as they
   can conflict with hosts on the external network with these names
   when `resolv.conf` appends DNS search suffixes.

   When docker compose creates the user defined network, it copies the
   DNS settings from the host to the `resolv.conf` in each of the
   containers. This often takes search domains from the outside network
   and applies them to containers.

   When network resolutions happen, any default search suffix will be
   applied to short names when the dns option for ndots is not set to 0.
   So for instance, given a `resolv.conf` that contains:

   search delivery.puppetlabs.net

   A DNS request for `puppet` becomes `puppet.delivery.puppetlabs.net`
   which will fail to resolve in the Docker DNS resolver, then be sent
   to the next DNS server in the `nameserver` list, which may resolve it
   to a different host in the external network. This behaves this way
   because `resolv.conf` also sets secondary DNS servers from the host.

   While it is possible to try and service requests for an external
   domain like `delivery.puppetlabs.net` with the embedded Docker DNS
   resolver, it's better to instead choose a domain suffix to use inside
   the cluster.

   There are some good details on how various network types configure:
   docker/for-linux#488 (comment)

 - Note that the .internal domain is typically not recommended for
   production given the only IANA reserved domains are .example, .test,
   .invalid or .localhost. However, given the DNS resolver is set to
   own the resolution of .internal, this is a compromise.

   In production it's recommended to use a subdomain of a domain that
   you own, but that's not yet configurable in this compose file. A
   future commit will make this configurable.

 - Another workaround for this problem would be to set the ndots option
   in resolv.conf to 0 per the documentation at
   http://man7.org/linux/man-pages/man5/resolv.conf.5.html

   However that can't be done for two reasons:

   - docker-compose schema doesn't actually support setting DNS options
     docker/cli#1557

   - k8s sets ndots to 5 by default, so we don't want to be at odds

 - A further, but implausible workaround would be to modify the host DNS
   settings to remove any search suffixes.

 - The original FQDN change being reverted in this commit was introduced
   in 2549f19

   "
   Lastly, the Windows specific docker-compose.windows.yml sets up a
   custom alias in the "default" network so that an extra DNS name for
   puppetserver can be set based on the FQDN that Facter determines.
   Without this additional DNS reservation, the `puppetserver ca`
   command will be unable to connect to the REST endpoint.

   A better long-term solution is making sure puppetserver is setup to
   point to `puppet` as the host instead of an FQDN.
   "

   With the PUPPETSERVER_HOSTNAME value set on the puppetserver
   container, both certname and server are set to puppet.internal,
   inside of puppet.conf, preventing a need to inject a domain name as
   was done previously.

   This is necessary because of a discrepancy in how Facter 3 behaves vs
   Facter 2, which creates a mismatch between how the host cert is
   initially generated (using Facter 3) and how `puppetserver ca`
   finds the files on disk (using Facter 2), that setting
   PUPPETSERVER_HOSTNAME will explicitly work around.

   Specifically, Facter 2 may return a different Facter.value('domain')
   than calling `facter domain` using Facter 3 at the command line.
   Such is the case inside the puppet network, where Facter 2 returns
   `ops.puppetlabs.net` while Facter 3 returns `delivery.puppetlabs.net`

   Without explicitly setting PUPPETSERVER_HOSTNAME, this makes cert
   files on disk get written as *.delivery.puppetlabs.net, yet the
   `puppetserver ca` application looks for the client certs on disk as
   *.ops.puppetlabs.net, which causes `puppetserver ca` to fail.

 - Facter 2 should not be included in the puppetserver packages, and
   changes have been made to packaging for future releases, which may
   remove the need for the above.

 - This PR is also made possible by switching over to using the Ubuntu
   based container from the Alpine container (performed in a prior
   commit), due to DNS resolution problems with Alpine inside LCOW:

   moby/libnetwork#2371
   microsoft/opengcs#303

 - Another avenue that was investigated to resolve the DNS problem in
   Alpine was to feed host:ip mappings in through --add-host, but it
   turns out that Windows doesn't yet support that feature per

   docker/for-win#1455

 - Finally, these changes are also made in preparation of switching the
   pupperware-commercial repo over to a private builder

 - Additionally update k8s / Bolt specs to be consistent with updated
   naming
@Cloudmersive

This is a huge problem! Any updates on this?

@duki994

duki994 commented May 16, 2019

For me, extra_hosts works in Compose with the latest Windows 10 update.

Docker version: 18.09.2
Compose: 1.23.2

@icenold

icenold commented May 16, 2019

> For me, extra_hosts works in Compose with the latest Windows 10 update.
>
> Docker version: 18.09.2
> Compose: 1.23.2

Do you mean Linux containers on a Windows 10 host, or Windows containers on a Windows 10 host?

@duki994

duki994 commented May 17, 2019

@icenold
Linux containers on a Windows 10 host (latest update, from a day or two ago).

@bormm

bormm commented May 21, 2019

> For me, extra_hosts works in Compose with the latest Windows 10 update.
>
> Docker version: 18.09.2
> Compose: 1.23.2

This issue is about Windows containers, not Linux containers. Linux containers on Windows work completely differently.
Just to be clear: the mentioned version is still not working.

@docker-robott
Collaborator

Issues go stale after 90d of inactivity.
Mark the issue as fresh with a /remove-lifecycle stale comment.
Stale issues will be closed after an additional 30d of inactivity.

Prevent issues from auto-closing with a /lifecycle frozen comment.

If this issue is safe to close now please do so.

Send feedback to Docker Community Slack channels #docker-for-mac or #docker-for-windows.
/lifecycle stale

@TCGV

TCGV commented Aug 19, 2019

Same issue here, Docker Desktop version 2.1.0.1 (37199).

Iristyle added a commit to Iristyle/pupperware that referenced this issue Aug 26, 2019
 - LCOW has a bug where --add-host is not supported for `docker run`
   and extra_hosts is not supported for docker-compose.yml:
   docker/for-win#1455
   moby/moby#30555

 - To work around this issue, in the run_agent method, write a /etc/hosts
   file and map that directly into the container by mounting a temp
   directory like c:\windows\temp\abcd:/etc
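
A rough sketch of the workaround described in the commit above, run from PowerShell on the Windows host; the directory, image, and addresses are placeholders, and note that the mount replaces the container's entire /etc:

# Hypothetical sketch: build a hosts file with LF line endings, then mount its directory over /etc under LCOW
New-Item -ItemType Directory -Force C:\Windows\Temp\abcd | Out-Null
[System.IO.File]::WriteAllText('C:\Windows\Temp\abcd\hosts', "127.0.0.1 localhost`n10.0.0.1 some.host.com`n")
docker run --rm -v C:\Windows\Temp\abcd:/etc ubuntu cat /etc/hosts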
Iristyle added a commit to Iristyle/pupperware that referenced this issue Aug 27, 2019
@sobriant74

+1, same problem on Docker for Windows.

@mikolaj-jankowski

Any news?

@norcis

norcis commented Jan 20, 2020

Still not working :(

@mat007
Member

mat007 commented Jan 20, 2020

This does not look like an issue with the Docker Desktop application itself but with the upstream Docker Windows container implementation, so I'm closing this issue. Could you please open an issue on https://github.com/moby/moby and/or https://github.com/docker/compose instead, as that is the more appropriate place.

@mat007 mat007 closed this as completed Jan 20, 2020
@Roemer

Roemer commented Apr 29, 2020

As this still does not work, adding the values to the hosts file during the build of the Docker image is still a good workaround.
I had quite a few issues getting it to work correctly when using SHELL ["powershell"...] and then adding lines to the hosts file, because every character ended up with an added space, probably due to encoding.
Using Add-Content always gave the error "Stream was not readable", so I was stuck with echo.
The way I got it working with multiple entries is as follows:

SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop'; $ProgressPreference = 'SilentlyContinue';"]

RUN cmd /c \"echo 1.2.3.4 host1 >> C:\Windows\System32\drivers\etc\hosts\"; `
    cmd /c \"echo 5.6.7.8 host2 >> C:\Windows\System32\drivers\etc\hosts\";
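
The added spaces mentioned above are most likely because Windows PowerShell's Out-File and >> redirection write UTF-16 by default, and the extra NUL bytes show up as spaces when the hosts file is read as ANSI. Routing through cmd avoids that, as above; an alternative sketch that stays in PowerShell by forcing the encoding (host names and addresses are placeholders):

# Append entries with an explicit non-Unicode encoding so the hosts file stays readable
"1.2.3.4 host1", "5.6.7.8 host2" |
    Out-File -Append -Encoding ascii "$Env:windir\System32\drivers\etc\hosts"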

@docker-robott
Collaborator

Closed issues are locked after 30 days of inactivity.
This helps our team focus on active issues.

If you have found a problem that seems similar to this, please open a new issue.

Send feedback to Docker Community Slack channels #docker-for-mac or #docker-for-windows.
/lifecycle locked

@docker docker locked and limited conversation to collaborators Jun 25, 2020