
K3s: Fail to get self-defined configmap configuration after system reboots #2052

Closed

hgliu1985 opened this issue Jul 23, 2020 · 2 comments

hgliu1985 commented Jul 23, 2020

Hello, I have a config YAML file like this:

```yaml
apiVersion: v1
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
          pods insecure
          upstream
          fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . 123.123.123.123
        ready :8181
        cache 30
        loop
        reload
        loadbalance
    }
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
```

I run the command `kubectl apply -f ./coredns_cm.yaml` and then check the ConfigMap; its content looks like this:

```yaml
$ kubectl get cm coredns -n kube-system -o yaml
apiVersion: v1
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
          pods insecure
          upstream
          fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . 123.123.123.123
        ready :8181
        cache 30
        loop
        reload
        loadbalance
    }
  NodeHosts: |
    172.11.11.11 manager
kind: ConfigMap
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","data":{"Corefile":".:53 {\n errors\n health\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n upstream\n fallthrough in-addr.arpa ip6.arpa\n }\n prometheus :9153\n forward . 123.123.123.123\n ready :8181\n cache 30\n loop\n reload\n loadbalance\n}\n"},"kind":"ConfigMap","metadata":{"annotations":{},"name":"coredns","namespace":"kube-system"}}
    objectset.rio.cattle.io/applied: '{"apiVersion":"v1","data":{"Corefile":".:53 {\n errors\n health\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n upstream\n fallthrough in-addr.arpa ip6.arpa\n }\n hosts /etc/coredns/NodeHosts {\n reload 1s\n fallthrough\n }\n prometheus :9153\n forward . /etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}\n"},"kind":"ConfigMap","metadata":{"annotations":{"objectset.rio.cattle.io/id":"","objectset.rio.cattle.io/owner-gvk":"k3s.cattle.io/v1, Kind=Addon","objectset.rio.cattle.io/owner-name":"coredns","objectset.rio.cattle.io/owner-namespace":"kube-system"},"labels":{"objectset.rio.cattle.io/hash":"bce283298811743a0386ab510f2f67ef74240c57"},"name":"coredns","namespace":"kube-system"}}'
    objectset.rio.cattle.io/id: ""
    objectset.rio.cattle.io/owner-gvk: k3s.cattle.io/v1, Kind=Addon
    objectset.rio.cattle.io/owner-name: coredns
    objectset.rio.cattle.io/owner-namespace: kube-system
  creationTimestamp: "2020-07-21T09:51:28Z"
  labels:
    objectset.rio.cattle.io/hash: bce283298811743a0386ab510f2f67ef74240c57
  ...
```

That looks right. However, after I reboot the whole system, the content of the ConfigMap changes as below:

```yaml
$ kubectl get cm coredns -n kube-system -o yaml | more
apiVersion: v1
data:
  Corefile: |
    .:53 {
        errors
        health
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
          pods insecure
          upstream
          fallthrough in-addr.arpa ip6.arpa
        }
        hosts /etc/coredns/NodeHosts {
          reload 1s
          fallthrough
        }
        prometheus :9153
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
  NodeHosts: |
    172.11.11.11 manager
kind: ConfigMap
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration:
      ...
```
The value of `forward` changed from 123.123.123.123 to /etc/resolv.conf. Why? Why was my configuration lost?
It seems that after the system reboots, the coredns ConfigMap is reset to its default (see `metadata.annotations.kubectl.kubernetes.io/last-applied-configuration`).

That's very strange. How can I keep the configuration defined in my YAML file after the system reboots?

Environment:

```
$ k3s -version
k3s version v1.18.4+k3s1 (97b7a0e)
$ uname -a
Linux manager 5.2.6-1.el7.elrepo.x86_64 #1 SMP Sun Aug 4 10:13:32 EDT 2019 x86_64 x86_64 x86_64 GNU/Linux
```

Thanks a lot!

brandond (Member) commented Jul 23, 2020

K3s bundles manifests for its default components (coredns, traefik, etc.); these are written to disk and periodically re-applied to the cluster, so they will overwrite any configuration you apply manually. You can read more about where these files live, and how to disable them, here:
https://rancher.com/docs/k3s/latest/en/architecture/#automatically-deployed-manifests
https://rancher.com/docs/k3s/latest/en/installation/install-options/server-config/#kubernetes-components
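Following the docs linked above, a rough sketch of the two usual approaches: either start the server with the packaged coredns disabled and deploy your own manifest, or leave a `.skip` marker next to the bundled manifest so k3s stops re-applying it. This assumes the default data directory `/var/lib/rancher/k3s`; check the linked pages for the exact flags supported by your k3s version.

```shell
# Option 1: run the server with the packaged coredns disabled,
# then apply your own customized coredns manifest.
k3s server --disable coredns

# Option 2: keep the existing deployment but stop k3s from
# re-applying the bundled manifest on startup by creating a
# .skip file next to it (default data directory assumed).
touch /var/lib/rancher/k3s/server/manifests/coredns.yaml.skip
```

With either option, a manually applied ConfigMap (such as the `coredns_cm.yaml` above) should no longer be overwritten on reboot.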

hgliu1985 (Author) commented

Thanks a lot, I will try!
