Hello, I have a config YAML file like this:
apiVersion: v1
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
          pods insecure
          upstream
          fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . 123.123.123.123
        ready :8181
        cache 30
        loop
        reload
        loadbalance
    }
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
I ran the command "kubectl apply -f ./coredns_cm.yaml" and then checked the configmap; its content looks like this:
kubectl get cm coredns -n kube-system -o yaml
apiVersion: v1
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
          pods insecure
          upstream
          fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . 123.123.123.123
        ready :8181
        cache 30
        loop
        reload
        loadbalance
    }
  NodeHosts: |
    172.11.11.11 manager
kind: ConfigMap
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","data":{"Corefile":".:53 {\n errors\n health\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n upstream\n fallthrough in-addr.arpa ip6.arpa\n }\n prometheus :9153\n forward . 123.123.123.123\n ready :8181\n cache 30\n loop\n reload\n loadbalance\n}\n"},"kind":"ConfigMap","metadata":{"annotations":{},"name":"coredns","namespace":"kube-system"}}
    objectset.rio.cattle.io/applied: '{"apiVersion":"v1","data":{"Corefile":".:53 {\n errors\n health\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n upstream\n fallthrough in-addr.arpa ip6.arpa\n }\n hosts /etc/coredns/NodeHosts {\n reload 1s\n fallthrough\n }\n prometheus :9153\n forward . /etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}\n"},"kind":"ConfigMap","metadata":{"annotations":{"objectset.rio.cattle.io/id":"","objectset.rio.cattle.io/owner-gvk":"k3s.cattle.io/v1, Kind=Addon","objectset.rio.cattle.io/owner-name":"coredns","objectset.rio.cattle.io/owner-namespace":"kube-system"},"labels":{"objectset.rio.cattle.io/hash":"bce283298811743a0386ab510f2f67ef74240c57"},"name":"coredns","namespace":"kube-system"}}'
    objectset.rio.cattle.io/id: ""
    objectset.rio.cattle.io/owner-gvk: k3s.cattle.io/v1, Kind=Addon
    objectset.rio.cattle.io/owner-name: coredns
    objectset.rio.cattle.io/owner-namespace: kube-system
  creationTimestamp: "2020-07-21T09:51:28Z"
  labels:
    objectset.rio.cattle.io/hash: bce283298811743a0386ab510f2f67ef74240c57
  ............
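As a side note, a quicker way to inspect just the forward line of the live Corefile (assuming a standard kubectl with JSONPath output) is:
kubectl -n kube-system get cm coredns -o jsonpath='{.data.Corefile}' | grep forward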
That looks right. However, after I reboot the whole system, the content of the configmap changes as below:
kubectl get cm coredns -n kube-system -o yaml | more
apiVersion: v1
data:
  Corefile: |
    .:53 {
        errors
        health
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
          pods insecure
          upstream
          fallthrough in-addr.arpa ip6.arpa
        }
        hosts /etc/coredns/NodeHosts {
          reload 1s
          fallthrough
        }
        prometheus :9153
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
  NodeHosts: |
    172.11.11.11 manager
kind: ConfigMap
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration:
      .........
The value of forward changes from 123.123.123.123 to /etc/resolv.conf. Why? Why is my configuration lost?
It seems that after the system reboots, the coredns configmap is reset to the default (please see metadata.annotations: kubectl.kubernetes.io/last-applied-configuration).
That is very strange. How can I keep the configuration defined in my YAML file after the system reboots?
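For what it's worth, the objectset.rio.cattle.io/owner-gvk annotation in the output above suggests the ConfigMap is owned by a k3s Addon rather than by my kubectl apply. A quick way to print just that owner annotation (the dots in the key may need to be escaped for JSONPath) is:
kubectl -n kube-system get cm coredns -o jsonpath='{.metadata.annotations.objectset\.rio\.cattle\.io/owner-gvk}'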
Environment:
k3s -version
k3s version v1.18.4+k3s1 (97b7a0e)
uname -a
Linux manager 5.2.6-1.el7.elrepo.x86_64 #1 SMP Sun Aug 4 10:13:32 EDT 2019 x86_64 x86_64 x86_64 GNU/Linux
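If it helps with diagnosis, grepping the k3s service log around boot time (assuming k3s runs as the systemd unit named k3s) might show whether the bundled CoreDNS manifest is being re-applied on startup:
journalctl -u k3s --since "today" | grep -i coredns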
Thanks a lot!