Docker container security #457
To handle OpenShift, Ambassador needs to work when run as an arbitrary UID with GID 0. More information is at https://blog.openshift.com/jupyter-on-openshift-part-6-running-as-an-assigned-user-id/, but the short version is that one can rely on group write permissions (how exactly it's "better" to rely on a known group than on a known user, I'm not exactly sure ;) ).
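For reference, a minimal sketch of that group-permissions pattern in a Dockerfile — the paths and base image here are illustrative assumptions, not Ambassador's actual build:

```Dockerfile
# Illustrative sketch only -- not Ambassador's actual Dockerfile.
FROM alpine:3.8

# Keep application state outside /etc, and give group 0 (the root group) the
# same permissions as the owning user. OpenShift runs containers as an
# arbitrary UID but always with GID 0, so group-write is what matters.
RUN mkdir -p /ambassador/ambassador-config \
    && chgrp -R 0 /ambassador \
    && chmod -R g=u /ambassador

# Any non-zero UID works; OpenShift substitutes its own assigned UID at runtime.
USER 1000
```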
(We'll likely use …)
I'll take a stab at this. I will try to run it on OpenShift, which should take care of some of the security concerns.
Well, apparently the problem is not only the UIDs. Running `oc logs -f ambassador-6f6696f656-s75r7 -c ambassador` shows:
```
Traceback (most recent call last):
  File "/application/kubewatch.py", line 493, in <module>
    main()
  File "/usr/lib/python3.6/site-packages/click/core.py", line 722, in __call__
    return self.main(*args, **kwargs)
  File "/usr/lib/python3.6/site-packages/click/core.py", line 697, in main
    rv = self.invoke(ctx)
  File "/usr/lib/python3.6/site-packages/click/core.py", line 895, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/usr/lib/python3.6/site-packages/click/core.py", line 535, in invoke
    return callback(*args, **kwargs)
  File "/application/kubewatch.py", line 476, in main
    sync(restarter)
  File "/application/kubewatch.py", line 313, in sync
    for x in v1.list_namespaced_config_map(restarter.namespace).items ]
  File "/usr/lib/python3.6/site-packages/kubernetes/client/apis/core_v1_api.py", line 12395, in list_namespaced_config_map
    (data) = self.list_namespaced_config_map_with_http_info(namespace, **kwargs)
  File "/usr/lib/python3.6/site-packages/kubernetes/client/apis/core_v1_api.py", line 12497, in list_namespaced_config_map_with_http_info
    collection_formats=collection_formats)
  File "/usr/lib/python3.6/site-packages/kubernetes/client/api_client.py", line 335, in call_api
    _preload_content, _request_timeout)
  File "/usr/lib/python3.6/site-packages/kubernetes/client/api_client.py", line 148, in __call_api
    _request_timeout=_request_timeout)
  File "/usr/lib/python3.6/site-packages/kubernetes/client/api_client.py", line 371, in request
    headers=headers)
  File "/usr/lib/python3.6/site-packages/kubernetes/client/rest.py", line 250, in GET
    query_params=query_params)
  File "/usr/lib/python3.6/site-packages/kubernetes/client/rest.py", line 240, in request
    raise ApiException(http_resp=r)
kubernetes.client.rest.ApiException: (403)
Reason: Forbidden
HTTP response headers: HTTPHeaderDict({'Cache-Control': 'no-store', 'Content-Type': 'application/json', 'X-Content-Type-Options': 'nosniff', 'Date': 'Mon, 11 Jun 2018 08:24:54 GMT', 'Content-Length': '371'})
HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"configmaps is forbidden: User \"system:serviceaccount:myproject:default\" cannot list configmaps in the namespace \"myproject\": User \"system:serviceaccount:myproject:default\" cannot list configmaps in project \"myproject\"","reason":"Forbidden","details":{"kind":"configmaps"},"code":403}
AMBASSADOR: kubewatch sync exited with status 1
Here's the envoy.json we were trying to run with:
ls: /etc/envoy*.json: No such file or directory
No config generated.
AMBASSADOR: shutting down
```
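That 403 is an RBAC problem rather than a UID problem: the pod runs under the default service account, which is not allowed to list configmaps. A minimal sketch of the kind of grant that would address it — the names and namespace below are assumptions taken from the log, and the API group/version may differ on older OpenShift releases:

```yaml
# Sketch only: allow the default service account in "myproject"
# to read the configmaps kubewatch needs.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: ambassador-configmap-reader
  namespace: myproject
rules:
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ambassador-configmap-reader
  namespace: myproject
subjects:
- kind: ServiceAccount
  name: default
  namespace: myproject
roleRef:
  kind: Role
  name: ambassador-configmap-reader
  apiGroup: rbac.authorization.k8s.io
```

On OpenShift, `oc adm policy add-role-to-user view -z default -n myproject` is a common shortcut with a similar (broader) effect.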
This commit modifies the directories created in the Dockerfile so that they live under /ambassador/ instead of /etc/. This lets Ambassador run as a non-root user with no access to /etc/. Fix emissary-ingress#457
This commit lets Ambassador run as a non-root user and moves all Ambassador-related configuration to /ambassador inside the container. Fix emissary-ingress#457
@alexgervais See https://www.getambassador.io/reference/running -- Ambassador 0.35.0 supports running as non-root. Let us know if you run into trouble!
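For anyone trying this, here is a sketch of the kind of pod spec you would pair with it — the UID and image tag are illustrative assumptions, not requirements from the docs:

```yaml
# Illustrative fragment: run the Ambassador container as a non-root user.
apiVersion: v1
kind: Pod
metadata:
  name: ambassador-example
spec:
  securityContext:
    runAsUser: 8888   # example non-zero UID; any non-root UID should do
  containers:
  - name: ambassador
    image: quay.io/datawire/ambassador:0.35.0
    ports:
    - containerPort: 80
```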
Just ran a test, non-root is working for me in OpenShift 3.7.
@PaulM667 Great news!!
We are running sysdig-falco in our Kubernetes cluster and it is complaining about the following:

```
k8s.node_name=ip-172-16-12-29.ec2.internal 10:53:58.969608520: Error File below /etc opened for writing (user=root command=python3 /application/kubewatch.py sync /etc/ambassador-config /etc/envoy.json parent=entrypoint.sh pcmdline=entrypoint.sh ./entrypoint.sh file=/etc/ambassador-config-1/payment-service-default.yaml program=python3 gparent=<NA> ggparent=<NA> gggparent=<NA>) k8s.pod=<NA> container=21b0bf1f68db
```
I would suggest changing the configuration location from `/etc/ambassador-config` to simply `/ambassador-config`. I also strongly feel that Ambassador's processes (`entrypoint.sh`, `python3`, and `envoy`) should run as a non-root user.