ARM64 Support #214
Not right now, but we will probably look into it for nvidia-docker 2.0.
I made the same request to some NVIDIA engineers during the last GTC 2016 in Amsterdam. I hope this feature comes as soon as possible. It would be very useful to run containers on the TX1 SoM.
Have you succeeded in running Docker on the TK1/TX1? scosaje [email protected] wrote on Monday, October 10, 2016 at 22:16:
Thanks, guys. Would it be possible to use Linux cgroups with Mesos on ARM as well? If anyone can, and is interested, please send mail directly. Thanks. On Wed, Oct 12, 2016 at 11:54 AM, Gotrek77 [email protected] wrote:
@3XX0 Hi Jonathan, when do you think we will see ARM support?
I can't really say, we have other priorities right now and the ARM support is somewhat tricky.
Hello @3XX0, is there any news on nvidia-docker support for the Jetson TX1? Thanks
Also interested in this topic. @3XX0, can you describe in more detail what changes would be needed to support ARM64 (targeting the TX1)? It seems this is a rather popular topic. My group has some bandwidth and wouldn't mind looking into this. Thanks,
It's pretty complicated and it's going to take a lot of time. First, we won't support ARM until the new 2.0 runtime becomes stable and works flawlessly on x86 (hopefully we will publish it soon). Secondly, we need to work with the L4T kernel team to support containers there. And lastly, we need to reimplement all the driver logic to work with the Tegra iGPU/dGPU stack, bearing in mind that it needs to support all of our architectures (Drive PXs will probably be first) and that our Tegra drivers are drastically changing.
Hi @3XX0, what can we do to raise the priority of this feature? It's very important for us to have nvidia-docker on the TX1.
Hello guys, is there any news about an nvidia-docker version that supports the Jetson TK1?
Hello guys,
I just got my TX2 and I am also interested in running everything as Docker containers, so an update on this would be very helpful.
@Gotrek77 @GONZALORUIZ sorry but you're on your own for now, it's not on our near-term roadmap.
We are developing a vision application running on top of a Docker environment, and our customers are looking for a local solution where the TX2 would fit very nicely. Could you please increase the priority of this request?
@flx42 we need this for our product. Several people have asked to increase the priority, or asked how to push your management in this direction. It is very important for us to know whether this feature is on a roadmap.
Bump, because Docker support would be great. Also looking for a cloud GPU platform access API.
As an alternative to nvidia-docker until official support is available, we were able to get Docker running on the TX2. You need to make some kernel modifications and pass some parameters to Docker containers so they have access to the GPU, but it is working for those who want to try it. You can check out the Tegra-Docker GitHub repo for more information.
I installed Docker on my TX1 today. I ran into a make issue with CONFIG_CGROUP_HUGETLB while building the custom kernel, so I omitted the optional CONFIG_CGROUP_HUGETLB change and the Image was built successfully. Docker images (e.g. FROM arm64v8/ubuntu:16.04) can now be used on the TX1.

The Tegra-Docker solution works (I have verified it), but it still feels a little like a workaround. Also, I believe the Docker version they recommend (1.12.6 currently) is a little dated; it might be better to build from source or use a Debian package. BTW, the Tegra-Docker solution references the JetsonHacks blog for custom kernel build instructions (TX1/TX2), and the stable docker-ce arm64 install instructions for Ubuntu.

To check kernel compatibility with Docker, run the check-config.sh script from the Docker site:

nvidia@tegra-ubuntu:~/bin$ bash check-config.sh
Generally Necessary:
Optional Features:
Limits:

nvidia@tegra-ubuntu:~/bin$ docker info

nvidia@tegra-ubuntu:~/docker/dq$ docker run --device=/dev/nvhost-ctrl --device=/dev/nvhost-ctrl-gpu --device=/dev/nvhost-prof-gpu --device=/dev/nvmap --device=/dev/nvhost-gpu --device=/dev/nvhost-as-gpu -v /usr/lib/aarch64-linux-gnu/tegra:/usr/lib/aarch64-linux-gnu/tegra device_query
CUDA Device Query (Runtime API) version (CUDART static linking)
Detected 1 CUDA Capable device(s)
Device 0: "NVIDIA Tegra X1"
deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 8.0, CUDA Runtime Version = 8.0, NumDevs = 1, Device0 = NVIDIA Tegra X1
Hi all, Thanks
Any updates?
+1 for this feature.
What's the point of having the Docker-friendly kernel in JetPack 3.2 if there's no Docker support for the Jetson TX2 running that JetPack? :(
^
Regular Docker will work fine as long as you don't need GPU access. If you do need access to the GPU in your containers, you need to make sure you give your containers access to some specific libraries and devices. You can read the details in the Tegra-Docker GitHub repo.
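The approach described above can be sketched as a small wrapper that assembles the --device flags and the driver-library mount. This is a hypothetical sketch, not the Tegra-Docker script itself: the device paths and mount come from the TX1 docker run example earlier in this thread, and the function name and dry-run behavior (it only prints the command) are my own; adjust the paths for your L4T release.

```shell
#!/bin/sh
# Hypothetical sketch: build a `docker run` command that forwards the Tegra
# GPU device nodes and driver libraries into a container. Prints the command
# instead of executing it (dry run), so you can inspect it first.
tegra_docker_cmd() {
    # Device nodes from the TX1 example in this thread; may differ per L4T release.
    devices="/dev/nvhost-ctrl /dev/nvhost-ctrl-gpu /dev/nvhost-prof-gpu"
    devices="$devices /dev/nvmap /dev/nvhost-gpu /dev/nvhost-as-gpu"

    flags=""
    for d in $devices; do
        flags="$flags --device=$d"
    done

    # Mount the host's Tegra driver libraries into the container.
    flags="$flags -v /usr/lib/aarch64-linux-gnu/tegra:/usr/lib/aarch64-linux-gnu/tegra"

    echo "docker run$flags $*"
}
```

For example, `tegra_docker_cmd device_query` prints the full docker run invocation; pipe the output to sh to actually execute it.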
Is this a feature on the current roadmap for nvidia-docker? Although the above comment points to a valid workaround, it doesn't have all the nice features that nvidia-docker provides.
+1, nvidia-docker 2.0 does not support the TX1.
Sorry, it's still not on the roadmap, given that the driver stacks are very different today.
We use Tegra-Docker on the TX2 as a workaround, but we really hope that nvidia-docker can add support for the TX2 platform officially.
Is there any chance that this feature will appear on the roadmap given the recent release of the Jetson Xavier?
Hi, is such a feature now planned for the newer devices? Any roadmap?
I am waiting too!
I am waiting too
... also waiting!
Hi, I have created a Docker image on a Jetson TX2 which contains the NVIDIA drivers, CUDA, and the cuDNN libraries. I am trying to give this image access to the GPU and CUDA drivers through the tx2-docker script (https://github.com/Technica-Corporation/Tegra-Docker), but with no success. I think tx2-docker runs successfully, as you can see below:

wkh@tegra-ubuntu:~/Tegra-Docker/bin$ ./tx2-docker run openhorizon/aarch64-tx2-cudabase

But when I try to run deviceQuery inside my container, it gives me this result:

root@bc1130fc6be4:/usr/local/cuda-8.0/samples/1_Utilities/deviceQuery# ./deviceQuery
CUDA Device Query (Runtime API) version (CUDART static linking)
cudaGetDeviceCount returned 38

Any comments? Why is this script not giving GPU access?
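Error 38 from cudaGetDeviceCount is cudaErrorNoDevice ("no CUDA-capable device is detected"), which usually means the container cannot see the GPU device nodes at all. One way to narrow that down is to check, inside the failing container, which Tegra device nodes actually made it through. The helper below is a hypothetical diagnostic sketch (the node names come from the docker run examples earlier in this thread); it takes the device directory as a parameter, defaulting to /dev.

```shell
# Hypothetical diagnostic sketch: report which Tegra GPU device nodes are
# visible under a given directory (normally /dev). Node names are taken
# from the TX1/TX2 examples in this thread and may vary by L4T release.
check_tegra_devices() {
    devdir="${1:-/dev}"
    for name in nvhost-ctrl nvhost-ctrl-gpu nvhost-prof-gpu \
                nvmap nvhost-gpu nvhost-as-gpu; do
        if [ -e "$devdir/$name" ]; then
            echo "present: $devdir/$name"
        else
            echo "missing: $devdir/$name"
        fi
    done
}
```

If any node is reported missing inside the container, the corresponding --device flag was not applied, and the CUDA runtime will fail exactly as shown above.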
How's this feature coming along?
Looking for this for the Jetson Nano.
Could this run on a non-NVIDIA platform? I mean, if I use an ARM device with a GPU, could I install a Docker version with GPU support so that my application could make use of the GPU on that ARM device?
This is the support matrix for the 1.1.0 package released on Tuesday, 19 May:
Please let us know if this resolves your issue.
Listed as supported. Closed as resolved. |
Is there a version of nvidia-docker for the ARM platform, specifically for the NVIDIA Jetson TK1 and TX1?