How to connect to Nvidia MPS server from a Docker container?


Post by Eden79 » Wed Dec 04, 2019 1:12 am

I want to share the GPU among many Docker containers so that their work can overlap. Nvidia provides a utility to do this: the Multi-Process Service (MPS), which is documented here. Specifically, the documentation says:
When CUDA is first initialized in a program, the CUDA driver attempts to connect to the MPS control daemon. If the connection attempt fails, the program continues to run as it normally would without MPS. If, however, the connection attempt succeeds, the MPS control daemon proceeds to ensure that an MPS server, launched with the same user id as that of the connecting client, is active before returning to the client. The MPS client then proceeds to connect to the server. All communication between the MPS client, the MPS control daemon, and the MPS server is done using named pipes.
By default, the named pipes are placed in /tmp/nvidia-mps/, so I share that directory with the containers using a volume.
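
For reference, this is roughly what I am doing now. The image name and the application are just placeholders for my setup, and I use --gpus all from Docker 19.03 (on older versions it would be --runtime=nvidia):

# on the host: start the MPS control daemon (it creates the pipes in /tmp/nvidia-mps by default)
nvidia-cuda-mps-control -d

# launch the container with the pipe directory mounted at the same path
docker run --gpus all -v /tmp/nvidia-mps:/tmp/nvidia-mps my-cuda-image ./my_cuda_app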

But this is not enough for the CUDA driver inside the container to "see" the MPS server.
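
For what it's worth, this is how I check on the host whether the containerized process actually attached to MPS (get_server_list and get_client_list are the query commands described in the MPS documentation; the server PID below is whatever the first command prints):

# on the host: list running MPS servers, then the client PIDs attached to one of them
echo get_server_list | nvidia-cuda-mps-control
echo "get_client_list <server_pid>" | nvidia-cuda-mps-control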

Which resources should I share between the host and the container so it can connect to the MPS server?
