By default, the named pipes are placed in /tmp/nvidia-mps/, so I share that directory with the containers using a volume. The MPS documentation says:

When CUDA is first initialized in a program, the CUDA driver attempts to connect to the MPS control daemon. If the connection attempt fails, the program continues to run as it normally would without MPS. If, however, the connection attempt succeeds, the MPS control daemon proceeds to ensure that an MPS server, launched with the same user id as that of the connecting client, is active before returning to the client. The MPS client then proceeds to connect to the server. All communication between the MPS client, the MPS control daemon, and the MPS server is done using named pipes.
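For reference, this is roughly what my current setup looks like (the image name is just a placeholder):

```shell
# On the host: start the MPS control daemon in background mode.
# By default it creates its named pipes under /tmp/nvidia-mps/.
nvidia-cuda-mps-control -d

# Run the container with GPU access, bind-mounting the default
# pipe directory so the container sees the same named pipes.
# "my-cuda-image" is a placeholder for the actual image.
docker run --gpus all \
    -v /tmp/nvidia-mps:/tmp/nvidia-mps \
    my-cuda-image
```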
But this is not enough for the CUDA driver inside the container to "see" the MPS server.
Which resources should I share between the host and the container so that the CUDA driver inside it can connect to the MPS server?