How to customize nvidia-smi's output to show PID username

The normal output of nvidia-smi looks like this:

Thu May 10 09:05:07 2018       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 384.111                Driver Version: 384.111                   |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 108...  Off  | 00000000:0A:00.0 Off |                  N/A |
| 61%   74C    P2   195W / 250W |   5409MiB / 11172MiB |    100%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0      5973      C   ...master_JPG/build/tools/program_pytho.bin  4862MiB |
|    0     46324      C   python                                       537MiB |
+-----------------------------------------------------------------------------+

As you can see, it shows the list of PIDs which are running on the GPU. However, I also want to know the names of the PIDs. Can I customize the output to show the username of each PID? I already know how to show the username of an individual PID:

ps -u -p $pid
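
For reference, nvidia-smi itself can also emit just the compute-process information in machine-readable form, which makes the PIDs easy to feed into ps. This assumes a driver recent enough to support --query-compute-apps, and the exact field names may differ slightly between driver versions:

nvidia-smi --query-compute-apps=pid,process_name,used_memory --format=csv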

Please help me. Thank you very much.

UPDATE: I've posted the solution that worked for me below. I've also uploaded this to Github as a simple script for those who need detailed GPU information:

https://github.com/ManhTruongDang/check-gpu

I created a script that takes the nvidia-smi output and enriches it with more information: https://github.com/peci1/nvidia-htop.

It is a Python script that parses the GPU process list, extracts the PIDs, runs them through ps to gather more information, and then substitutes nvidia-smi's process list with the enriched listing.
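
A rough shell sketch of the same idea (this is not the actual nvidia-htop.py code; it assumes the 2018-era table layout shown above, where the PID is the third column of each process row):

# Pull the PID column out of the process table at the bottom of nvidia-smi's
# output, then hand the PIDs to ps for the user, CPU/memory usage and command.
nvidia-smi \
  | awk '/^\|/ && $2 ~ /^[0-9]+$/ && $3 ~ /^[0-9]+$/ {print $3}' \
  | xargs -r ps -up    # -r is GNU xargs: skip ps when no GPU processes are running

The real script goes further and reprints the whole nvidia-smi header with the enriched process table underneath, as in the example below.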

Example of use:

$ nvidia-smi | nvidia-htop.py -l
Mon May 21 15:06:35 2018
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 390.25                 Driver Version: 390.25                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 108...  Off  | 00000000:04:00.0 Off |                  N/A |
| 53%   75C    P2   174W / 250W |  10807MiB / 11178MiB |     97%      Default |
+-------------------------------+----------------------+----------------------+
|   1  GeForce GTX 108...  Off  | 00000000:05:00.0 Off |                  N/A |
| 66%   82C    P2   220W / 250W |  10783MiB / 11178MiB |    100%      Default |
+-------------------------------+----------------------+----------------------+
|   2  GeForce GTX 108...  Off  | 00000000:08:00.0 Off |                  N/A |
| 45%   67C    P2    85W / 250W |  10793MiB / 11178MiB |     51%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
|  GPU   PID     USER    GPU MEM  %MEM  %CPU  COMMAND                                                                                               |
|    0  1032 anonymou   10781MiB   308   3.7  python train_image_classifier.py --train_dir=/mnt/xxxxxxxx/xxxxxxxx/xxxxxxxx/xxxxxxx/xxxxxxxxxxxxxxx  |
|    1 11021 cannotte   10765MiB   114   1.5  python3 ./train.py --flagfile /xxxxxxxx/xxxxxxxx/xxxxxxxx/xxxxxxxxx/xx/xxxxxxxxxxxxxxx                |
|    2 25544 nevermin   10775MiB   108   2.0  python -m xxxxxxxxxxxxxxxxxxxxxxxxxxxxx                                                               |
+-----------------------------------------------------------------------------+

I did it with nvidia-smi -q -x, which is the XML-style output of nvidia-smi:

ps -up `nvidia-smi -q -x | grep pid | sed -e 's/<pid>//g' -e 's/<\/pid>//g' -e 's/^[[:space:]]*//'`
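
The same pipeline, split into stages for readability (it assumes each process ID appears on its own <pid>...</pid> line in the XML report):

pids=$(nvidia-smi -q -x |            # full XML report
       grep pid |                    # keep only the <pid>...</pid> lines
       sed -e 's/<pid>//g' \
           -e 's/<\/pid>//g' \
           -e 's/^[[:space:]]*//')   # strip the tags and leading whitespace
ps -up $pids                         # user-oriented ps listing for those PIDs

Note that ps will complain if there are no GPU processes at all, since $pids is then empty.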

Jay Stanley, I could alias Junwon Lee's command using xargs as follows:

alias gpu_user_usage="nvidia-smi -q -x | grep pid | sed -e 's/<pid>//g' -e 's/<\/pid>//g' -e 's/^[[:space:]]*//' | xargs ps -up"

(I could not comment due to reputation limitations...)
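
If getting the quoting right inside an alias is fiddly, an equivalent shell function sidesteps the problem; this is just the same pipeline wrapped in a function:

# Same pipeline as the alias above, as a bash function (e.g. in ~/.bashrc).
gpu_user_usage() {
    nvidia-smi -q -x | grep pid |
        sed -e 's/<pid>//g' -e 's/<\/pid>//g' -e 's/^[[:space:]]*//' |
        xargs ps -up
}

It is called the same way as the alias: just run gpu_user_usage.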

Comments
  • "However I also want to know the names of the PIDs". It already shows that
  • @talonmies No. I want the names of the users of these PIDs. See my answer for more information
  • a number of related topics are in this question
  • When I tried this, the output at the bottom does not seem to be lined up correctly, especially when the path in the "COMMAND" column is very long
  • @DangManhTruong Could you raise it as an issue on github together with the output in your terminal?
  • how to get gpu utilization for each process?
  • @debonair I'm not sure if it's even possible... At least nvidia-smi doesn't provide such information
  • ..oh, it should be \s\+
  • This is great! When I try to alias this, bash tries to execute the PIDs as commands. Do you have a tip for aliasing it?