Note that much of the functionality of NVSMI is provided by the underlying NVML C-based library. See the NVIDIA developer website link below for more information about NVML. NVML-based Python bindings are also available.
The output of NVSMI is not guaranteed to be backwards compatible. However, both NVML and the Python bindings are backwards compatible, and should be the first choice when writing any tools that must be maintained across NVIDIA driver releases.
NVML SDK: http://developer.nvidia.com/nvidia-management-library-nvml/
Python bindings: http://pypi.python.org/pypi/nvidia-ml-py/
- csv - comma separated values (MANDATORY)
- noheader - skip first line with column headers
- nounits - don't print units for numerical values
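Output produced with these format options can be consumed by ordinary CSV tooling. Below is a minimal, hypothetical sketch of parsing `--format=csv,noheader,nounits` output; the sample query fields and values are illustrative, not taken from a real device:

```python
import csv
import io

def parse_nvsmi_csv(text):
    """Parse nvidia-smi CSV output (noheader,nounits assumed) into a list of
    rows, each row a list of stripped string fields."""
    reader = csv.reader(io.StringIO(text))
    return [[field.strip() for field in row] for row in reader if row]

# Sample output as it might appear for --query-gpu=index,name,temperature.gpu
sample = "0, Tesla K20c, 45\n1, Tesla K20c, 47\n"
rows = parse_nvsmi_csv(sample)
```

With `noheader` omitted, the first row returned would be the column headers; a caller can simply slice it off.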
GPU reset is not guaranteed to work in all cases. It is not recommended for production environments at this time. In some situations there may be HW components on the board that fail to revert to an initial state following the reset request. This is more likely to be seen on Fermi-generation products than on Kepler, and more likely to be seen if the reset is being performed on a hung GPU.
Following a reset, it is recommended that the health of the GPU be verified before further use. The nvidia-healthmon tool is a good choice for this test. If the GPU is not healthy, a complete reset should be performed by power cycling the node.
Visit http://developer.nvidia.com/gpu-deployment-kit to download the GDK and nvidia-healthmon.
- Return code 0 - Success
- Return code 2 - A supplied argument or flag is invalid
- Return code 3 - The requested operation is not available on target device
- Return code 4 - The current user does not have permission to access this device or perform this operation
- Return code 6 - A query to find an object was unsuccessful
- Return code 8 - A device's external power cables are not properly attached
- Return code 9 - NVIDIA driver is not loaded
- Return code 10 - NVIDIA Kernel detected an interrupt issue with a GPU
- Return code 12 - NVML Shared Library couldn't be found or loaded
- Return code 13 - Local version of NVML doesn't implement this function
- Return code 14 - infoROM is corrupted
- Return code 15 - The GPU has fallen off the bus or has otherwise become inaccessible
- Return code 255 - Other error or internal driver error occurred
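A script that invokes nvidia-smi (e.g. via `subprocess`) can map these return codes to messages before deciding how to react. A minimal sketch, using only the codes documented above (the function name is illustrative):

```python
# Return-code meanings as documented for nvidia-smi
NVSMI_RETURN_CODES = {
    0: "Success",
    2: "A supplied argument or flag is invalid",
    3: "The requested operation is not available on target device",
    4: "The current user does not have permission to access this device or perform this operation",
    6: "A query to find an object was unsuccessful",
    8: "A device's external power cables are not properly attached",
    9: "NVIDIA driver is not loaded",
    10: "NVIDIA Kernel detected an interrupt issue with a GPU",
    12: "NVML Shared Library couldn't be found or loaded",
    13: "Local version of NVML doesn't implement this function",
    14: "infoROM is corrupted",
    15: "The GPU has fallen off the bus or has otherwise become inaccessible",
    255: "Other error or internal driver error occurred",
}

def describe_return_code(code):
    """Translate an nvidia-smi exit status into its documented meaning."""
    return NVSMI_RETURN_CODES.get(code, "Unknown return code")
```

For example, a monitoring wrapper could treat code 9 (driver not loaded) as a fatal configuration error while retrying on 255.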
- The driver model currently in use. Always "N/A" on Linux.
- The driver model that will be used on the next reboot. Always "N/A" on Linux.
If any of the fields below returns "Unknown Error", an additional infoROM verification check is performed and an appropriate warning message is displayed.
- Image Version
- Global version of the infoROM image. The image version, like the VBIOS version, uniquely identifies the exact infoROM flashed on the board, in contrast to the infoROM object versions, which only indicate supported features.
- OEM Object
- Version for the OEM configuration data.
- ECC Object
- Version for the ECC recording data.
- Power Object
- Version for the power management data.
Each GOM is designed to meet specific user needs.
In "All On" mode everything is enabled and running at full speed.
The "Compute" mode is designed for running only compute tasks. Graphics operations are not allowed.
The "Low Double Precision" mode is designed for running graphics applications that don't require high bandwidth double precision.
GOM can be changed with the (--gom) flag.
Supported on GK110 M-class and X-class Tesla products from the Kepler family. Not supported on Quadro and Tesla C-class products.
- The GOM currently in use.
- The GOM that will be used on the next reboot.
- PCI bus number, in hex
- PCI device number, in hex
- PCI domain number, in hex
- Device Id
- PCI vendor device id, in hex
- Sub System Id
- PCI Sub System id, in hex
- Bus Id
- PCI bus id as "domain:bus:device.function", in hex
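The "domain:bus:device.function" bus id is a common key for correlating nvidia-smi output with other system tools. A minimal, hypothetical parsing sketch (all fields are hex, per the description above; the function name is illustrative):

```python
import re

def parse_pci_bus_id(bus_id):
    """Parse a PCI bus id of the form "domain:bus:device.function"
    (all fields in hex) into a dict of integers."""
    m = re.fullmatch(
        r"([0-9A-Fa-f]+):([0-9A-Fa-f]+):([0-9A-Fa-f]+)\.([0-9A-Fa-f]+)",
        bus_id)
    if m is None:
        raise ValueError("not a domain:bus:device.function bus id: %r" % bus_id)
    domain, bus, device, function = (int(g, 16) for g in m.groups())
    return {"domain": domain, "bus": bus, "device": device, "function": function}
```

Because the fields are parsed as hex of any width, this also accepts abbreviated variants such as "0:2:0.0" alongside the canonical "0000:02:00.0".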
- The current link generation and width. These may be reduced when the GPU is not in use.
- The maximum link generation and width possible with this GPU and system configuration. For example, if the GPU supports a higher PCIe generation than the system supports then this reports the system PCIe generation.
- The type of bridge chip. Reported as N/A if the bridge chip doesn't exist.
- Firmware Version
- The firmware version of the bridge chip. Reported as N/A if the bridge chip doesn't exist.
If all throttle reasons are returned as "Not Active" it means that clocks are running as high as possible.
- Idle
- Nothing is running on the GPU and the clocks are dropping to Idle state. This limiter may be removed in a later release.
- Application Clocks Setting
- GPU clocks are limited by the applications clocks setting, which can be changed e.g. with nvidia-smi --applications-clocks=
- SW Power Cap
- SW Power Scaling algorithm is reducing the clocks below requested clocks because the GPU is consuming too much power. E.g. SW power cap limit can be changed with nvidia-smi --power-limit=
- HW Slowdown
- HW Slowdown (reducing the core clocks by a factor of 2 or more) is engaged.
This is an indicator of:
* Temperature being too high
* External Power Brake Assertion is triggered (e.g. by the system power supply)
* Power draw is too high and Fast Trigger protection is reducing the clocks
- Unknown
- Some other unspecified factor is reducing the clocks.
- Total size of FB memory.
- Used size of FB memory.
- Available size of FB memory.
- Total size of BAR1 memory.
- Used size of BAR1 memory.
- Available size of BAR1 memory.
"Default" means multiple contexts are allowed per device.
"Exclusive Thread" means only one context is allowed per device, usable from one thread at a time.
"Exclusive Process" means only one context is allowed per device, usable from multiple threads at a time.
"Prohibited" means no contexts are allowed per device (no compute apps).
"EXCLUSIVE_PROCESS" was added in CUDA 4.0. Prior CUDA releases supported only one exclusive mode, which is equivalent to "EXCLUSIVE_THREAD" in CUDA 4.0 and beyond.
For all CUDA-capable products.
Note: During driver initialization when ECC is enabled, one can see high GPU and memory utilization readings. This is caused by the ECC memory scrubbing mechanism performed during driver initialization.
- Percent of time over the past sample period during which one or more kernels was executing on the GPU. The sample period may be between 1 second and 1/6 second depending on the product.
- Percent of time over the past sample period during which global (device) memory was being read or written. The sample period may be between 1 second and 1/6 second depending on the product.
- The ECC mode that the GPU is currently operating under.
- The ECC mode that the GPU will operate under after the next reboot.
A note about volatile counts: On Windows this is once per boot. On Linux this can be more frequent. On Linux the driver unloads when no active clients exist. Hence, if persistence mode is enabled or there is always a driver client active (e.g. X11), then Linux also sees per-boot behavior. If not, volatile counts are reset each time a compute app is run.
Tesla and Quadro products from the Fermi and Kepler family can display total ECC error counts, as well as a breakdown of errors based on location on the chip. The locations are described below. Location-based data for aggregate error counts requires Inforom ECC object version 2.0. All other ECC counts require ECC object version 1.0.
- Device Memory
- Errors detected in global device memory.
- Register File
- Errors detected in register file memory.
- L1 Cache
- Errors detected in the L1 cache.
- L2 Cache
- Errors detected in the L2 cache.
- Texture Memory
- Parity errors detected in texture memory.
- Total errors detected across entire chip. Sum of Device Memory, Register File, L1 Cache, L2 Cache and Texture Memory.
- Double Bit ECC
- The number of GPU device memory pages that have been retired due to a double bit ECC error.
- Single Bit ECC
- The number of GPU device memory pages that have been retired due to multiple single bit ECC errors.
- Pending
- Checks if any GPU device memory pages are pending retirement on the next reboot. Pages that are pending retirement can still be allocated, and may cause further reliability issues.
- Core GPU temperature. For all discrete and S-class products.
- Power State
- Power State is deprecated and has been renamed to Performance State in 2.285. To maintain XML compatibility, in XML format Performance State is listed in both places.
- Power Management
- A flag that indicates whether power management is enabled. Either "Supported" or "N/A". Requires Inforom PWR object version 3.0 or higher or Kepler device.
- Power Draw
- The last measured power draw for the entire board, in watts. Only available if power management is supported. This reading is accurate to within +/- 5 watts. Requires Inforom PWR object version 3.0 or higher or Kepler device.
- Power Limit
- The software power limit, in watts. Set by software such as nvidia-smi. Only available if power management is supported. Requires Inforom PWR object version 3.0 or higher or Kepler device. On Kepler devices Power Limit can be adjusted using -pl,--power-limit= switches.
- Enforced Power Limit
- The power management algorithm's power ceiling, in watts. Total board power draw is manipulated by the power management algorithm such that it stays under this value. This limit is the minimum of various limits such as the software limit listed above. Only available if power management is supported. Requires a Kepler device.
- Default Power Limit
- The default power management algorithm's power ceiling, in watts. Power Limit will be set back to Default Power Limit after driver unload. Only on supported devices from Kepler family.
- Min Power Limit
- The minimum value in watts that power limit can be set to. Only on supported devices from Kepler family.
- Max Power Limit
- The maximum value in watts that power limit can be set to. Only on supported devices from Kepler family.
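A tool that sets the software power limit would typically validate the request against the reported Min and Max Power Limit before applying it. A minimal sketch; the function name and sample values are illustrative, not values from any specific device:

```python
def validate_power_limit(requested_w, min_limit_w, max_limit_w):
    """Check a requested software power limit (watts) against the device's
    reported Min Power Limit and Max Power Limit before applying it."""
    if not (min_limit_w <= requested_w <= max_limit_w):
        raise ValueError(
            "requested %.1f W is outside the supported range [%.1f, %.1f] W"
            % (requested_w, min_limit_w, max_limit_w))
    return requested_w
```

A value that passes this check could then be applied with the documented -pl/--power-limit= switch.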
- Current frequency of graphics (shader) clock.
- Current frequency of SM (Streaming Multiprocessor) clock.
- Current frequency of memory clock.
- User specified frequency of graphics (shader) clock.
- User specified frequency of memory clock.
- Default frequency of applications graphics (shader) clock.
- Default frequency of applications memory clock.
On GPUs from the Fermi family, current P0 clocks (reported in the Clocks section) can differ from max clocks by a few MHz.
- Maximum frequency of graphics (shader) clock.
- Maximum frequency of SM (Streaming Multiprocessor) clock.
- Maximum frequency of memory clock.
- Auto Boost
- Indicates whether auto boost mode is currently enabled for this GPU (On) or disabled for this GPU (Off). Shows (N/A) if boost is not supported. Auto boost allows dynamic GPU clocking based on power, thermal and utilization. When auto boost is disabled the GPU will attempt to maintain clocks at precisely the Current Application Clocks settings (whenever a CUDA context is active). With auto boost enabled the GPU will still attempt to maintain this floor, but will opportunistically boost to higher clocks when power, thermal and utilization headroom allow. This setting persists for the life of the CUDA context for which it was requested. Apps can request a particular mode either via an NVML call (see NVML SDK) or by setting the CUDA environment variable CUDA_AUTO_BOOST.
- Auto Boost Default
- Indicates the default setting for auto boost mode, either enabled (On) or disabled (Off). Shows (N/A) if boost is not supported. Apps will run in the default mode if they have not explicitly requested a particular mode.
- Each entry is of the format "<pid>. <Process name>"
- Used GPU Memory
- Amount of memory used on the device by the context. Not available on Windows when running in WDDM mode, because the Windows KMD manages all the memory, not the NVIDIA driver.
- Supported on Tesla, GRID and Quadro based products under Linux.
- Limited to Kepler or newer GPUs.
- Displays statistics in CSV format as follows:
- <GPU device index>, <metric name>, <CPU Timestamp in us>, <value for metric>
- The metrics to display with their units are as follows:
- Power samples in Watts.
- GPU, Memory, Encoder and Decoder utilization samples in Percentage.
- Xid error events reported with the Xid error code. The error code is 999 for an unknown Xid error.
- Processor and Memory clock changes in MHz.
- Violation due to Power capping with violation time in ns. (Tesla Only)
- Violation due to Thermal capping with violation boolean flag (1/0). (Tesla Only)
- Any statistic preceded by "#" is a comment.
- An unsupported device is displayed as "#<device index>, Device not supported".
- An unsupported metric is displayed as "<device index>, <metric name>, N/A, N/A".
- Power and thermal violation metrics are supported only on Tesla-based products. Thermal violations are limited to Tesla K20 and higher.
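A consumer of `nvidia-smi stats` output needs to distinguish data lines from "#" comments and split out the four documented fields. A minimal, hypothetical sketch (the function name is illustrative; fields are kept as strings since units vary per metric):

```python
def parse_stats_line(line):
    """Parse one line of `nvidia-smi stats` CSV output.
    Returns None for "#" comment lines (including "Device not supported"
    lines); otherwise a dict with the four documented fields:
    <GPU device index>, <metric name>, <CPU timestamp in us>, <value>."""
    line = line.strip()
    if not line or line.startswith("#"):
        return None
    index, metric, timestamp, value = (f.strip() for f in line.split(",", 3))
    return {"gpu": index, "metric": metric, "timestamp_us": timestamp, "value": value}
```

Unsupported metrics come through as regular data lines whose timestamp and value are the literal string "N/A", so callers should check for that before converting to numbers.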
- Displays a matrix of available GPUs with the following legend:
X = Self
SOC = Path traverses a socket-level link (e.g. QPI)
PHB = Path traverses a PCIe host bridge
PXB = Path traverses multiple PCIe internal switches
PIX = Path traverses a PCIe internal switch
- Firmware Version
- The version of the firmware running on the HIC.
- The color of the LED indicator. Either "GREEN" or "AMBER".
- The reason for the current LED color. The cause may be listed as any combination of "Unknown", "Set to AMBER by host system", "Thermal sensor failure", "Fan failure" and "Temperature exceeds critical limit".
- Air temperature at the unit intake.
- Air temperature at the unit exhaust point.
- Air temperature across the unit board.
- Operating state of the PSU. The power supply state can be any of the following: "Normal", "Abnormal", "High voltage", "Fan failure", "Heatsink temperature", "Current limit", "Voltage below UV alarm threshold", "Low-voltage", "I2C remote off command", "MOD_DISABLE input" or "Short pin transition".
- PSU voltage setting, in volts.
- PSU current draw, in amps.
- The state of the fan, either "NORMAL" or "FAILED".
- For a healthy fan, the fan's speed in RPM.
The -a and -g arguments are now deprecated in favor of -q and -i, respectively. However, the old arguments still work for this release.
* On Linux GPU Reset can't be triggered when there is pending GOM change.
* On Linux GPU Reset may not successfully change pending ECC mode. A full reboot may be required to enable the mode change.
* Under Windows WDDM mode, GPU memory is allocated by Windows at startup and then managed directly. Nvidia-smi reports Used/Free memory from the driver's perspective, so in WDDM mode the results can be misleading. This will likely be fixed in the future.
=== Changes between nvidia-smi v331 Update and v340 ===
* Added reporting of temperature threshold information.
* Added reporting of brand information (e.g. Tesla, Quadro, etc.)
* Added reporting of max, min and avg for samples (power, utilization, clock changes). Example commandline: nvidia-smi -q -d power,utilization,clock
* Added nvidia-smi stats interface to collect statistics such as power, utilization, clock changes, xid events and perf capping counters with a notion of time attached to each sample. Example commandline: nvidia-smi stats
* Added support for collectively reporting metrics on more than one GPU, using a comma-separated list with the "-i" option. Example: nvidia-smi -i 0,1,2
* Added support for displaying the GPU encoder and decoder utilizations
* Added nvidia-smi topo interface to display the GPUDirect communication matrix (EXPERIMENTAL)
* Added support for displaying the GPU board ID and whether or not it is a multiGPU board
* Removed user-defined throttle reason from XML output
=== Changes between nvidia-smi v5.319 Update and v331 ===
* Added reporting of minor number.
* Added reporting BAR1 memory size.
* Added reporting of bridge chip firmware.
=== Changes between nvidia-smi v4.319 Production and v4.319 Update ===
* Added new --applications-clocks-permission switch to change permission requirements for setting and resetting applications clocks.
=== Changes between nvidia-smi v4.304 and v4.319 Production ===
* Added reporting of Display Active state and updated documentation to clarify how it differs from Display Mode
* For consistency on multi-GPU boards nvidia-smi -L always displays UUID instead of serial number
* Added machine readable selective reporting. See SELECTIVE QUERY OPTIONS section of nvidia-smi -h
* Added queries for page retirement information. See --help-query-retired-pages and -d PAGE_RETIREMENT
* Renamed Clock Throttle Reason User Defined Clocks to Applications Clocks Setting
* On error, return codes have distinct non-zero values for each error class. See the RETURN VALUE section
* nvidia-smi -i can now query information from a healthy GPU when there is a problem with another GPU in the system
* All messages that point to a problem with a GPU print the PCI bus id of the GPU at fault
* New flag --loop-ms for querying information at higher rates than once a second (can have negative impact on system performance)
* Added queries for accounting processes. See --help-query-accounted-apps and -d ACCOUNTING
* Added the enforced power limit to the query output
=== Changes between nvidia-smi v4.304 RC and v4.304 Production ===
* Added reporting of GPU Operation Mode (GOM)
* Added new --gom switch to set GPU Operation Mode
=== Changes between nvidia-smi v3.295 and v4.304 RC ===
* Reformatted non-verbose output due to user feedback. Removed pending information from table.
* Print out helpful message if initialization fails due to kernel module not receiving interrupts
* Better error handling when NVML shared library is not present in the system
* Added new --applications-clocks switch
* Added new filter to --display switch. Run with -d SUPPORTED_CLOCKS to list possible clocks on a GPU
* When reporting free memory, calculate it from the rounded total and used memory so that values add up
* Added reporting of power management limit constraints and default limit
* Added new --power-limit switch
* Added reporting of texture memory ECC errors
* Added reporting of Clock Throttle Reasons
=== Changes between nvidia-smi v2.285 and v3.295 ===
* Clearer error reporting for running commands (like changing compute mode)
* When running commands on multiple GPUs at once N/A errors are treated as warnings.
* nvidia-smi -i now also supports UUID
* UUID format changed to match UUID standard and will report a different value.
=== Changes between nvidia-smi v2.0 and v2.285 ===
* Report VBIOS version.
* Added -d/--display flag to filter parts of data
* Added reporting of PCI Sub System ID
* Updated docs to indicate we support M2075 and C2075
* Report HIC HWBC firmware version with -u switch
* Report max(P0) clocks next to current clocks
* Added --dtd flag to print the device or unit DTD
* Added message when NVIDIA driver is not running
* Added reporting of PCIe link generation (max and current), and link width (max and current).
* Getting the pending driver model works for non-admin users
* Added support for running nvidia-smi on Windows Guest accounts
* Running nvidia-smi without -q command will output non verbose version of -q instead of help
* Fixed parsing of the -l/--loop= argument (default value, 0, and too-large values)
* Changed format of pciBusId (to XXXX:XX:XX.X - this change was visible in 280)
* Parsing of busId for -i command is less restrictive. You can pass 0:2:0.0 or 0000:02:00 and other variations
* Changed versioning scheme to also include "driver version"
* XML format always conforms to DTD, even when error conditions occur
* Added support for single and double bit ECC events and XID errors (enabled by default with the -l flag; disabled for the -x flag)
* Added device reset -r --gpu-reset flags
* Added listing of compute running processes
* Renamed power state to performance state. Deprecated support exists in XML output only.
* Updated DTD version number to 2.0 to match the updated XML output