Parallel Hardware Acceleration


When rendering on your local workstation, VisIt will automatically take advantage of any available hardware acceleration. No configuration is necessary on the part of the user. However, VisIt can also take advantage of acceleration hardware in a parallel environment. This support has evolved over time, so there are a variety of interfaces for enabling and customizing that support.


VisIt 2.3.0 and Later

  • Supports variable number of GPUs per node
  • Supports intermixing of 'HW' and 'SW' processes on a single node
  • Works with any kind of DISPLAY settings, but requires the user to explicitly set the DISPLAY
  • Can work with existing X servers, or launch them itself

From a user standpoint, the support here is the same as the 1.11 support, but there are a few new command line options and caveats.

  1. -display takes the DISPLAY that VisIt should utilize. As is the case in other contexts, %l is replaced with the GPU number and %n is replaced by the rank of the given process.
  2. -no-launch-x causes VisIt to not attempt to launch X servers. The X servers must still be running, initiated by the cluster administrator or some other process. VisIt will not behave correctly if the X servers are not accessible to the user running the parallel job. This is the default.
  3. -launch-x causes VisIt to launch X servers as needed.
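Putting these options together, an engine with hardware acceleration could be opened from the VisIt CLI like this (a hypothetical sketch modeled on the flags above; the GPU count is an assumption about your nodes, and this requires a running VisIt session):

```python
# Hypothetical: open an engine that uses 2 GPUs per node, lets VisIt
# launch the X servers, and maps each process to display :<GPU number>.
OpenComputeEngine("localhost", ("-hw-accel", "-n-gpus-per-node", "2",
                                "-display", ":%l", "-launch-x"))
```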

Additionally, you can now utilize environment variables for some settings. In all cases, the command line option overrides whatever is given via the environment variable.

  1. VISIT_DISPLAY can be utilized in place of the -display option.
  2. VISIT_X_ARGS can be used in place of the -x-args option. This is very useful if you need to pass multiple arguments to X (i.e. a list of arguments with spaces); the many layers of argument-parsing code in VisIt tend to break the grouping and thereby chop off some arguments.
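For example, the two variables might be set like this before launching VisIt (hypothetical values; adjust for your cluster — note the quoting, which keeps the multi-word X arguments grouped):

```shell
# Hypothetical: equivalent to passing -display and -x-args on the command
# line (command line options still override these if both are given).
export VISIT_DISPLAY=":%l"
export VISIT_X_ARGS="-layout Layout%l -ac"
```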

Running on LLNL's edge

I've had some difficulty setting up a host profile for HW acceleration on LLNL's edge machine. However, I finally found a way that works for my simple engine-timing purposes: set up an X server manually and then tell VisIt to use it.

  1. Allocate an mxterm
  2. Run "xinit -- :0 -ac &"
  3. Run VisIt's CLI and enter the following command to start a serial engine with HW acceleration:
 OpenComputeEngine("localhost", ("-hw-accel", "-n-gpus-per-node", "1", "-display", ":%l"))

If I come up with a better solution to running on edge with HW acceleration, maybe even creating a host profile, I'll document it here.

VisIt 1.11 and Later

  • Supports variable number of GPUs per node
  • Supports intermixing of 'HW' and 'SW' processes on a single node
  • Requires DISPLAY settings of :0, :1, :2, and so on.
  • Wants to launch and control the X servers itself.

VisIt Configuration

The HW acceleration option in the host profile must be enabled, as before. There are two additional command line options to inform VisIt about your cluster's setup.

  • -n-gpus-per-node takes a single integer argument which tells VisIt how many GPUs are available on each node. All nodes must have at least this many GPUs.
  • -x-args allows additional arguments to be used when launching the X server. Be sure to quote the argument if it contains spaces. This string is parsed as a simple format string:
    • %n is replaced with the MPI rank of the process.
    • %l is replaced with the GPU number on that node. It will therefore always be a number between 0 and n-1, where n is the argument given to -n-gpus-per-node.
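The substitution described above can be sketched as follows. This is an illustration of the semantics, not VisIt's actual code, and the round-robin rank-to-GPU mapping shown is an assumption made for the sketch:

```python
# Illustration of %n / %l expansion in -display and -x-args strings.
# NOT VisIt's implementation; the rank-to-GPU mapping (rank modulo the
# GPUs-per-node count) is an assumption for this sketch.

def expand(template, rank, n_gpus_per_node):
    """Replace %n with the MPI rank and %l with the local GPU number."""
    gpu = rank % n_gpus_per_node          # always in [0, n_gpus_per_node)
    return template.replace("%n", str(rank)).replace("%l", str(gpu))

print(expand("-layout Layout%l", 5, 2))   # -> -layout Layout1
print(expand(":%l", 4, 2))                # -> :0
```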

Unfortunately, there is currently no section of the GUI devoted to these options. The author recommends adding these parameters to the 'Additional Options' section of the host profile.

VisIt 1.10 and Below

  • Supports only 1 GPU per node
  • Breaks silently if launched in other configurations, e.g. trying to use two GPUs on one node.
  • Requires a DISPLAY of :0
  • Can work with existing X servers, or launch them itself

Graphical Configuration

Open the 'Advanced' section of the host profile configuration for the cluster of interest and enable the check box for HW rendering. You may also need to configure 'pre' and 'post' commands; these are especially useful for systems which do not normally run an X server, where one must be started by VisIt's job.

Alternatively, you can connect to the back-end nodes directly and start an X server before starting VisIt. In this case, you should leave the 'pre' and 'post' commands empty.

If you know you need to start an X server but have no idea how to get started, try

  xinit /bin/sleep 28800 -- :0 -sharevts -terminate -once -ac

for the 'pre' command, and

  killall xinit

for the 'post' command.

Command line Configuration

The graphical configuration method is just a front end for some command line options. To enable hardware acceleration, you pass the -hw-accel argument when starting VisIt. If you need 'pre' and 'post' commands, pass them as arguments to the -hw-pre and -hw-post options, respectively. Make sure to quote the arguments if they contain any spaces!
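Combining these, a launch might look like the following (a sketch only: the quoted 'pre' and 'post' commands are the xinit examples from the previous section and will vary per site):

```
visit -hw-accel \
      -hw-pre  "xinit /bin/sleep 28800 -- :0 -sharevts -terminate -once -ac" \
      -hw-post "killall xinit"
```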

X Server Configuration

The format specifiers can be used to key into different components of your X configuration file. For example, an X configuration could specify multiple GPUs, each driving a different monitor, organized into two layouts. As an excerpt from an 'Xorg.conf' configuration file:

 Section "ServerLayout"
     Identifier     "Layout0"
     Screen      0  "Screen0" 0 0
     Option         "SingleCard" "true"
 EndSection
 Section "ServerLayout"
     Identifier     "Layout1"
     Screen      1  "Screen1" 0 0
     Option         "SingleCard" "true"
 EndSection
 Section "Monitor"
     Identifier     "Monitor0"
 EndSection
 Section "Monitor"
     Identifier     "Monitor1"
 EndSection
 Section "Device"
     Identifier     "Device0"
     Driver         "nvidia"
     BusID          "PCI:9:0:0"
 EndSection
 Section "Device"
     Identifier     "Device1"
     Driver         "nvidia"
     BusID          "PCI:133:0:0"
 EndSection
 Section "Screen"
     Identifier     "Screen0"
     Device         "Device0"
     Monitor        "Monitor0"
 EndSection
 Section "Screen"
     Identifier     "Screen1"
     Device         "Device1"
     Monitor        "Monitor1"
 EndSection

(please note that irrelevant portions have been culled from this example: the above configuration file will not work unmodified!)

In the above configuration, the cluster has two NVIDIA cards (two Devices) and two display devices (two Monitors) per node. A Screen is configured for each pair of card and display device. Each Screen is, in turn, part of a Layout, and each Layout is named with the appropriate 'GPU number' appended to the end.

For VisIt to key into this configuration, we would want to use the command line option:

 -x-args '-layout Layout%l'

This causes VisIt to pass the '-layout' option when it starts each X server, with that process's GPU number substituted for %l in the layout name.

xinit

Please note that VisIt will use xinit to launch the X servers it needs. xinit is not X, of course, but rather a wrapper which launches the X server and any initial X clients. In particular, there are configuration files through which xinit provides default options for your X servers, notably ${HOME}/.xinitrc and ${HOME}/.xserverrc. There are also sitewide initialization files, but these can vary per-distribution. Look in /etc/X11 on your system and read your distribution's X documentation for more information.
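As a starting point, a ${HOME}/.xserverrc might look like the following. This is a hypothetical example of the standard xinit idiom, not something VisIt requires; the server path and options are assumptions to adapt for your system:

```
#!/bin/sh
# Hypothetical ~/.xserverrc: xinit runs this in place of the X server.
# "$@" forwards the display number and server arguments xinit supplies;
# the server path and -ac option are site-specific assumptions.
exec /usr/bin/X -ac "$@"
```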

Confirming that you are getting hardware acceleration

Use "File | Compute Engines ..." to open a dialog with detailed information on the engine. There is an entry for "Processors using GPUs" that shows how many processors are rendering on the GPU. For best results, this should match the total number of processors.
