Prerequisites and Detailed Setup Information

The setup prerequisites and steps might look daunting, but most of the items are probably already installed on your system, and others, such as Vampir, are needed only if you plan to view MPI traces, for example.

It is assumed that:
  • The following software has already been installed on the user’s machine:
    • One of: Microsoft Windows 7, Microsoft Windows Vista, or Microsoft Windows Server 2008. (We have not tested with Windows XP or earlier.)
    • Microsoft Office 2007, including the Microsoft Excel application (needed only if you use the -plotexcel option)
    • Microsoft Visual Studio 2010.
  • The following functionality is available via a software package of your choice. We have tested with the packages listed below.

To run the Dwarfs on a Windows HPC cluster, you need to place the Dwarfs and the DwarfBench driver on a share that the cluster's nodes can access.
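For example, assuming your local copy lives in D:\Dwarfs and the head node is named Headnode (both placeholder names, matching the examples later on this page), staging to the administrative share might look like:
  • Copy-Item -Recurse D:\Dwarfs \\Headnode\CcpSpoolDir\Dwarfs

Adjust the share and paths to your own cluster layout.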

To build the dwarfs and enable all of the functionality of DwarfBench, the following additional software is required:
  • Microsoft Windows HPC Server 2008 SDK (note: 32-bit and 64-bit versions) - used for building/running MPI code
  • MPI.NET SDK 1.0 - high-performance .NET bindings for MPI from Indiana University
  • SLOG-2 Runtime Environment - support for viewing MPI traffic via Jumpshot (uses Java)
  • Microsoft Windows Performance Tools Kit - support for collecting & viewing system traces
  • Microsoft Windows PowerShell 1.0 - the command shell for running the dwarfs
  • Vampir - an advanced MPI message-traffic viewer
  • .NET Framework 4.0 (may already be installed by Visual Studio)

The above is all you need to run on a multicore machine, collect traces, view message traffic, and so on.


To execute on a Microsoft Windows HPC Server 2008 platform, the following additional software is required:

On the same machine as the parallel dwarfs (a workstation or the cluster's head node, for example):
  • Microsoft Windows HPC Server 2008 Pack client.

And on each node in the cluster:
  • MPI.NET 1.0 Runtime (MPI.NET Runtime.msi)
  • .NET Framework 4.0.

Note: the redistributable C runtime matching your version of Visual Studio needs to be installed on each compute node.
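One way to do this (a sketch, assuming the Visual Studio 2010 x64 redistributable installer, here called vcredist_x64.exe as a placeholder, has been staged in the CcpSpoolDir share) is to push it with clusrun:
  • clusrun \\Headnode\CcpSpoolDir\vcredist_x64.exe /q /norestart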

Installing & running the dwarfs on a cluster involves the following steps:
  • Stage the input data on the cluster. The administrative share is a useful location: \\Headnode\CcpSpoolDir\
  • Execute the administrative command to install each of the required packages. Assuming that the head node of your cluster is named “Headnode” and that you have placed the packages listed above in the CcpSpoolDir share on Headnode, the following commands should be sufficient:
  • clusrun msiexec /quiet /passive /qn /norestart /i "\\Headnode\CcpSpoolDir\MPI.NET Runtime.msi"
  • clusrun dotnetfx40fullx86.exe /q /norestart
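A quick sanity check after the installs complete (a hypothetical example; adjust to your environment) is to use clusrun to confirm that each node reports a .NET 4.0 framework directory:
  • clusrun dir %windir%\Microsoft.NET\Framework\v4*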

Once the prerequisites are in place, you can run the Setup program in the downloaded directory:
  • e.g.: D:\Dwarfs\Setup

This will set up your environment variables, generate the input files for the benchmarks, and configure your PowerShell environment. You can then run Exec-Dwarfbench with the "-cluster" flag (see -help).
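For example, using the D:\Dwarfs directory from the setup step (-help and -cluster are the only flags documented here; -help lists the rest):
  • D:\Dwarfs\Setup
  • Exec-Dwarfbench -help
  • Exec-Dwarfbench -cluster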

Trouble during setup or running the benchmarks

Extensive debug information is written to the .\Logs directory when running the Setup program and when running DwarfBench. Please check the logs for errors or issues first.
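A quick way to scan the logs from PowerShell (a sketch; adjust the path and search terms to match what your logs actually contain):
  • Select-String -Path .\Logs\* -Pattern "error","fail"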

Debugging parallel programs

Visual Studio 2005 and 2008 have a built-in MPI debugger that you can use to debug MPI code locally or on a cluster. Visual Studio 2010 also has a multicore debugger and profiler available. For additional MPI debugging support, you can also try Allinea's DDT-Lite Visual Studio add-in.

For administrative help setting up and managing an HPC cluster, please see the Microsoft Windows HPC Server 2008 documentation.

Last edited Jun 22, 2011 at 8:31 PM by RobertPalmer, version 18
