Parallel Processing

The Magnetics solver is able to run in parallel. It can use shared memory parallelism (SMP), i.e. running with parallel threads on one computer (node), and also distributed memory parallelism (DMP), i.e. running spread over several nodes of a cluster. The combination of both SMP and DMP is also possible and very efficient, e.g. running on a cluster while each node uses parallel threads. The DMP feature is available since version 860. The following sections explain how to set up and run these modes. We will often refer to the file name of the model to be solved; we call this ’modelname’. The modelname is the core of the solver input file names. Such solver input files result if the solve option ’Write Solver Input File’ is run in NX or Simcenter. File extensions are ’.pro’, ’.msh’ and others. If the model is, for example, the tutorial ’Team24’ with the transient solution, then the ’modelname’ will be ’Team24-transient’.

SMP on one Computer

If running on one computer, SMP is automatically active.

DMP on one Computer

If running on one computer, DMP can be activated quite simply as follows:
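The exact way to set the number of processes may differ between versions; as a minimal sketch, assume that a cluster configuration file (modelname.mpi, see the cluster section below) is used with the local computer as the only host and placeholder values for port and work directory:
    mpiexec -hosts 1 localhost 2 -p 19020 -wdir C:\workdir\Magnetics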

The value 2 in the above example will activate two parallel processes on this computer. The value can be set as desired, but the recommendation is not to use more than the number of processors or cores available on the computer. The best DMP performance can be expected if the computer has more than one processor.

DMP on Computer Cluster

To run DMP on a computer cluster, proceed as follows.

  1. All computers (nodes) must have a Windows operating system. We recommend Windows 10 or 11; Windows 7 and Windows Server are also possible.

  2. The nodes must be connected over a network. That network should have very high performance because of the intensive communication between the processes; a low-performance network will lead to very long solve times.

  3. The same user must be defined on each node. For this example the user name is ’mpiuser’. The password for this user must be the same on all nodes, and administrator rights are required for that user.
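    The user can, for example, be created in an administrator CMD shell on each node; the password shown here is only a placeholder and the group name assumes an English Windows installation:
    net user mpiuser MyPassword123 /add
    net localgroup Administrators mpiuser /add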

  4. On each node a work directory (wdir) must be defined at the same path. In this example we use ’C:\workdir\Magnetics’. These directories have to be shared in the network, and the user ’mpiuser’ needs read and write permission on them. Some files from the MAGNETICS installation folder must be copied into this directory. These are

  5. On each node the user ’mpiuser’ must log in. This can easily be done via a remote desktop connection. Using a command shell (CMD), the command ’smpd’ must be started with two arguments. The first argument is ’-p’ and defines the network port through which the solve processes will communicate; choose a free port, we will use 19020. The second argument is ’-d’ and defines the log verbosity; we recommend 3. So, the whole command is this:
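    smpd -p 19020 -d 3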

    Be aware that this smpd process is quite sensitive. In some cases it will hang and must then be stopped and restarted. Also take care to use the correct CMD shell: it must be the CMD of the Windows operating system and NOT the NX command prompt.
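    If smpd hangs, one possible way to stop and restart it from the same CMD shell is, for example:
    taskkill /F /IM smpd.exe
    smpd -p 19020 -d 3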

  6. One node will be defined as the master node. On that master node we will have the solver input files and we will execute the solve command (mpiexec.exe). The master node will spread the child processes to the remaining slave nodes. It will have the highest load because several routines that cannot run in parallel will run there.

  7. The following solver input files must be copied to the master node work dir:
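    For the ’Team24-transient’ example from the introduction these would presumably be the files written by ’Write Solver Input File’, e.g.:
    Team24-transient.pro
    Team24-transient.msh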

  8. A cluster configuration file (modelname.mpi) must be in the work directory of the master node. The entries are these:
    mpiexec -hosts NumNodes Node_1 NumProcessesOnNode_1 Node_2 NumProcessesOnNode_2 ... -p NetworkPort -wdir NameOfSharedWorkdir

    An example of the text content of such a modelname.mpi file follows. Here we use the two nodes ’Gaius’ and ’Caligula’, each running one process, while ’Gaius’ is the master node:
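    Assuming the port from step 5 and the shared work directory from step 4 (the actual values depend on the setup), this could read:
    mpiexec -hosts 2 Gaius 1 Caligula 1 -p 19020 -wdir C:\workdir\Magnetics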

  9. To start the whole parallel solve process, the command ’Magnetics.exe’, followed by some arguments, must be executed in a command shell on the master node. The arguments are explained below.

    An example for that CMD command is this:

  10. The parallel solve will start. On each node the Windows firewall will probably block the new process ’MagneticsSolveC.exe’ or ’MagneticsSolveR.exe’; its execution must then be allowed. After the solve has finished, the result files will be available on the master node.
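    The firewall exception can also be created in advance from an administrator CMD shell on each node; the program paths shown here are only assumptions and must be adapted to where the solver executables actually reside:
    netsh advfirewall firewall add rule name="Magnetics DMP C" dir=in action=allow program="C:\workdir\Magnetics\MagneticsSolveC.exe" enable=yes
    netsh advfirewall firewall add rule name="Magnetics DMP R" dir=in action=allow program="C:\workdir\Magnetics\MagneticsSolveR.exe" enable=yes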

SMP and DMP combined on Computer Cluster

SMP will automatically be active on each node of a cluster solve.