[GE users] Correct accounting with mpich-mx tight integration

Reuti reuti at staff.uni-marburg.de
Fri May 11 11:21:45 BST 2007


On 10.05.2007 at 23:43, Chris Rudge wrote:

> Yes, I'm using mpiexec. I believe that newer versions of PBSPro will
> have functional equivalents to mpiexec as standard.

So in that cluster you see approximately:

wallclock time * no. of CPUs = CPU time ?

Whether wallclock should be just the elapsed time of the job in the
cluster, or should also be multiplied by the number of reserved slots
(some sites charge users for reserved but unused CPUs as well) - both
views are defensible in their own way. But I must admit that the
multiplication by n+1 is worthless. I'm not aware of any built-in
solution in SGE, besides writing a script which honors only the master
task and multiplies (or not) its value by the number of CPUs.
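Such a script could be sketched roughly like this: read each record of
the SGE accounting file, take the master task's wallclock, and multiply
it by the granted slot count. This is only an illustration, not a tested
tool - the accounting file is colon-delimited, but the field positions
used below are assumptions; check the accounting(5) man page of your
SGE version before relying on them.

```python
# Sketch of the script described above: charge wallclock * slots per job.
# Field positions are ASSUMED; verify against accounting(5) for your SGE.

def charged_cpu_time(accounting_line):
    """Return wallclock * slots for one colon-delimited accounting record."""
    fields = accounting_line.rstrip("\n").split(":")
    ru_wallclock = float(fields[13])  # assumed position of ru_wallclock
    slots = int(fields[34])           # assumed position of granted slots
    return ru_wallclock * slots

# Synthetic example record (only the two fields we read are meaningful):
record = ":".join(["x"] * 13 + ["3600"] + ["x"] * 20 + ["8"] + ["x"] * 10)
print(charged_cpu_time(record))  # 3600 s wallclock * 8 slots -> 28800.0
```

Deciding whether to multiply at all (i.e. charging reserved-but-unused
CPUs or not) would then just be a switch in such a script.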

-- Reuti


>> Are you using the plain mpirun in the PBSPro cluster (and hence the
>> accounting is only done for the master process as there is still no
>> qrsh AFAIK), or the mpiexec replacement from
>> http://www.osc.edu/~pw/mpiexec/index.php to use the TM interface to
>> start the tasks on the slave nodes? (based on my Torque knowledge)
>>
>> -- Reuti
>>
>> ---------------------------------------------------------------------
>> To unsubscribe, e-mail: users-unsubscribe at gridengine.sunsource.net
>> For additional commands, e-mail: users-help at gridengine.sunsource.net
