[GE users] User Time + System Time != Wall Clock Time

Reuti reuti at staff.uni-marburg.de
Thu Apr 3 16:25:50 BST 2008


Hi,

On 03.04.2008 at 12:24, Azhar Ali Shah wrote:
> Running a parallel job with MPICH2-1.0.7 + SGE, requesting 4
> processors on my cluster, gives the following statistics:
>
> Job 152 (DS1001-4P) Complete
> User = aas
> Queue = all.q at xxxx
> Host = smeg.cs.nott.ac.uk
> Start Time = 04/02/2008 20:07:37
> End Time = 04/03/2008 00:09:55
> User Time = 00:00:18
> System Time = 00:00:04
> Wallclock Time = 04:02:18
> CPU = 00:00:22
> Max vmem = 8.551M
> Exit Status = 0
>
> I wonder why the user and system times are so small compared to the
> wall clock time. Before this, I ran the same task with the same data
> as a sequential job on a single machine, which gave the following
> statistics:
>
> Job 35 (batchjob.sh) Complete
> User = aas
> Queue = all.q at xxxx
> Host = smeg.cs.nott.ac.uk
> Start Time = 03/06/2008 17:01:34
> End Time = 03/08/2008 04:50:20
> User Time = 1:01:18:28
> System Time = 06:07:43
> Wallclock Time = 1:11:48:46
> CPU = 1:07:26:11
> Max vmem = 398.684M
> Exit Status = 0
>
> With the number of processors being 4 in the parallel job, I can
> believe the wall clock time, but I can't understand the values of
> user and system time in the parallel version above. Any thoughts?

these are the typical symptoms when your application is not tightly  
integrated into SGE. Can you check with "ps -e f" that you are a)  
using SGE's rsh command and b) all child processes are bound to  
sge_execd? Using the plain system's /usr/bin/rsh or ssh will  
otherwise lead to such behavior. If you need ssh, you have to  
recompile SGE on your own to get a custom-built ssh including the  
tight integration facility.
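As a sketch of that check (the process tree shown in the comments is
illustrative; the exact layout depends on how MPICH2 starts its tasks):

```shell
# On the execution host, show the full process tree (Linux procps syntax):
ps -e f

# Under tight integration the MPI processes hang off sge_shepherd, which
# is itself a child of sge_execd, roughly like this (illustrative):
#
#   sge_execd
#    \_ sge_shepherd-152
#        \_ /bin/sh .../job_scripts/152
#            \_ mpiexec ...
#
# Processes that instead appear under sshd or init were started via plain
# rsh/ssh, outside SGE's control -- hence the missing CPU accounting.
```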

(BTW: the wallclock time looks more like you used 8 cores IMO)
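A rough back-of-the-envelope check of that estimate, using the wallclock
times from the two accounting records above:

```shell
# Sequential wallclock (Job 35):  1 day 11:48:46 = 128926 s
# Parallel wallclock (Job 152):         04:02:18 =  14538 s
seq=$(( 1*86400 + 11*3600 + 48*60 + 46 ))
par=$((            4*3600 +  2*60 + 18 ))
echo $(( seq / par ))   # → 8, i.e. close to an 8-way speedup
```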

-- Reuti

---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe at gridengine.sunsource.net
For additional commands, e-mail: users-help at gridengine.sunsource.net