[GE users] Nice level on processes

Reuti reuti at staff.uni-marburg.de
Fri Apr 1 18:03:37 BST 2005



Quoting Brian R Smith <brian at cypher.acomp.usf.edu>:

> Hey guys,
> I was just curious... I'm running SGE on a COTS-style beowulf and I was
> wondering what others have done with regards to the nice level of
> processes in their queue configuration.  
> On my more up-to-date SMP clusters, I typically set the nice level just
> below the nice for rsh (for mpich, administration) and sge_execd.  For
> the less powerful, single-processor COTS machines, I usually drop it to
> around -7, a few ticks below sge_execd and right on par with the nice I
> have set for the rsh servers on each node.

You mean your users' computations are running at -7? User processes should only 
be in the range 0 to 19. As you noticed, there is no difference with just one 
process per CPU whether it runs at nice 0 or nice 19. Only when more user 
processes are competing will you see the relative share of CPU time each 
process gets change. Having user processes at -7 might block system processes.
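To illustrate the point that nice is irrelevant without contention: a job on an otherwise idle CPU completes just as fast at nice 19 as at nice 0. A quick sanity check (assumes GNU coreutils, where `nice` with no arguments prints the current niceness):

```shell
# Start a shell at nice 19 and have it report the niceness it actually
# inherited; the job itself runs normally, only its scheduling weight
# under contention is reduced.
nice -n 19 sh -c 'echo "running at nice $(nice)"'
# -> running at nice 19
```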

(Are you running SGE 5.3, or did you deactivate reprioritization in 6.0?)
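For reference, the nice value SGE applies to jobs is the "priority" attribute in the queue configuration (shown by `qconf -sq <queue>`); the queue name below is just an example:

```
# Excerpt of a queue configuration; "priority" is the nice level added
# to every job dispatched to this queue -- keep it in 0..19 for user jobs.
qname                 all.q
priority              19
```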

> My tests have shown that there is no apparent performance hit as a
> result of this, granted, these nodes are used strictly for computations
> so there should not be too many interrupts plaguing my processes at run-
> time.  However, I've noticed that on the cheaper COTS AMD boxes, memory
> and disk hungry processes that are not reniced will cause those types of
> machines to stop responding to any and all requests for some period of
> time, often triggering my pager saying that a node is down.  Having the
> processes reniced as I have done seems to have alleviated this issue.
> Has anyone else run into a similar scenario?

Yes, I've seen this behavior when the whole memory space (real RAM + scratch 
disk swap) was completely eaten up by the user process.
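One possible mitigation (a sketch; the limit value is an example and assumes a standard SGE complex setup) is to set a hard virtual-memory limit in the queue configuration, so a runaway job is killed before it exhausts RAM plus swap and takes the node down:

```
# Queue configuration excerpt (edit with `qconf -mq <queue>`); jobs
# exceeding the hard limit are terminated by the execd.
h_vmem                900M
```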

Cheers - Reuti

To unsubscribe, e-mail: users-unsubscribe at gridengine.sunsource.net
For additional commands, e-mail: users-help at gridengine.sunsource.net
