[GE users] Scheduler policies

reuti reuti at staff.uni-marburg.de
Tue Aug 25 21:21:16 BST 2009

Am 25.08.2009 um 19:51 schrieb jfprieur:

> Thank you for your quick answer, yes I realise my question is huge,  
> your answer is exactly what I was looking for.
> I am happy to see that my initial gut instinct was right, I had  
> created a 'serial' pe for this purpose so will go ahead and set it  
> up that way.

I would find it confusing to call a PE "serial". Jobs using more than  
one thread are parallel jobs. To avoid setting the %nprocs= directive  
by hand, you could use something like this awk script, as the value  
has to be set after each --Link1-- command anyway:

#$ -m ea
#$ -A g03_parallel
#$ -R y
#$ -pe smp 2
export g03root=/opt/chemsoft/$ARC/Gaussian_03_D.01_binary
. $g03root/g03/bsd/g03.profile
awk -v mynproc="%nprocs=$NSLOTS" -f ~soft/scripts/subg03_nproc_rwf.awk < /home/reuti/test363.com > test363.in.parallel
g03 < test363.in.parallel > /home/reuti/test363.log

with the awk script (a copy of the original input file will be  
created and changed, leaving the original file untouched):

BEGIN          { if (mynproc)
                     { print mynproc } }

/^ *%nproc/    { next }

/^ *-*Link1/   { print
                 if (mynproc)
                     { print mynproc }
                 next }

{ print }
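To illustrate what the script does, here is a self-contained demonstration using an inline copy of the same awk logic and a made-up two-step Gaussian input (file names and route lines are purely illustrative, not from the original post):

```shell
#!/bin/sh
# Create a sample two-step Gaussian input with hard-coded %nproc lines.
cat > sample.com <<'EOF'
%nproc=1
# HF/STO-3G

water energy

0 1
O

--Link1--
%nproc=1
# MP2/STO-3G
EOF

# Same logic as the script above: drop any existing %nproc lines and
# insert %nprocs=$NSLOTS at the top and after every --Link1-- command.
NSLOTS=2
awk -v mynproc="%nprocs=$NSLOTS" '
BEGIN          { if (mynproc) { print mynproc } }
/^ *%nproc/    { next }
/^ *-*Link1/   { print
                 if (mynproc) { print mynproc }
                 next }
               { print }
' sample.com > sample.in.parallel

# The transformed copy now has %nprocs=2 at the top and after --Link1--,
# while sample.com itself is left untouched.
cat sample.in.parallel
```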

-- Reuti

> Will look into the scheduling stuff man pages with your info.
> Once again, thanks for the help,
> JF
> 2009/8/25 templedf <dan.templeton at sun.com>
> Wow.  That's a lot of background information you're asking for.  Let's
> see if I can give you the summary version.  (You can get the details
> from the docs.)
> Slots are an attribute of the queue.  Slots are also a complex and can
> be set as a complex at the host level if you want to limit the  
> number of
> slots in use across all the queues on a host.  Grid Engine does not
> limit a job to the number of slots that it requests.  It's up to the
> user to be honest about how many slots are needed.  For SMP jobs, you
> can create an SMP PE that lets you request multiple slots on a single
> machine.
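[Editor's note: an SMP PE along the lines Daniel describes might look like the fragment below (created and edited with qconf -ap smp). The slot count is illustrative; the key attribute is allocation_rule $pe_slots, which keeps all of a job's slots on one host:]

```
pe_name            smp
slots              8
user_lists         NONE
xuser_lists        NONE
start_proc_args    /bin/true
stop_proc_args     /bin/true
allocation_rule    $pe_slots
control_slaves     FALSE
job_is_first_task  TRUE
urgency_slots      min
```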
> In the scheduler configuration (sched_conf(5)), you'll find the
> load_formula attribute.  If you want to fill up hosts, you can set  
> it to
> "-np_load_avg", and it will choose the host with the highest load
> average for each job.  Another solution is to set the
> job_load_adjustments to "NONE".  That will have the scheduler pick the
> least loaded host and put as many jobs there as it can.
> Daniel
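[Editor's note: a sketch of the sched_conf(5) attributes Daniel refers to, as they would appear when editing the scheduler configuration with qconf -msconf (all other attributes left at their defaults):]

```
load_formula           -np_load_avg
job_load_adjustments   NONE
```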
> jfprieur wrote:
> > Hello,
> >
> > I am a relative noob to the field of cluster computing, I have
> > successfully deployed a Rocks cluster (20 nodes, 8 cores and 8GB/ 
> node,
> > default SGE install puts 8 slots/nodes which is fine) and jobs are
> > running fine on a basic level.
> >
> > I am now starting to tweak the queue configurations. This cluster  
> only
> > runs serial jobs for now but some of those eg. Gaussian, use SMP.  
> The
> > problem right now is that if the user sets 4 CPU's in his input  
> file,
> > when he qsubs a job, it still only takes 1 slot. I find a lot of
> > references to setting the number of slots for parallel jobs, not so
> > much for serial SMP jobs. I woke up this morning with one node  
> having
> > three 8 CPU jobs and an 18 load factor! ;)
> >
> > Would it be as simple as adding -l slots=x to the user's  
> submission script?
> >
> > I also see a lot of references about configuring the slots, ideally
> > each slot on my machine would be 1 CPU and 1GB, where would this be
> > configured, complexes?
> >
> > Finally on the "Managing the scheduler" wiki page it states: "The
> > scheduler looks for queue instances on the least-loaded hosts that
> > meet the resource requirements of the first job in line." Is  
> there any
> > way to change this to the scheduler looks for the most-loaded host
> > that meets the resource requirements. I would like nodes to fill up
> > instead of everything being spread out, or is this a bad idea?
> >
> > I have been reading through the documentation, it is fantastic but
> > slightly overwhelming for a new user.
> >
> > Thanks for your help,
> > JF Prieur
> > Research assistant for Dr. Guillaume Lamoureux,
> > Department of Chemistry and Biochemistry,
> > Concordia University, Montreal, QC, CANADA
> ------------------------------------------------------
> http://gridengine.sunsource.net/ds/viewMessage.do?dsForumId=38&dsMessageId=214219
> To unsubscribe from this discussion, e-mail: [users-unsubscribe at gridengine.sunsource.net].


