[GE users] PE configuration for 4, 8 and new 12 core nodes

reuti reuti at staff.uni-marburg.de
Mon Nov 22 12:35:37 GMT 2010



On 22.11.2010 at 12:29, rems0 wrote:

> Hi reuti,
> 
> thanks for your answer! Below more questions ...
> 
> 
> On 11/18/2010 07:46 PM, reuti wrote:
>> On 18.11.2010 at 17:55, rems0 wrote:
>> 
>>> Hi all,
>>> 
>>> This is on openSUSE 11.3 64 bit, with GE 6.2u5.
>>> Up to now we had nodes with 4 and 8 cores.
>>> We almost only run parallel jobs on 4, 8, 16 or any multiple of 4 slots.
>>> We have implemented this by defining PEs with a fixed allocation number
>>> of 4 or 8 slots.
>> 
>> If you want the least number of nodes and always want to fill them up, what about getting rid of all the PEs and having only one with $fill_up, and in addition submitting all jobs with a request for an exclusive boolean complex? Whether this works depends on the slot counts you request and the number of machines you have, as complete nodes are then always needed to make up the total slot count. This might or might not be what you want.
> 
> If I set only one PE with $fill_up, how can I control/configure to which 
> nodes the jobs get scheduled?

Unfortunately SGE will only use the PE's defined allocation rule; it can't be configured in a more sophisticated way.
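As a sketch, the single-PE setup suggested above could look like this (the PE name "fillup" and host name "node01" are placeholders; the "exclusive" boolean complex is the one introduced with SGE 6.2u3 and has to be attached to the exec hosts):

$ qconf -sp fillup
pe_name            fillup
slots              9999
allocation_rule    $fill_up
control_slaves     TRUE
job_is_first_task  FALSE

$ qconf -me node01
...
complex_values     exclusive=true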


> Lets say there are free nodes with 4, 8 and 12 free slots.
> If we submit a job with asking for 8 slots, will it go to a node with 8? 

The only way I see to do it is:

-masterq "*@@cores8" -l exclusive


> The same with a 4 or 12 slot job, will those be scheduled to a 4/12 
> slots node? (always assuming there are 4/8/12 slots nodes free)

-masterq "*@@cores12" -q "*@@cores4" -l exclusive
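Put together, a complete submission for e.g. the 12-slot case could look like this (the PE name "fillup" and the job script "job.sh" are placeholders):

qsub -pe fillup 12 -masterq "*@@cores12" -l exclusive=true job.sh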

Well, there is a bug in u5 which adds the @cores12 request to the slave tasks as well.

As I said, whether this suits your needs depends on the jobs in your workflow and the available machines. It may be better to stick with many PEs.
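For comparison, one of the fixed-allocation PEs you have now would look roughly like this (again a placeholder name; the integer allocation rule forces exactly 8 slots per node):

$ qconf -sp pe8
pe_name            pe8
slots              9999
allocation_rule    8
control_slaves     TRUE
job_is_first_task  FALSE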


> And if we submit a 16 core job, how can we setup that we prefer this one 
> to be scheduled first to 2x8 slots nodes, then to 2x12 slots and last to 
> 4x4 slots?

Preferring one combination is not easy, as there is no soft "-masterq" list like there is for the normal soft queue list.

-masterq "*@@cores8" -l exclusive

-- Reuti


> Possible?
> 
> Or how will the scheduler "decide" without setting any preferences?
> 
> Thanks again,
> Richard
> 
> 
> -- 
> Richard Ems       mail: Richard.Ems at Cape-Horn-Eng.com
> 
> Cape Horn Engineering S.L.
> C/ Dr. J.J. Dómine 1, 5º piso
> 46011 Valencia
> Tel : +34 96 3242923 / Fax 924
> http://www.cape-horn-eng.com
> 
> ------------------------------------------------------
> http://gridengine.sunsource.net/ds/viewMessage.do?dsForumId=38&dsMessageId=297603
> 
> To unsubscribe from this discussion, e-mail: [users-unsubscribe at gridengine.sunsource.net].
>

------------------------------------------------------
http://gridengine.sunsource.net/ds/viewMessage.do?dsForumId=38&dsMessageId=297623




More information about the gridengine-users mailing list