[GE users] per-host vs. per-queue consumable

isakrejda isakrejda at lbl.gov
Tue Nov 3 17:36:59 GMT 2009


I did as Reuti suggests: defined a parallel environment with
allocation_rule $pe_slots, and it's working quite well for
threaded jobs for us. I named the PE "single" (as a threaded job runs on
a single node):
qconf -sp single
pe_name            single
slots              100000
user_lists         NONE
xuser_lists        NONE
start_proc_args    /bin/true
stop_proc_args     /bin/true
allocation_rule    $pe_slots
control_slaves     FALSE
job_is_first_task  TRUE
urgency_slots      max
accounting_summary TRUE

For submission, users add -pe single <number of threads>.
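For completeness, the PE also has to be attached to a queue's pe_list
before jobs can request it. A minimal sketch (the queue name all.q and
script name job.sh are just examples, not from the setup above):

# add the PE to the queue's pe_list (all.q is an assumed queue name)
qconf -mattr queue pe_list single all.q

# submit a threaded job requesting 8 slots on a single node
qsub -pe single 8 job.sh

With allocation_rule $pe_slots all 8 slots are allocated on one host,
which is what a multithreaded job needs.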

A slots value of 16 would probably have been enough - that's how many
slots per node I have. I just copied a suggestion and it works; I am not
sure whether the other parameters of the PE definition could be set
better. But as is, it works...


reuti wrote:
> Am 03.11.2009 um 10:08 schrieb murple:
>> reuti wrote:
>>> Hi,
>>> Am 02.11.2009 um 10:05 schrieb murple:
>>>> to account for used cores I have added a per-host consumable. This
>>>> mostly works. But now I need to bypass this check for some jobs.
>>>> Is it possible to define an additional queue for these jobs and  
>>>> "give"
>>>> this queue more of the complex? Or is there another way of bypassing
>>>> this resource check?
>>> why did you set up a custom complex for the slots? This can be setup
>>> in an RQS (resource quota set) which you could adjust for certain
>>> users or hosts.
>> The solution using a complex is what this list suggested.
> No, just define a PE and request the necessary number of slots (and  
> for SMP job with allocation_rule $pe_slots). No custom complex  
> necessary.
>> Or at least
>> what I understood when I asked how to account for users running
>> multithreaded programs. I don't know anything about RQS. Do they  
>> exist
>> in 6.2?
> Yep.
> -- Reuti
>>> In fact, having a custom complex makes it quite easy to bypass:
>>> $ qsub -l mycores=0 ...
>> I just noticed that myself. Luckily all my users are well-behaved.
>> regards, Andreas
>> ------------------------------------------------------
>> http://gridengine.sunsource.net/ds/viewMessage.do?dsForumId=38&dsMessageId=224778
>> To unsubscribe from this discussion, e-mail: [users-unsubscribe at gridengine.sunsource.net].
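For reference, the RQS alternative Reuti mentions (created with
qconf -arqs) could look something like the sketch below; the rule name
and the limit of 16 slots are example values, not taken from any actual
setup in this thread:

{
   name         slots_per_host
   description  "Example: cap slots per user on every host"
   enabled      TRUE
   limit        users {*} hosts {*} to slots=16
}

Unlike the custom-complex approach, such a quota can be relaxed for
selected users or hosts by adding further limit rules, so no per-job
bypass like -l mycores=0 is needed.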



More information about the gridengine-users mailing list