[GE users] scheduler crashes on applying qrs for queued job?

Reuti reuti at staff.uni-marburg.de
Tue Sep 18 20:55:41 BST 2007


Hi Andreas,

On 18.09.2007 at 17:39, Andreas.Haas at Sun.COM wrote:

> On Mon, 17 Sep 2007, Reuti wrote:
>
>>>> Resource quotas are targeting consumable and fixed complexes for  
>>>> now. Making h_rt consumable is not really an option, but it  
>>>> would be possible to define two queues on @pvmhosts with  
>>>> different user_lists set. The time you set there (in the queue  
>>>> definition) for h_rt in one of them will also be enforced. Maybe  
>>>> you have to limit the total slot count also in the exechost  
>>>> definition, as you have now (at least) two queues per machine.
>>> I see no indication Henk actually made h_rt a consumable, so the  
>>> case should work.
>> This I missed here :-/ After arriving home and rethinking it:  
>> wouldn't this mean a per-job limit on what he requests?
>>
>>>>> limit        users testproject hosts @pvmhosts to h_rt=600
>
> It is a static limit that is applied to all jobs of user  
> testproject for @pvmhosts. It is per job, true, but that holds  
> for all non-consumable (= static) limits.

okay, I see. I wasn't aware of it.

> ... only deficiency is that this limit is not (yet) enforced by  
> execd's since resource quota limits generally are not propagated to  
> execd during job delivery :-o

But AFAICS h_rt would only be checked if it's requested in qsub.  
At least this is what happens right now, as I just tried it with an  
INT (non-consumable) fixed value set on some nodes. If it's not  
requested, the job can run on any machine - independent of whether  
a value is set for a particular machine in the resource quota set  
or not. But if it is requested in qsub, then it must be requested  
with the specified value (or lower, in my case, as the relation in  
the complex definition is <=) in the resource quota definition for  
a user for this machine/hostgroup. This is what I expected.
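
As a small sketch of what I tried (the quota set name, the job  
script name and the exact values are only placeholders here), a  
resource quota set like

   {
      name     rt_testproject
      enabled  TRUE
      limit    users testproject hosts @pvmhosts to h_rt=600
   }

added e.g. with "qconf -arqs", then behaves like this on submission:

   qsub -l h_rt=600 job.sh    # at or below the quota: may go to @pvmhosts
   qsub -l h_rt=1200 job.sh   # above the quota: not scheduled to @pvmhosts
   qsub job.sh                # h_rt not requested: quota not checked at all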

The resource quotas are checked against the requests, but there  
isn't any way to honor them *after* the job has already been  
scheduled (so that they suddenly act as a limit). And IMO this  
wouldn't be good, as long as they are intended to be quotas.


>> All users in testproject using @pvmhosts may request h_rt=600 in  
>> total, or with {testproject} each user in this userlist may. But  
>> a per-job limit isn't implemented so far - or did I miss it? I  
>> filed:
>>
>> http://gridengine.sunsource.net/issues/show_bug.cgi?id=2147
>
> As I understand it, the #2147 job scope would aim at something  
> different. The idea of
>
>    limit        users testproject hosts @pvmhosts to h_rt=600
>
> is just to apply -l h_rt=600 only on those jobs from user testproject.

You mean to apply it even though it's not requested in qsub? If the  
user already requests -l h_rt=..., it should of course work as usual.

@Henk: did you try to request it in the qsub command?

> If h_rt were enforced, it would allow using different resource  
> limits for different users.

I see that it would be nice, but I would call this a "resource  
limit set", not a "resource quota set". As it could indeed be put  
there for easy handling, I would like a clear indication of whether  
it's a quota or a limit. Hence an entry in the resource quota set:

type quota | limit

in the definition - the first checked *before* scheduling, and the  
second enforced *after* scheduling (even if it wasn't requested at  
all). While h_rt is normally not consumable, making h_vmem  
consumable is a common configuration, so automatically treating  
all h_*/s_* values as limits is not an option. The "type quota"  
would be the total of h_vmem (in case it is a consumable) for e.g.  
a user in the cluster, and a second entry with "type limit" would  
be a per-job limit for his jobs.
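
Just to illustrate the idea (the "type" keyword is of course purely  
hypothetical and not part of the current sge_resource_quota syntax,  
and the memory values are made up):

   {
      name     vmem_total_per_user
      type     quota      # checked against the requests *before* scheduling
      enabled  TRUE
      limit    users {*} to h_vmem=64G
   }
   {
      name     vmem_per_job
      type     limit      # enforced *after* scheduling, even if not requested
      enabled  TRUE
      limit    users {*} to h_vmem=4G
   }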

Applying the cluster-wide total "quota" as a limit to each job  
would make no sense - most likely it's more memory than any single  
node has. This again goes in the direction of issue 2147 - a  
quota/limit per job, even though it's a consumable.


> This can already be done, though not without an additional queue.
>
>> and (http://gridengine.sunsource.net/issues/show_bug.cgi?id=2148)  
>> some time ago, which cover this.
>
> I agree that defining an allowed value range for each job is  
> related to Henk's use case, but #2148 goes beyond that. Not sure  
> how it could be woven in, but with consumable resources you may  
> want to specify both a capacity and a value range. Maybe a new  
> 'range' clause such as
>
>    range        to mycomplex=(2,10,2)
>    limit        hosts @pvmhosts to mycomplex=100
>
> could be a solution for this. According to your range definition  
> in #2148, it would mean jobs may request only 2, 4, 6, 8, or 10  
> of mycomplex, and for @pvmhosts there is a total mycomplex limit  
> of 100.

Agreed!

With my initial proposal we would, for now, need two resource quota  
sets: one for the range and one for the total quota.
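
Roughly like this (again only a sketch with the hypothetical "type"  
entry from above; the step of 2 from the range itself couldn't be  
expressed this way, only an upper bound per job):

   {
      name     mycomplex_per_job
      type     limit
      enabled  TRUE
      limit    hosts @pvmhosts to mycomplex=10
   }
   {
      name     mycomplex_total
      type     quota
      enabled  TRUE
      limit    hosts @pvmhosts to mycomplex=100
   }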

-- Reuti


>
> Regards,
> Andreas
>

---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe at gridengine.sunsource.net
For additional commands, e-mail: users-help at gridengine.sunsource.net



