[GE users] queue consumable resources

Alan Rogers alan.rogers at rsp.com.au
Fri May 27 02:37:07 BST 2005


Hi Reuti,

thank you for the response.

(comments below)

Reuti wrote:

>Alan,
>
>What you see is the intended behavior. The complex will be seen (and counted) 
>for each instance of the cluster queue individually. It's very similar to 
>setting the complex_values in the definition of the execution host if you have 
>only one queue per host. But the latter will be shared by all queue instances 
>on that host, whereas the former counts only inside the individual queue instance.
>
>I also saw your other post regarding the mix of floating and node-locked 
>licenses. Are you using a license server in any form?
>  
>
Yes, we are using a flexlm license server, and we were planning on using 
a load_sensor to keep this value up to date (whether as a load value or a 
consumable resource).
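
(For reference, the kind of load sensor we have in mind is roughly the 
sketch below. It is untested, and the licence server address 
(27000@licserver), the flexlm feature name and the complex name "shake" 
are just placeholders for whatever the site actually uses; it would be 
registered via the load_sensor parameter of the cluster configuration, 
i.e. qconf -mconf.)

#!/bin/sh
# flexlm load sensor sketch: report the number of free floating licenses
# as the value of the consumable complex "shake" on the "global" host.
LMSTAT="lmutil lmstat -c 27000@licserver -f shake"

while read request; do
    # execd sends "quit" when the sensor should shut down
    [ "$request" = "quit" ] && exit 0

    # lmstat prints e.g.
    # "Users of shake:  (Total of 10 licenses issued;  Total of 7 licenses in use)"
    line=`$LMSTAT | grep "Users of shake"`
    issued=`echo "$line" | sed 's/.*Total of \([0-9]*\) licenses issued.*/\1/'`
    inuse=`echo "$line" | sed 's/.*Total of \([0-9]*\) licenses in use.*/\1/'`
    free=`expr $issued - $inuse`

    echo begin
    echo "global:shake:$free"
    echo end
done

As far as I understand, when a complex is both consumable and reported by 
a load sensor the scheduler uses the more restrictive of the reported value 
and its own internal booking, which is what we want here.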

>This seems to be another case of a request to limit the number of used queue 
>instances in a cluster queue (i.e. the allowed number of instances in use at 
>once would be the number of floating licenses). There is already an RFE for it. 
>This way you could set up one cq for the node-locked machines (this is just the 
>hostgroup for the cq as now) and a cq for the floating licenses (limiting the 
>number of used queue instances to your number of licenses).
>
>  
>
Is this possible with GE 6.0u4, or must I wait for the RFE? (Can you 
point me to info on this RFE?)

The situation that is giving us trouble at the moment:

* we have 0 floating licenses available (they are all in use, either by 
jobs on the grid or by users running the software locally).
* there are OS X machines (which essentially have free node-locked licenses).

In this situation the jobs should be able to run on the OS X machines 
(provided they are not already busy).

We have been unsuccessful in implementing this functionality thus far.
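
To make it concrete, the effect we keep running into is roughly the 
following (the complex name, host name and counts are just placeholders):

# floating pool tracked on the "global" host, one node-locked seat on an OS X box
qconf -mattr exechost complex_values shake=4 global
qconf -mattr exechost complex_values shake=1 osx01

# once the floating pool is used up, a job submitted with
#     qsub -l shake=1 job.sh
# stays pending even though osx01 still has its node-locked seat free,
# because the consumable is checked at every level where complex_values
# is defined, and the global level is already exhausted.

So as far as I can tell we would need either two separate complexes (say 
one for the floating pool and one for the node-locked seats, with jobs 
requesting one or the other) or something like the RFE you mention above.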

sorry if this is a bit vague..... feeling a little delicate this morning

cheers,
alan





>Cheers - Reuti
>
>
>Quoting Alan Rogers <alan.rogers at rsp.com.au>:
>
>  
>
>>I have a question about queue consumable resources.
>>
>>When I do the following to a queue domain:
>>
>>qconf -mattr queue complex_values shake=1 shake.q@@r2
>>
>>shake is my consumable complex with the following settings:
>>
>>#name               shortcut   type        relop requestable consumable default  urgency
>>shake               shk        INT         <=    YES         YES        0        0
>>
>>I would expect the resource to be shared between all hosts in the queue 
>>domain shake.q@@r2.
>>
>>i.e. when a job runs on one of the hosts in that domain using the shake=1 
>>resource, I would expect shake=0 for all hosts in the queue domain.
>>
>>However I see shake=0 only on the host that runs the job, and shake=1 on 
>>all of the other hosts in the queue domain.
>>
>>to me this seems like incorrect behaviour?
>>
>>am I wrong?
>>
>>thanks,
>>alan

-- 
alan rogers
"l'esprit d'escalier"
software developer / system administrator - alan.rogers at rsp.com.au
----------------------------------------------------------------
rising sun pictures - www.rsp.com.au
redefining visual effects delivery
----------------------------------------------------------------
direct line +61 2 9338 6486
mobile ph +61 408 846 098
----------------------------------------------------------------
our adelaide phone number & address has
changed, please update your records..........
----------------------------------------------------------------
adl ph +61 8 8400 6400 - fx +61 8 8400 6401
level 1, 133 gouger street, adelaide, 5000
----------------------------------------------------------------
syd ph +61 2 9338 6400 - fx +61 2 9338 6401
15/16 charles street, redfern, sydney, 2016
----------------------------------------------------------------
rising sun research - http://research.rsp.com.au
---------------------------------------------------------------- 



