[GE users] sge and license usage

Gavin Burris bug at sas.upenn.edu
Mon Oct 20 19:18:21 BST 2008



This is exactly what I was looking for.  Thanks so much Reuti.


		'qconf -me node##' for all nodes and set
			load_scaling NONE
			complex_values slots=8
		'qconf -msconf' and set
			load_formula slots
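
For reference, the same settings can be applied without opening an editor
(a rough, untested sketch; adjust the host list to your cluster):

	# set the true slot count as complex_values on every execution host
	for h in node01 node02 node03 node04; do
	    qconf -mattr exechost complex_values slots=8 "$h"
	done

	# load_scaling and the scheduler formula still go in via the editors:
	#   qconf -me <host>   ->  load_scaling  NONE
	#   qconf -msconf      ->  load_formula  slots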


Cheers.


Reuti wrote:
> On 20.10.2008 at 16:45, Gavin Burris wrote:
> 
>> Thank you for the link.  That looks promising for future releases.
>>
>> I think I may still be able to accomplish my goal by having one queue
>> per node.  Each of my nodes has 8 cores/processors/slots.
>>
>> So now my question is:  what is the best way to chain queues so they
>> fill up in series?  Fill 8 slots of node01.q, then node02.q, then
>> node03.q, etc.
> 
> You can use Stephan's "fill up" method:
> 
> http://blogs.sun.com/sgrell/entry/grid_engine_scheduler_hacks_least
> 
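> Roughly, with one queue per node as you describe, it would look like this
> (untested sketch; the queue names are only examples):
> 
> 	# give each queue a sequence number in the order they should fill up
> 	qconf -mattr queue seq_no 1 node01.q
> 	qconf -mattr queue seq_no 2 node02.q
> 	qconf -mattr queue seq_no 3 node03.q
> 
> 	# and let the scheduler sort by it:
> 	#   qconf -msconf   ->  queue_sort_method  seqno
> 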
> ============================
> 
> If you have this working and more types of jobs in the cluster, you could
> enhance it by filling jobs of type A from one side of the cluster and
> other types of jobs from the other side. For this you would need two
> queues with sequence numbers running in opposite directions and the
> scheduler set to sort by seqno. As you might then oversubscribe a host,
> due to having two queues per machine, the slots must be limited either in
> an RQS or by assigning the true slot count as complex_values per
> execution host.
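> 
> An RQS along these lines (sketch, assuming 8 real slots per host) would
> cap the combined slot usage of both queues:
> 
> 	# qconf -arqs
> 	{
> 	   name         slots_per_host
> 	   description  "never more slots than cores on any host"
> 	   enabled      TRUE
> 	   limit        hosts {*} to slots=8
> 	}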
> 
> The jobs must then also request either the queue they should go to, or,
> better, a custom BOOL complex defined as forced for the license-limited
> jobs. Other jobs would then go automatically to the other queue without
> requiring any change, and the special jobs just need "qsub -l extra ...",
> assuming extra is a forced BOOL complex attached only to the special queue.
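> 
> A sketch of that setup (the names extra and special.q are just examples;
> the complex line goes into the table shown by "qconf -mc"):
> 
> 	# qconf -mc  ->  add a forced, non-consumable BOOL complex
> 	#name   shortcut  type  relop  requestable  consumable  default  urgency
> 	extra   extra     BOOL  ==     FORCED       NO          FALSE    0
> 
> 	# attach it to the special queue only
> 	qconf -mattr queue complex_values extra=TRUE special.q
> 
> 	# license-limited jobs then request it explicitly
> 	qsub -l extra=true job.sh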
> 
> -- Reuti
> 
> 
>>
>> Cheers.
>>
>>
>> Reuti wrote:
>>> Gavin,
>>>
>>> On 19.10.2008 at 23:41, Gavin Burris wrote:
>>>
>>>> I have a piece of software with a finite number of floating licenses.
>>>> Only one license is checked out per user per host.  That means if a
>>>> user
>>>> runs 8 jobs from the same node, they only count as one license checked
>>>> out.  The default behavior of Grid Engine, however, is to evenly
>>>> distribute these jobs across all nodes, which checks out and
>>>> consumes 8x
>>>> licenses.
>>>>
>>>> What would be the best way to queue jobs so that they are grouped
>>>> together?  How can I configure a queue to fill a node to capacity
>>>> before
>>>> sending jobs to the next node?
>>>
>>> right now there is no good way to implement this, but AFAIK it's
>>> already foreseen for the next major release:
>>>
>>> http://gridengine.sunsource.net/servlets/ReadMsg?list=users&msgNo=26060
>>>
>>> The only reliable way for now is to make host-locked licenses out of the
>>> floating ones, i.e. attach the license as a feature to certain hosts and
>>> request these hosts/hostgroup in the qsub.
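>>>
>>> A sketch of how that request could look (the hostgroup name @lichosts
>>> is just an example):
>>>
>>> 	# collect the hosts that carry the license feature in a hostgroup
>>> 	qconf -ahgrp @lichosts
>>>
>>> 	# and pin the jobs to queue instances on those hosts
>>> 	qsub -q '*@@lichosts' job.sh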
>>>
>>> -- Reuti
>>>
>>
> 
> 

---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe at gridengine.sunsource.net
For additional commands, e-mail: users-help at gridengine.sunsource.net



