[GE users] consumables (licenses) and parallel environment not working...

Reuti reuti at staff.uni-marburg.de
Wed May 16 17:24:36 BST 2007



On 16.05.2007, at 17:19, Lönroth Erik wrote:

>
>>> You are aware that:
>
>>>>> #$ -l fluent_all=1
>>>>> #$ -l fluent_par=8
>
>>> will be multiplied by 8, as you requested 8 slots?
>
>>> -- Reuti
>
> Oh! I didn't know that... How should I specify that I need:
>
> 1 fluent_all
> 8 fluent_par

You can try with:

0.125 fluent_all
1 fluent_par

-- Reuti
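
With 8 slots requested, each per-slot value is multiplied by 8, so 0.125
fluent_all and 1 fluent_par come out as 1 and 8 in total. A sketch of a
corrected script header under that assumption (the concrete PE name is made
up here, and the fractional request only works if fluent_all is defined
with a numeric type such as DOUBLE):

    #!/bin/sh
    # Per-slot consumable requests are multiplied by the slot count:
    # 0.125 * 8 slots = 1 fluent_all, 1 * 8 slots = 8 fluent_par.
    #$ -pe fluent_ts103_pe 8
    #$ -l fluent_all=0.125
    #$ -l fluent_par=1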


> in a correct way? Have I made a mistake in specifying 4 slots per  
> queue and "UNDEFINED" #processors in my queue definition?
>
> ... Am I making any sense to you with this error?
>
> Thanks for your input anyway, Reuti, priceless!
>
> /Erik
>
> -----Original Message-----
> From: Reuti [mailto:reuti at staff.uni-marburg.de]
> Sent: 16 May 2007 13:51
> To: users at gridengine.sunsource.net
> Subject: Re: [GE users] consumables (licenses) and parallel  
> environment not working...
>
>
> On 16.05.2007, at 13:03, Lönroth Erik wrote:
>
>> Yes, they are.
>>
>> I have added the PEs to the queues.
>>
>> I have 2 queues:
>>
>> short.ts102.q (all hosts from subcluster "ts102")
>>    fluent_ts102_pe
>>
>> short.ts103.q (all hosts from subcluster "ts103")
>>    fluent_ts103_pe
>>
>> This might be some other foobar thing I have done; maybe you can
>> help me point out a good error search path? Log files to look in, etc.
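
For an error search path of that kind, a first step is usually to compare
what the job requests with what the PEs and queues actually provide. A
sketch using the names from this thread:

    qconf -spl                 # list all parallel environments
    qconf -sp fluent_ts103_pe  # show one PE; check the "slots" line
    qconf -sql                 # list all queues
    qconf -sq short.ts103.q    # show one queue; check "pe_list" and "slots"
    qstat -j 113               # scheduling info for the pending job

The qmaster messages file (typically
$SGE_ROOT/default/spool/qmaster/messages) is the main log file to look in.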
>
> You are aware that:
>
>>> #$ -l fluent_all=1
>>> #$ -l fluent_par=8
>
> will be multiplied by 8, as you requested 8 slots?
>
> -- Reuti
>
>
>>
>> /Erik
>>
>> -----Original Message-----
>> From: Reuti [mailto:reuti at staff.uni-marburg.de]
>> Sent: 16 May 2007 12:34
>> To: users at gridengine.sunsource.net
>> Subject: Re: [GE users] consumables (licenses) and parallel
>> environment not working...
>>
>>
>> On 16.05.2007, at 09:05, Lönroth Erik wrote:
>>
>>> My pe looks like this:
>>>
>>> # Version: 6.0u8
>>> #
>>> # DO NOT MODIFY THIS FILE MANUALLY!
>>> #
>>> pe_name           fluent_ts103_pe
>>> slots             9999
>>> user_lists        NONE
>>> xuser_lists       NONE
>>> start_proc_args   /opt/gridengine/mpi/myrinet/startmpi.sh -catch_rsh $pe_hostfile /opt/mpich/myrinet_mx2g/gnu/bin/mpirun.ch_mx
>>> stop_proc_args    /opt/gridengine/mpi/myrinet/stopmpi.sh
>>> allocation_rule   $fill_up
>>> control_slaves    FALSE
>>> job_is_first_task FALSE
>>> urgency_slots     min
>>>
>>> As you see, I'm using myrinet_mx, and apart from that I think this
>>> should work... My queues are set up with 4 slots and "UNDEFINED"
>>> processors.
>>
>> The PEs are also attached to the queues?
>>
>> -- Reuti
>>
>>
>>> /Erik
>>>
>>>
>>> -----Original Message-----
>>> From: Dan.Templeton at Sun.COM [mailto:Dan.Templeton at Sun.COM]
>>> Sent: 15 May 2007 16:51
>>> To: users at gridengine.sunsource.net
>>> Subject: Re: [GE users] consumables (licenses) and parallel
>>> environment not working...
>>>
>>>
>>> Erik,
>>>
>>> The naive assumption would be that your PE has the slots attribute
>>> set to 0.
>>>
>>> Daniel
>>>
>>> Lönroth Erik wrote:
>>>>
>>>> I have problems getting jobs through in my SGE 6.0u8 environment.
>>>>
>>>> I have set up a license load sensor as described at
>>>> http://bioteam.net/dag/sge-flexlm-integration/
>>>> but when I submit jobs and specify:
>>>>
>>>> #$ -l fluent_all=1
>>>> #$ -l fluent_par=8
>>>>
>>>> I get from the scheduler:
>>>>
>>>> qstat -j 113
>>>> ...
>>>> ...
>>>> ...
>>>> script_file:                fluent.job
>>>> parallel environment:  fluent_*_pe range: 8
>>>> scheduling info:            cannot run in PE "fluent_ts102_pe" because it only offers 0 slots
>>>>                             cannot run in PE "fluent_ts103_pe" because it only offers 0 slots
>>>>
>>>> If I don't include the "-l fluent_xxx" resources, it works as normal.
>>>>
>>>> What am I doing wrong? I recall this has worked before, but now it
>>>> seems not to.
>>>>
>>>> /Erik
>>>>
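
For context on the fluent_all/fluent_par requests above: as consumables
they have to be defined in the complex configuration (qconf -sc) with
consumable set to YES. A sketch of what the relevant lines could look like;
the shortcuts and the DOUBLE type for fluent_all (needed for fractional
per-slot requests like 0.125) are assumptions, not taken from Erik's
cluster:

    #name        shortcut    type    relop  requestable  consumable  default  urgency
    fluent_all   fluent_all  DOUBLE  <=     YES          YES         0        0
    fluent_par   fluent_par  INT     <=     YES          YES         0        0

Since consumable requests are counted per slot, fluent_all=1 with 8 slots
asks for 8 licenses in total, presumably more than the load sensor reports
as available.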

---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe at gridengine.sunsource.net
For additional commands, e-mail: users-help at gridengine.sunsource.net



