[GE users] limiting the physical memory and virtual memory usage on a host

Reuti reuti at staff.uni-marburg.de
Wed Mar 8 20:21:37 GMT 2006



Quoting Jinal Jhaveri <jajhaveri at lbl.gov>:

>
>
>> If you can live with 2 GB for all jobs, then you could just set 
>> h_vmem in the
>> queue definition to 2G. This will be then a hard limit for these jobs of
>> course. Then you don't need a consumable or default request.
>>
>> -- Reuti
>>
>>
> Thanks Reuti,
>
> The problem in this case is that, if a node belongs to, let's say,
> 3 queues and each of them has 2 slots, then there can be 6 jobs
> with a limit of 2GB each, which totals 12GB. But my system memory
> is only 4GB. Do you think there is any way out of this? Setting
> slots=2 per node (not per queue) would be overkill, because then
> it would allow only 2 jobs per node, even if the jobs aren't
> memory or CPU intensive. What do you guys do? Have you limited
> slots on an exec host? How is the performance in that case?

I can only point you again to my original posting. If you don't want
users to have to request their estimated amount of memory, you can
only limit it to two jobs per node. Alternatively, configuring
different queues with different h_vmem values hardcoded in their
definitions (and asking the users to submit to a specific queue) is
essentially the PBS style of doing things. In SGE, users request
resources, and SGE will choose the best suited queue for the job.
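
For illustration, a minimal sketch of both approaches (the queue name
big.q, the script job.sh, and the 2G value are placeholders, not taken
from your setup):

  # PBS style: hardcode a 2G hard limit in the queue definition
  qconf -mattr queue h_vmem 2G big.q

  # SGE style: the user requests the limit at submission time
  qsub -l h_vmem=2G job.sh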

My users have to request virtual_free, which is set in the exechost
definition to 100M less than the physically installed memory and made
consumable (BTW: in your qconf -mattr command you must give the node
name, not a queue name). As they are fair, everything works well with
this setup.
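
A sketch of that setup, assuming a node named node01 with 4G of
physical RAM (both are placeholders for illustration):

  # Mark virtual_free as consumable in the complex configuration
  # (set its "consumable" column to YES and give it a default value):
  qconf -mc

  # Publish 100M less than the physical memory as the node's capacity.
  # Note the exec host name, not a queue name, as the last argument:
  qconf -mattr exechost complex_values virtual_free=3.9G node01

  # Jobs then request their estimated memory at submission time:
  qsub -l virtual_free=1G job.sh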

-- Reuti

>
> thanks
> --Jinal
>
>
>
>>> users to have to change their scripts and specifically ask for that
>>> much free vmem). The reason I am asking all these questions is that
>>> sometimes we have issues where multiple memory-intensive jobs are
>>> scheduled to the same node, and as a result they either thrash or
>>> some of them segfault, putting the node in an error state. I would
>>> like to avoid that situation.
>>>
>>> Thanks
>>> --Jinal
>>>
>>>
>>>
>>>
>>> Reuti wrote:
>>>
>>>> Jinal,
>>>>
>>>> similar discussions were on this list before:
>>>>
>>>> http://gridengine.sunsource.net/servlets/ReadMsg?listName=users&msgNo=10553
>>>>
>>>> In addition, you could make h_vmem consumable and give it an
>>>> initial value. Jobs of course have to request both h_vmem and
>>>> virtual_free in this case. I think it should be sufficient to use
>>>> just one of them: h_vmem for enforced limits, virtual_free for
>>>> fair users.
>>>>
>>>> HTH - Reuti
>>>>
>>>>
>>>> Quoting Jinal Jhaveri <jajhaveri at lbl.gov>:
>>>>
>>>>> Hi All,
>>>>>
>>>>> I would like to limit the total amount of physical memory as well 
>>>>> as virtual memory, all the jobs collectively use on an exechost. 
>>>>> Any suggestions on how to do that? I know that I can't change the 
>>>>> load_values, so I am pretty sure there should be some way of 
>>>>> doing it either via complex or something else.
>>>>>
>>>>> thank you very much for your help.
>>>>>
>>>>> --Jinal



---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe at gridengine.sunsource.net
For additional commands, e-mail: users-help at gridengine.sunsource.net



