[GE users] Moved to users list: Re: [GE dev] Reservation

olesen Mark.Olesen at faurecia.com
Thu May 20 10:14:50 BST 2010

On Thu, 2010-05-20 at 10:07 +0200, aeszter wrote:
> [Moved since I guess the users list is more appropriate now]
> Hello everyone,
> I would like some configuration advice. We have multiple queues per
> node, so we need some mechanism to prevent them from overloading a
> given node (e.g., an 8-core node should accept exactly eight
> processes, not eight from the "short" queue and another eight from
> "long"). In the past, we've used a "slots" complex on the nodes and an
> appropriate load_formula in sconf. However, I've just learned that
> putting queues into an alarm state is The Wrong Thing, and it will
> prevent reservations from working (see below).
> Is there a better way to ensure a processes <= CPU cores relationship?

You could try using two mutually subordinate queues.

From our own configuration:

$ qconf -sq cfd
qname                 cfd
slots                 2
subordinate_list      cfd2=1

$ qconf -sq cfd2
qname                 cfd2
slots                 4
subordinate_list      cfd=1

The 'cfd' queue can use both CPUs, whereas the 'cfd2' queue can use both
cores of both CPUs.  As soon as one slot is occupied in either queue
(the '=1' threshold), the other queue is suspended and accepts no
further jobs, so the two queues can never overload the node together.
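Applied to your "short"/"long" queues, the same idea might look like the
sketch below.  This is only an illustration: the queue names come from
your question, and the slot counts assume your 8-core example; adjust to
taste.  (qconf -mattr modifies a single attribute of an existing queue.)

$ qconf -mattr queue subordinate_list long=1 short
$ qconf -mattr queue subordinate_list short=1 long

While a job is running in "short", "long" should then show up with
state 'S' in the output of 'qstat -f', and vice versa.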

I have sometimes seen a race condition, but I can't reproduce it.
Otherwise this approach might work for you.

