[GE users] Interactive / PE
miomax_ at hotmail.com
Thu Jul 22 10:40:09 BST 2010
I have a few questions regarding the functionalities of SGE. I have assumed a lot while reading the documentation, and I suspect some of those assumptions need correcting now.
I only have a month left to finish my configuration. It doesn't need to be optimized, but it has to be easy to use and to modify in the future, and should require minimal maintenance.
1/ I read somewhere (I can't find it anywhere now...) that interactive sessions are much better left alone on a node. Several interactive sessions on the same node are fine, but mixing interactive with sequential or parallel jobs would diminish performance. Is this true, and should I take it into account on a recent cluster?
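(For context on what I have in mind: a common way to keep interactive sessions apart from batch work is a dedicated interactive queue on its own hosts. A minimal sketch; the queue name and host group are hypothetical, the commands and attributes are standard SGE:)

```shell
# Hypothetical: a queue reserved for interactive work, so qlogin/qrsh
# sessions never share nodes with batch or parallel jobs.
qconf -aq interactive.q        # opens an editor; relevant attributes:
#   qname        interactive.q
#   hostlist     @interactive_hosts   # host group reserved for this queue
#   qtype        INTERACTIVE
#   slots        4

# Users then start their sessions explicitly on that queue:
qlogin -q interactive.q
qrsh   -q interactive.q my_program
```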
2/ If a user chooses to work 'out of core' (i.e. on disk), can another heavy job (a bit lighter, so it fits in core) be launched on the same nodes? How does the user specify from qmon whether he wants to work in core or out of core?
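(What I imagine is something like making users declare their memory footprint so the scheduler can decide what fits alongside. A sketch, assuming h_vmem has been made a consumable complex, which is standard SGE practice; the job script names and sizes are hypothetical:)

```shell
# Assumption: h_vmem is defined as a consumable complex (qconf -mc)
# and each exec host advertises its RAM, e.g.:
qconf -me node01               # set: complex_values h_vmem=64G

# An in-core job declares its footprint so the scheduler can pack nodes:
qsub -l h_vmem=24G incore_job.sh

# An out-of-core job needs little RAM, so a second job can still fit
# on the same node as long as the requests sum below the host's value:
qsub -l h_vmem=8G outofcore_job.sh
```

From qmon, I assume the same -l requests can be entered in the resource-request part of the Submit Job dialog, but I haven't verified that.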
3/ I am configuring parallel environments :
- Is it the user's responsibility to use MPI / OpenMP, regardless of the selected parallel environment? My parallel queues are set up as 1*16 nodes, 2*8 and 4*4. Do I only need 3 PEs (one for each 'type' of queue), in which only the slot count changes and where slots = available cores? The user then submits his job with OpenMP or Open MPI (and maybe sometimes OpenMP inside Open MPI).
- Is it bad to have an odd (uneven) number of slots for a parallel application, since they work in pairs? I want to leave one core free on each server to run the execd in case it takes too much %CPU.
- As for the allocation rule, I think $round_robin is the fair solution; even if slots can be 'lost', it is still better than $fill_up (for MPI of course; OpenMP would use $fill_up).
- Also, as my parallel queues are superordinated to all the other queues they share nodes with, max_reservation is not used. I think control_slaves has to be set to TRUE for tight integration.
Is this set up correct ?
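(To make it concrete, here is what I believe one of the three PEs would look like in qconf -sp output. The PE name is hypothetical; the fields are the standard SGE PE attributes. For the pure-OpenMP case I understand $pe_slots, which keeps all granted slots on a single host, is the usual allocation rule rather than $fill_up, but please correct me:)

```shell
$ qconf -sp mpi_rr
pe_name            mpi_rr
slots              16
user_lists         NONE
xuser_lists        NONE
start_proc_args    /bin/true
stop_proc_args     /bin/true
allocation_rule    $round_robin
control_slaves     TRUE
job_is_first_task  FALSE
urgency_slots      min
```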
I found a lot of documentation for Open MPI, but much less for OpenMP. Is there a reason for this?
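(My guess is that OpenMP needs nothing from Grid Engine beyond a slot count, since the threads stay on one host. For the hybrid case, this is how I imagine a submission would look; the PE name and program are hypothetical, and I am assuming an Open MPI built with SGE support so mpirun picks up the granted host list itself:)

```shell
#!/bin/sh
# Hypothetical hybrid job: 2 MPI ranks x 8 OpenMP threads = 16 slots.
#$ -pe mpi_rr 16
#$ -cwd
# Tell OpenMP how many threads each rank may spawn:
OMP_NUM_THREADS=8
export OMP_NUM_THREADS
# With tight integration (control_slaves TRUE) and SGE-aware Open MPI,
# mpirun reads the granted hosts from the PE environment automatically:
mpirun -np 2 ./hybrid_program
```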
Thanks for reading,