[GE users] MPICH2 tight integration

reuti reuti at staff.uni-marburg.de
Thu Aug 13 18:28:49 BST 2009


On 13.08.2009 at 19:04, skylar2 wrote:

> No joy on the daemonless smpd front. SGE successfully starts up all the
> processes, with one line like this per slot:
>
> /net/gs/vol3/software/sge/bin/lx24-amd64/qrsh -inherit sage001 env
> PMI_RANK=0 PMI_SIZE=48 PMI_KVS=3B9DD9611EAE3B5240D7D4AE2DE3BDDE
> PMI_ROOT_HOST=sage001.grid.gs.washington.edu PMI_ROOT_PORT=60437
> PMI_ROOT_LOCAL=0 PMI_APPNUM=0
> /net/gs/vol3/software/modules-sw-test/hpl-mpich2/2.0/Linux/RHEL5/x86_64/bin/Linux_PII_CBLAS_gm/xhpl

Is this a Pentium-II library?

The best thing would be to start with an mpihello program, to get the
setup working with a simple case first. Maybe there is a network
address problem or something similar. As I said: does it work with
2/4/8... slots on 1/2/4... nodes with mpihello?
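For reference, a minimal mpihello along these lines (this is a generic sketch, not a specific program shipped with MPICH2) would print each rank, the world size, and the host it landed on. If the tight integration is broken the way the HPL error suggests, every process would report size 1 instead of the slot count:

```c
/* mpihello.c -- minimal MPI sanity check.
 * Build:  mpicc -o mpihello mpihello.c
 * Run under the PE; each rank prints its rank, the world size,
 * and its host. Ranks reporting "of 1" never joined a common
 * MPI_COMM_WORLD, i.e. the PMI bootstrap did not work. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, len;
    char host[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Get_processor_name(host, &len);

    printf("Hello from rank %d of %d on %s\n", rank, size, host);

    MPI_Finalize();
    return 0;
}
```

(Compiling and running this requires an MPI installation, so no standalone test is given here.)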

-- Reuti


>
> But then it doesn't appear that any of the processes are in the same MPI
> world:
>
> HPL ERROR from process # 0, on line 419 of function HPL_pdinfo:
> >>> Need at least 24 processes for these tests <<<
>
> This was with "-pe mpich2_smpd_rsh 24" and qstat showed 24 slots
> assigned to the job.
>
> -- 
> -- Skylar Thompson (skylar2 at u.washington.edu)
> -- Genome Sciences Department, System Administrator
> -- Foege Building S048, (206)-685-7354
> -- University of Washington School of Medicine

------------------------------------------------------
http://gridengine.sunsource.net/ds/viewMessage.do?dsForumId=38&dsMessageId=212163

To unsubscribe from this discussion, e-mail: [users-unsubscribe at gridengine.sunsource.net].
