[GE users] ipcs leftovers

Reuti reuti at staff.uni-marburg.de
Wed May 4 21:47:37 BST 2005



Just a short note, as I saw that the number of shared memory segments in your
output was even:

If you compiled MPICH --with-comm=shared and use a $TMPDIR/machines file with:

node01
node01
node02
node02

it will allocate shared memory per process, but will not use it. A
$TMPDIR/machines file of:

node01:2
node02:2

will allocate only one shared memory segment per node (and therefore, I think,
actually use it). Maybe it's an option for you to recompile MPICH and your
application without shared memory; just test the speed impact.
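
A minimal sketch of both pieces, untested, assuming an MPICH 1.2.x source
tree with the ch_p4 device (the configure flag is the one from your build;
the install prefix is hypothetical):

    # rebuild MPICH without shared-memory communication by simply
    # leaving out the --with-comm=shared flag:
    ./configure --with-device=ch_p4 --prefix=/opt/mpich-noshmem
    make && make install

    # one-entry-per-node machines file, so that a shared-memory build
    # creates a single segment per node instead of one per process:
    printf 'node01:2\nnode02:2\n' > $TMPDIR/machines

Relink your application against the new installation and compare the
timings of both variants.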

Cheers - Reuti


Quoting lukacm at pdx.edu:

> Reuti,
> 
> 
> yes, it was an MPICH job. I finally cleaned it up. I realized that only the
> owner of the job running cleanipcs will erase their own flags and
> semaphores. Running cleanipcs as root did not remove the flags for all jobs
> that had been executed and finished. So I guess it is solved for now.
> 
> thanks
> 
> martin
> 
> Quoting Reuti <reuti at staff.uni-marburg.de>:
> 
> > Martin,
> >
> > the "cleanpics" is from the MPICH installation and it will simply clean
> all
> > the
> > ipcs stuff from the user who started "cleanpics" - nothing more. When a
> user
> > has an additonal job on a node - he/she may kill the other job also.
> >
> > There was the idea to catch such shmget() calls and delete the segments
> > after the job, but I didn't find the time to put it in a proven form and
> > post it to the list.
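
For what it's worth, a rough sketch of that before/after idea, untested,
assuming Linux ipcs/ipcrm and bash, with $JOB_ID as set by SGE for
prolog/epilog scripts:

    #!/bin/bash
    # list the calling user's shared memory segments and semaphores
    snapshot() {
        ipcs -m | awk -v u="$USER" '$3 == u {print "shm", $2}'
        ipcs -s | awk -v u="$USER" '$3 == u {print "sem", $2}'
    }

    snapshot | sort > /tmp/ipc.before.$JOB_ID   # in the prolog
    # ... the parallel job runs here ...
    snapshot | sort > /tmp/ipc.after.$JOB_ID    # in the epilog

    # remove only the IDs that appeared during the job
    comm -13 /tmp/ipc.before.$JOB_ID /tmp/ipc.after.$JOB_ID |
    while read kind id; do
        ipcrm "$kind" "$id"
    done
    rm -f /tmp/ipc.before.$JOB_ID /tmp/ipc.after.$JOB_ID

Since an SGE epilog runs as the job owner, only that user's leftovers
would be touched.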
> >
> > Was this an MPICH job? - Reuti
> >
> >
> > Quoting lukacm at pdx.edu:
> >
> > > Hello,
> > >
> > > I am not sure this is directly related to SGE, but since the cleanipcs
> > > utility is known here, this might be the right place to ask.
> > >
> > > After finishing some jobs I ran cluster-fork ipcs and saw that most of
> > > the compute nodes have something like this:
> > >
> > > ------ Shared Memory Segments --------
> > > key        shmid      owner      perms      bytes      nattch     status
> > > 0x00000000 458752     submitter  600        4194304    0
> > > 0x00000000 524290     submitter  600        4194304    0
> > > 0x00000000 557059     submitter  600        4194304    0
> > > 0x00000000 589828     submitter  600        4194304    0
> > >
> > > ------ Semaphore Arrays --------
> > > key        semid      owner      perms      nsems
> > > 0x00000000 3604480    apache     600        1
> > > 0x00000000 3637249    apache     600        1
> > > 0x00000000 9863170    submitter  600        10
> > > 0x00000000 9895939    submitter  600        10
> > > 0x00000000 9928708    submitter  600        10
> > > 0x00000000 9961477    submitter  600        10
> > >
> > >
> > > Now there are many more of those submitter-allocated semaphores. I tried
> > > to run cleanipcs, but nothing happened. The only thing that worked was
> > > applying ipcrm directly, but that takes ages. Is there another way to
> > > easily remove those semaphores and shared memory segments?
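
As a stopgap, a loop over the ipcs output can remove everything the calling
user owns in one pass; a minimal sketch, assuming the Linux ipcs/ipcrm
output format shown above (owner in column 3, ID in column 2):

    # remove all shared memory segments owned by the current user
    for id in $(ipcs -m | awk -v u="$USER" '$3 == u {print $2}'); do
        ipcrm shm "$id"
    done
    # remove all semaphore arrays owned by the current user
    for id in $(ipcs -s | awk -v u="$USER" '$3 == u {print $2}'); do
        ipcrm sem "$id"
    done

Run it as the owning user on each node (e.g. via cluster-fork), which also
matches your observation that running cleanipcs as root did not help.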
> > >
> > > thank you
> > >
> > > martin



---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe at gridengine.sunsource.net
For additional commands, e-mail: users-help at gridengine.sunsource.net



