[GE users] qstat Eqw seems to be related to NFS slowness...

dmikh Dmitry.Mikhailichenko at Sun.COM
Tue Aug 25 11:10:09 BST 2009


On 22.08.09 00:28, mbay2002 wrote:
> We've got a cluster where /home is a ZFS filesystem, and the rest of
> our filesystems are Lustre.
>
> What I've been noticing is that when users submit array jobs through
> qsub (it seems the array needs 200 or more tasks for this to occur),
> some of the tasks error out (qstat shows "Eqw").
>
> When I inspect the error, it shows that /home does not exist on some
> of the nodes.  /home is automounted, so it doesn't appear until a user
> connects to a node.  What I've found is that if I clear the error
> (qmod -cj <job-number>), the jobs usually take off and complete.
>
> So, what I'm thinking is that when roughly 200 (or more) array tasks
> are submitted, NFS/automount can't simultaneously mount /home/<user>
> across all the nodes.  Most of them work, but a few error out.
> Perhaps it would be more appropriate to post this to a ZFS forum,
> but has anyone else seen this behavior, and if so, is there a fix?
>   
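
For concreteness, the inspect-and-clear sequence described above is
roughly the following (12345 standing in for the actual job number):

  qstat | grep Eqw                  # find tasks stuck in the Eqw state
  qstat -j 12345 | grep -i error    # show why the task errored out
  qmod -cj 12345                    # clear the error state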

I have seen similar problems; see my previous post:
http://gridengine.sunsource.net/ds/viewMessage.do?dsForumId=38&dsMessageId=73107
I also believe it is connected to the automounter failing to keep up
when many jobs start simultaneously. As a workaround, we modified our
job scripts to make several attempts to chdir to the shared working
directory; a sketch follows.
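
A minimal sketch of such a retry loop (the retry count and sleep
interval are illustrative, not our exact values; SGE_O_WORKDIR is the
submission directory, set by Grid Engine in the job's environment):

  #!/bin/sh
  # Retry chdir into the automounted working directory; early attempts
  # may fail while the automounter is still mounting it.
  WORKDIR="$SGE_O_WORKDIR"
  tries=0
  until cd "$WORKDIR" 2>/dev/null; do
      tries=`expr $tries + 1`
      if [ $tries -ge 10 ]; then
          echo "giving up: cannot chdir to $WORKDIR" >&2
          exit 1
      fi
      sleep 5    # give the automounter time to catch up
  done
  # ... the rest of the job script runs here, with $WORKDIR mounted ...

Clearing the error state with qmod -cj still works for tasks that have
already failed; the retry loop just avoids the failure in the first
place.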

Thanks,
Dmitry


------------------------------------------------------
http://gridengine.sunsource.net/ds/viewMessage.do?dsForumId=38&dsMessageId=214155
