[GE users] Job remain in qw state

reuti reuti at staff.uni-marburg.de
Mon Nov 29 13:30:58 GMT 2010

Am 29.11.2010 um 13:30 schrieb adarsh:

> Dear all,
> Thanks for your replies. I have now successfully integrated Hadoop with SGE, such that my `./qhost -F | grep hdfs` command shows
> all data paths.
> Now when I run a simple wordcount job, it remains in the qw state.
> The logs on the execution host say:
> 11/29/2010 16:47:34|  main|ws37-user-lin|E|shepherd of job 1.1 exited with exit status = 27
> 11/29/2010 16:47:34|  main|ws37-user-lin|E|can't open usage file "active_jobs/1.1/usage" for job 1.1: No such file or directory
> 11/29/2010 16:47:34|  main|ws37-user-lin|E|11/29/2010 16:47:34 [0:9462]: unable to find shell "/bin/csh"

If you didn't install csh, you will most likely want:

$ qconf -sq all.q
shell                 /bin/sh
shell_start_mode      unix_behavior

Note that the first entry ("shell") is only honored when the second entry ("shell_start_mode") is set to "posix_compliant"; with "unix_behavior" the interpreter is taken from the `#!` line of the job script instead.
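If you prefer to change the queue configuration non-interactively instead of through the `qconf -mq` editor, something like the following should work (a sketch; the queue name all.q is taken from the example above, substitute your own):

```shell
# Set the default shell and the start mode on the all.q queue.
qconf -mattr queue shell /bin/sh all.q
qconf -mattr queue shell_start_mode unix_behavior all.q

# Verify the change took effect:
qconf -sq all.q | grep -E '^shell'
```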


If you don't want to change the queue's setting, you can also submit your jobs with:

$ qsub -S /bin/sh ...
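Alternatively, the shell can be requested inside the job script itself via an embedded directive, so every submission of that script picks it up automatically (a sketch; the script name and contents are illustrative):

```shell
#!/bin/sh
# wordcount.sh -- illustrative job script
#$ -S /bin/sh     # request /bin/sh instead of the queue's default shell
#$ -cwd           # run in the submission directory
echo "running on $(hostname)"
```

Submitted with a plain `qsub wordcount.sh`, the `#$` lines are read by qsub as if they were given on the command line.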



The job might now be in error state (check `qstat`), and you have to clear it with `qmod -cj 1` (using the job ID from your log).

-- Reuti

> How do I get rid of this?
> Is it sufficient to scp the accounting file to all nodes, or must /default/common be mounted via NFS?
> I simply copied it to all execution hosts.
> Thanks in Advance
> Adarsh Sharma
> ------------------------------------------------------
> http://gridengine.sunsource.net/ds/viewMessage.do?dsForumId=38&dsMessageId=300212
> To unsubscribe from this discussion, e-mail: [users-unsubscribe at gridengine.sunsource.net].


