[GE users] Job remains in qw state

adarsh adarsh.sharma at orkash.com
Mon Nov 29 12:30:18 GMT 2010


Dear all,

Thanks for your replies. I have now successfully integrated Hadoop with SGE, so that my ./qhost -F | grep hdfs command shows
all the data paths.

Now, when I run a simple wordcount job, the job remains in the qw state.
The logs on the execution host say:

11/29/2010 16:47:34|  main|ws37-user-lin|E|shepherd of job 1.1 exited with exit status = 27
11/29/2010 16:47:34|  main|ws37-user-lin|E|can't open usage file "active_jobs/1.1/usage" for job 1.1: No such file or directory
11/29/2010 16:47:34|  main|ws37-user-lin|E|11/29/2010 16:47:34 [0:9462]: unable to find shell "/bin/csh"
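
From what I understand, the last error means the execution host has no /bin/csh, which is the default queue shell, so the shepherd dies before the job starts and the job falls back to qw. Would something along these lines be the right fix? (The package name and "wordcount.sh" below are only placeholders for my setup.)

   # Assumption: the execution host simply lacks csh; installing it is one option
   # (the package name depends on the distribution, e.g. tcsh on Red Hat-style systems):
   #   yum install tcsh
   # Or submit with an explicit bash shell instead of the queue default
   # ("wordcount.sh" stands in for the actual submission script):
   qsub -S /bin/bash wordcount.sh
   # Or change the queue's default shell via "qconf -mq <queue_name>" and set:
   #   shell              /bin/bash
   #   shell_start_mode   unix_behavior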

How can I get rid of this?
Is it sufficient to scp the accounting file to all nodes, or must /default/common be mounted over NFS?

I simply copied it to all execution hosts.
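
For reference, what I did is roughly the following (assuming the standard $SGE_ROOT/default/common cell directory; ws37-user-lin is one of the execution hosts from the log above):

   # Copy the cell's common directory (act_qmaster, settings.sh, accounting, ...)
   # to an execution host; repeated for each host in the cluster:
   scp -r $SGE_ROOT/default/common ws37-user-lin:$SGE_ROOT/default/
   # My understanding is that an NFS mount of this directory would stay in sync
   # automatically, while a one-time copy can go stale after configuration changes.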

Thanks in Advance
Adarsh Sharma

------------------------------------------------------
http://gridengine.sunsource.net/ds/viewMessage.do?dsForumId=38&dsMessageId=300212
