[GE users] checking mount points or any other user defined attributes

reuti reuti at staff.uni-marburg.de
Thu Nov 25 09:42:53 GMT 2010


Am 24.11.2010 um 13:06 schrieb llikethat:

> <snip> 
> > If I'm setting the mount points in the complex, and configuring them in the node configuration, how does SGE understand it? Will SGE check for the presence of these mount points before submitting the job to the node?
> 
> No, it's just a fixed string - SGE doesn't know what it is, and it doesn't need to. Normally I would assume that you don't change mount points twice an hour, so they are firmly bound to particular machines. There is nothing for SGE to check.
> 
> You could nevertheless set up a load sensor which reports the string of found mount points in a generic way for all machines. In a format as described (to avoid a substring matching a found mount point), you can then fill in the values automatically.
> 
> -- Reuti
> 
> 
> Hi,
> 
> Oh OK, now I understand it better. Instead of using a load sensor, what if I use a prolog that runs the mount commands to mount the NFS shares before the job starts? Will this work?

You would have to tell the job which particular mount points are necessary for it. If I understand you correctly, you don't want to have all mount points mounted all the time.

A place for such information (which is unrelated to SGE in any way) is the job context. This is so-called meta-information and is not used by SGE itself, but you can set and access it on your own:

$ qsub -ac MOUNTS=/nfs/app1,/nfs/app2 myjob.sh

Then you can access this information with `qstat -j $JOB_ID` in the line starting with "context:". It may be necessary to run the prolog and epilog as root, which can be achieved by prefixing the path to the script with root@, e.g. root@/usr/sge/cluster/myprolog.sh
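A minimal prolog sketch along these lines (the parse_mounts helper and the assumption that each share has an /etc/fstab entry, so a bare `mount <dir>` works, are mine, not SGE requirements):

```shell
#!/bin/sh
# Hypothetical prolog: mount the NFS shares listed in the job context.
# Assumes SGE sets JOB_ID in the prolog environment and qstat is in $PATH.

# Extract the value of MOUNTS= from the "context:" line of qstat -j output.
parse_mounts() {
    sed -n 's/^context:.*MOUNTS=\([^ ]*\).*/\1/p'
}

# Only act when actually invoked by SGE with a job id.
if [ -n "${JOB_ID:-}" ]; then
    mounts=$(qstat -j "$JOB_ID" | parse_mounts)
    for mp in $(printf '%s\n' "$mounts" | tr ',' ' '); do
        # Mount only if not already mounted (mountpoint is from util-linux).
        mountpoint -q "$mp" || mount "$mp"
    done
fi
```

The matching epilog would run the same loop with `umount` instead of `mount` - but see the pitfall below about shared nodes.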

Pitfall: when more than one job runs on a node at a time, you may have to check whether any other job on this particular node is still using a mount point that you would like to unmount in the epilog. To avoid a race condition on top of that, the clean solution would be to disable the queue instance in the epilog, check for other jobs using the mount point, unmount it if unused, and enable the queue instance again.
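The disable/check/unmount/enable dance could be sketched like this (the in_use helper, the context parsing, and the way the queue instance name and host job list are obtained are all assumptions to be adapted to the real cluster):

```shell
#!/bin/sh
# Hypothetical epilog: unmount context-listed shares, but only if no other
# running job on this host still lists them in its own MOUNTS context.

# Succeed if mount point $1 appears as a full entry in comma-separated list $2.
in_use() {
    printf '%s\n' "$2" | tr ',' '\n' | grep -qx "$1"
}

if [ -n "${JOB_ID:-}" ]; then
    host=$(hostname)
    qi="$QUEUE@$host"              # assumes SGE exports QUEUE to the epilog
    qmod -d "$qi"                  # close the queue instance: no new job races us
    for mp in $(qstat -j "$JOB_ID" | sed -n 's/^context:.*MOUNTS=\([^ ]*\).*/\1/p' | tr ',' ' '); do
        busy=no
        # Walk the other jobs still running on this host and compare contexts.
        for other in $(qstat -s r -l hostname="$host" | awk 'NR>2 {print $1}'); do
            [ "$other" = "$JOB_ID" ] && continue
            ctx=$(qstat -j "$other" | sed -n 's/^context:.*MOUNTS=\([^ ]*\).*/\1/p')
            in_use "$mp" "$ctx" && busy=yes
        done
        [ "$busy" = no ] && umount "$mp"
    done
    qmod -e "$qi"                  # reopen the queue instance
fi
```

Note that in_use matches whole entries, so /nfs/app does not accidentally match /nfs/app1 - the same substring concern raised for the load-sensor format above.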

-- Reuti

------------------------------------------------------
http://gridengine.sunsource.net/ds/viewMessage.do?dsForumId=38&dsMessageId=298660

