[GE users] question about shadow master configuration

Ron Chen ron_chen_123 at yahoo.com
Tue Nov 23 05:20:40 GMT 2004


A shared filesystem serves two main purposes in a
shadow master failover setup:

- heartbeat file: the master updates it periodically
  so that the shadows know it is still alive.

- spool files: when one of the shadows starts a new
  master, the new master needs the cluster information
  (queue definitions, user configurations) and the job
  information from the files in the spool directory.
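To make the heartbeat idea concrete, here is a rough sketch of the check a shadow could perform. The paths and the 600-second timeout are my assumptions, not Grid Engine defaults; adjust them to your $SGE_ROOT/$SGE_CELL layout.

```shell
#!/bin/sh
# Hypothetical heartbeat location -- adjust for your cell.
HEARTBEAT=/usr/local/sge/default/spool/qmaster/heartbeat
TIMEOUT=600   # assumed: seconds without an update before we suspect the master

# The master rewrites the heartbeat file periodically; a shadow
# compares the file's mtime against the current time.
now=$(date +%s)
mtime=$(stat -c %Y "$HEARTBEAT" 2>/dev/null || echo 0)
age=$((now - mtime))

if [ "$age" -gt "$TIMEOUT" ]; then
    echo "master looks dead (heartbeat ${age}s old) -- shadow should take over"
else
    echo "master alive (heartbeat ${age}s old)"
fi
```

This is why the heartbeat file has to live on storage the shadows can read: the whole check is just looking at a timestamp.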

I think it may work if you use rcp/scp or some other
way to copy the local spool directory to a remote
location. Basically your master host needs to run a
script which copies the spool directory to a remote
location periodically.
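Such a script could be as simple as a loop. A minimal sketch, assuming rsync over ssh instead of plain rcp/scp (rsync only transfers what changed, which matters for a busy spool); the paths, host name, and interval are made up:

```shell
#!/bin/sh
# Assumed paths -- substitute your own.
SPOOL=/var/spool/sge/qmaster           # local spool on the master host
DEST=backup-host:/var/spool/sge-copy   # location the shadows can reach
INTERVAL=60                            # seconds between copies

while true; do
    # scp -r would also work but recopies everything each time.
    rsync -a -e ssh "$SPOOL/" "$DEST/"
    sleep "$INTERVAL"
done
```

Keep in mind the copy is only as fresh as the last interval, so a shadow that takes over may see slightly stale job state.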

BTW, if you do have NFS but your spool directory is
local, you can just use a local "cp" (instead of
rcp/scp) to copy the files to the shared location, and
the shadow masters will pull the files from there.
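For example, a crontab entry on the master host could do the copy every minute. This is a hypothetical fragment: the spool path and the NFS mount point are assumptions for illustration.

```shell
# Hypothetical crontab entry on the master host:
# every minute, mirror the local spool onto the NFS share
# where the shadow masters expect to find it.
* * * * *  cp -a /var/spool/sge/qmaster/. /nfs/sge-spool/qmaster/
```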

Let us know what exactly you want to do so that we can
think of more hints (or hacks).

 -Ron

> in our configuration it is not, local to the head node
> and local on each compute node.




