[GE users] writing to a single log file

Dan Gruhn Dan.Gruhn at Group-W-Inc.com
Mon May 9 21:01:08 BST 2005


Chris,

If you are keeping your own log file (not the one created by SGE), you
can "lock" a file using standard commands in a script.  The basic tool
of this is linking a filename to another filename, which NFS guarantees
to be atomic.  Here is a simple script:

outputFile=/<NFS pathname>/common.log
lockFile=$outputFile.lock

# Do some work
...

# Wait until we get the lock on the log file
until ln "$outputFile" "$lockFile" >/dev/null 2>&1
do
    sleep 1
done

# Append to the log file (>>, not >, so earlier entries are not clobbered)
echo "<status information or whatever>" >>$outputFile

# Remove our lock
rm -f "$lockFile"
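
One possible refinement (a sketch using the standard shell trap
builtin, not part of the script above): once the lock is held, arrange
for it to be removed even if the job is killed or exits early, so a
stale lock file cannot block other jobs.

# Set immediately after the lock is acquired; removes the lock on any
# exit, normal or not
trap 'rm -f "$lockFile"' EXIT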

If you are trying to add output to the SGE output file, use the
separate-file-per-job-ID method that Reuti, Dale and Rayson have
suggested; a rough sketch follows below.
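
A minimal sketch of that approach (the pathname below is just a
placeholder): each job writes its own file, keyed on the JOB_ID
variable that SGE sets in the job's environment, so no locking is
needed and the per-job files can be merged afterwards.

# Each job writes to its own file, named with the SGE job ID, so
# there is no contention on a single shared file
myLog=/<NFS pathname>/output.$JOB_ID

# Do some work
...

echo "<status information or whatever>" >>$myLog

# Later, merge the per-job files, e.g.:
#   cat /<NFS pathname>/output.* > common.log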

Dan

On Mon, 2005-05-09 at 15:02, Chris Duke wrote:

> We would like to write the results from grid jobs into a custom file (we
> generate more output than just complete status). We have found that when
> we simply try and write to a log file (on a shared NFS mount), the jobs
> tend to write all over each other, and the output is unusable. We plan
> on scaling this to several hundred slots, so the potential conflicts are
> huge. Does anyone have any suggestions on how to accomplish this?
> 
> --chris
> 
> PS. Thanks to everybody who participates in this group. It's fantastic
> to see such an active community.
> 


