[GE users] sge db experiences

magawake magawake at gmail.com
Thu Feb 12 04:48:42 GMT 2009


Glad you said "experiences". I am not an SGE expert like Reuti or Rayson, but I had some BAD experiences with BDB. Go with classic spooling! BDB gets corrupted far too often. A lesson learned.



> The biggest advantage of classic spooling is the ease of backup &
> recovery, but I/O is usually slow over NFS, so Berkeley DB spooling,
> which performs fewer I/O operations, would offer a performance
> advantage.
> 
> However, with two clusters at 5K jobs each, you are talking about
> 10K jobs per day. And spooling I/O is performed at job submit, job
> start, and job end.
> 
> So some ballpark estimates:
> 
> 10000 jobs per day / 24 hours per day / 60 mins per hour / 60 secs per min
> 
> = 0.12 jobs per second
> 
> Job submission needs a few extra I/Os, especially for directory
> creation, job script spooling, etc., but even so, job-related
> spooling should need no more than a few I/Os per second. A modern
> NFS server can easily handle that workload. And if NFSv4 is used
> (NFSv4 has a number of performance optimizations), and if the
> qmaster is connected via a high-speed network and/or to the same
> switch as the NFS server, then I believe classic spooling can
> handle even more than 10K jobs per day.
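> 
> As a rough sketch of that arithmetic in Python (the ~10 I/Os per
> job is a hypothetical figure for illustration, not a measured SGE
> number):
> 
>     jobs_per_day = 10000
>     jobs_per_sec = jobs_per_day / (24 * 60 * 60)  # ~0.12 jobs/s
> 
>     # Hypothetical: ~10 spooling I/O operations per job across
>     # submit, start, and end combined.
>     ios_per_job = 10
>     ios_per_sec = jobs_per_sec * ios_per_job      # ~1.2 IOPS
> 
>     print(f"{jobs_per_sec:.2f} jobs/s, ~{ios_per_sec:.1f} spooling IOPS")
> 
> Even a modest NFS server sustains hundreds of IOPS, so the average
> load is tiny; the bursts during peak hours are what matter.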
> 
> Of course, job submissions usually occur during work hours, so you
> may need to monitor your cluster usage and make some more detailed
> estimates. And you can always set up a test cluster with a single
> node (the number of nodes per cluster has much less impact on
> spooling performance, since most node information is not spooled
> during normal cluster operation) and see how your NFS server
> performs, for example with the sketch below.
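> 
> A minimal way to exercise spooling on such a test cluster is to
> time a burst of trivial submissions. This is a sketch only; it
> assumes qsub is on the PATH, a default queue accepts binary jobs,
> and /bin/true exists:
> 
>     import subprocess, time
> 
>     n = 100
>     start = time.time()
>     for _ in range(n):
>         # -b y: submit /bin/true as a binary, so no job script
>         # file is transferred and spooled
>         subprocess.run(["qsub", "-b", "y", "/bin/true"],
>                        check=True, capture_output=True)
>     elapsed = time.time() - start
>     print(f"{n} submissions in {elapsed:.1f}s ({n / elapsed:.1f} jobs/s)")
> 
> Comparing this rate on classic vs. BDB spooling over your actual
> NFS mount gives a more honest number than any estimate.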
> 
> But BDB and classic are in the same boat at restore time if you
> don't have backups of the spooling data!
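> 
> For classic spooling, a backup can be as simple as archiving the
> qmaster spool directory while the qmaster is stopped. A sketch
> only; the /backup path is hypothetical and the spool path assumes
> a default installation under $SGE_ROOT/default:
> 
>     import os, subprocess, time
> 
>     sge_root = os.environ.get("SGE_ROOT", "/opt/sge")
>     spool = os.path.join(sge_root, "default", "spool", "qmaster")
>     archive = f"/backup/qmaster-spool-{time.strftime('%Y%m%d')}.tar.gz"
> 
>     # Archive the flat-file spool tree; stop the qmaster first,
>     # or the snapshot may be inconsistent.
>     subprocess.run(["tar", "czf", archive, spool], check=True)
> 
> BDB spooling needs db_hotbackup (or inst_sge -bup) instead, since
> copying live BDB files can leave you with an unrecoverable database.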
> 
> Rayson
> 
> 
> 
> On 2/11/09, reuti <reuti at staff.uni-marburg.de> wrote:
> > > I wanted to hear about your experiences with the internal database.
> > > What would you recommend for a cluster with 700+ exec hosts:
> > > BDB or classic text files (no BDB server in any case)?
> >
> > With this number of nodes I would go for BDB for sure. There is a
> > discussion about it at:
> >
> > http://gridengine.info/2008/01/24/why-i-love-classic-spooling
> >
> > -- Reuti
> >
> >
> > >
> > > Right now I have two clusters that will be merged under a single
> > > master soon, and since I'm also upgrading to 6.2, this is a good
> > > time to rethink the DB strategy.
> > >
> > > Currently I have flat-file BDB on both clusters (NFSv4-mounted so
> > > I can use shadow servers).
> > >
> > > Both clusters run over 5K jobs per day.
> > >
> > > Would using plain text have any advantages?
> > >
> > >
> > > Yuval Adar, Marvell Israel - Senior UNIX System Administrator
> > > Park Azorim, Kyriat Arie
> > > Petah Tikva, 49527, Israel
> > > Email: adary at marvell.com
> > > Office: +972.3.9703958 - OnNet: 705.3958
> > > Fax: +972.3.9704999
> > > Mobile: +972.54.2493958
> > > Web site: http://www.marvell.com
> > >
> >

------------------------------------------------------
http://gridengine.sunsource.net/ds/viewMessage.do?dsForumId=38&dsMessageId=103730

To unsubscribe from this discussion, e-mail: [users-unsubscribe at gridengine.sunsource.net].


