[GE users] Functional sharing

olson olson at mcs.anl.gov
Fri Mar 20 22:47:00 GMT 2009

Aha, I may have solved my own problem. I had set
max_functional_jobs_to_schedule to 2000 during one of my earlier
fiddling-with-scheduling episodes, and my guess from the name of the
parameter is that this was what allowed the dilution to happen. Reducing
it back to the default of 200 appears to be giving the jobs from the
large batch an influence again.
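To see why the parameter matters, here is a minimal sketch of the dilution effect. This is an illustrative model, not SGE's actual ticket algorithm: it assumes a project's functional tickets are split evenly across the pending jobs the scheduler considers, with max_functional_jobs_to_schedule capping how many jobs that is. The ticket and job counts are made up for illustration.

```python
def per_job_tickets(project_tickets, pending_jobs, max_jobs_to_schedule):
    """Approximate tickets each pending job receives when the project's
    functional tickets are spread over the jobs the scheduler considers."""
    considered = min(pending_jobs, max_jobs_to_schedule)
    return project_tickets / considered

# Hypothetical project with 1000 functional tickets and 2000 pending jobs.
# With the cap raised to 2000, every job is considered and the share dilutes:
print(per_job_tickets(1000, 2000, 2000))  # 0.5 tickets per job
# With the default cap of 200, each considered job keeps a meaningful share:
print(per_job_tickets(1000, 2000, 200))   # 5.0 tickets per job
```

With the per-job ticket count diluted tenfold, the project's jobs can fall below everyone else's and effectively starve, which matches the behavior described below.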


On Mar 20, 2009, at 5:16 PM, olson wrote:

> We've been running our cluster with functional sharing turned on to
> implement fair sharing between two main applications and ordinary user
> jobs, and it has been working pretty well.
> However, a project has submitted a large number (2000 or so) of jobs
> into the queue, and this appears to have diluted the project's share of
> the tickets to such an extent that it is being starved from executing
> at all.
> How do we work around this problem? The large submission is entirely
> correct and expected, and the desired outcome is that the project's
> fraction of the cluster will work its way through the jobs.
> Thanks,
> --bob
> ------------------------------------------------------
> http://gridengine.sunsource.net/ds/viewMessage.do?dsForumId=38&dsMessageId=138037
> To unsubscribe from this discussion, e-mail: [users-unsubscribe at gridengine.sunsource.net 
> ].


