[GE users] jobs killed - job ... exceeds job hard limit "h_vmem" of queue ...

vinc17 vincent-sge at vinc17.org
Wed Aug 12 15:30:02 BST 2009


On 2009-08-12 15:01:17 +0200, reuti wrote:
> Am 12.08.2009 um 02:56 schrieb vinc17:
> > Jobs are often killed with an error like:
> >
> >   job 15476 exceeds job hard limit "h_vmem" of queue
> >   "cas at volla.lip.ens-lyon.fr" (540127232.00000 > limit: 
> > 536870912.00000)
> is it intended that it uses just a little bit more, as you request  
> 540 MB in Maple?

I'm not sure I understand your question. I don't request anything
in Maple. Every time the job is killed for this reason, Maple has
only just been started, so it is using very little memory (unless
there's a bug I'm not aware of, but the low memory usage of my Perl
script and of Maple seems to be confirmed by the "ps" output). In
other words, the value 540127232.00000 is IMHO incorrect. Sometimes
the value is much higher, e.g.

  08/11/2009 12:16:37|execd|volla|W|job 15496 exceeds job hard limit "h_vmem" of queue "cas at volla.lip.ens-lyon.fr" (732340224.00000 > limit:536870912.00000) - sending SIGKILL

What I don't understand is how execd finds these huge values such as
540127232.00000 or 732340224.00000.

Note: this problem occurs *only* when my Perl script starts Maple,
before the script has had time to ask Maple for any computation.


Vincent Lefèvre <vincent at vinc17.org> - Web: <http://www.vinc17.org/>
100% accessible validated (X)HTML - Blog: <http://www.vinc17.org/blog/>
Work: CR INRIA - computer arithmetic / Arenaire project (LIP, ENS-Lyon)

