[GE users] GAMESS Howto

Wheeler, Dr M.D. mdw10 at leicester.ac.uk
Mon Jan 17 11:31:48 GMT 2005



Hi Again,

I am about to start getting GAMESS up and running; however, I have a quick couple of questions about setting up an appropriate PE in SGE.

The PE I have for MOLPRO is a copy of the mpich PE that comes with my distribution of SGE, and is as follows:

pe_name           molpro
queue_list        all
slots             999
user_lists        NONE
xuser_lists       NONE
start_proc_args   /home/software/scripts/startmolpro.sh -catch_rsh $pe_hostfile
stop_proc_args    /home/software/scripts/stopmolpro.sh
allocation_rule   $fill_up
control_slaves    TRUE
job_is_first_task FALSE

I have a couple of questions that I can't seem to find answers to: the last three lines differ from your GAMESS PE. Could you explain what they do, and which settings are better for MOLPRO/GAMESS?

Thanks,

Martyn

----------------------------------------------
Dr. Martyn D. Wheeler
Department of Chemistry
University of Leicester
University Road
Leicester, LE1 7RH, UK.
Tel (office): +44 (0)116 252 3985
Tel (lab):    +44 (0)116 252 2115
Fax:          +44 (0)116 252 3789
Email:        martyn.wheeler at le.ac.uk
http://www.le.ac.uk/chemistry/staff/mdw10.html
 

> -----Original Message-----
> From: Reuti [mailto:reuti at staff.uni-marburg.de]
> Sent: 22 December 2004 18:17
> To: users at gridengine.sunsource.net
> Subject: [GE users] GAMESS Howto
> 
> 
> Hi all,
> 
> this is not about games, although people not in touch with chemistry
> might easily think so.
> 
> Since there have already been two requests on the GAMESS mailing list,
> maybe I should post some notes here on getting it to work with SGE. The
> prerequisites are: GAMESS already compiled, with proper settings in
> compddi for the size of the cluster; SHMMAX raised to match the
> installed memory; and a working rsh/ssh connection. The platform is
> Linux on dual nodes.
> 
> The next thing is to create a PE which generates the necessary scratch
> directories on each node (they must have the same name on all nodes, so
> the directories SGE creates per node for a parallel job can't be used).
> I also found it more convenient to create a list of nodes in the style
> that ddikick.x expects than to create a PBS_NODEFILE-style list and use
> the lengthy rungms script. With a proper list of nodes, you can trim
> rungms down considerably, to just setting some variables and starting
> the program.
> 
> So, create a PE with these entries (still SGE 5.3, but easy to port to 6.0):
> 
> $ qconf -sp gamess
> pe_name           gamess
> queue_list        para00 para01 para02 para03 para04 para05 para06 para07 para08 para09
> slots             20
> user_lists        NONE
> xuser_lists       NONE
> start_proc_args   /usr/sge/gamess/startgamess.sh -catch_rsh -unique $pe_hostfile
> stop_proc_args    /usr/sge/gamess/stopgamess.sh $pe_hostfile
> allocation_rule   $round_robin
> control_slaves    TRUE
> job_is_first_task TRUE
> 
> You will find the two necessary scripts attached to this posting, along
> with the rsh wrapper (I commented out the echo commands at the end).
> 
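> Roughly, the start script boils down to something like this (a
> simplified sketch - the attached script is the real thing; the scratch
> path and the host list file name here are just examples):
>
>    #!/bin/sh
>    # called as: startgamess.sh -catch_rsh -unique $pe_hostfile
>    PE_HOSTFILE="$3"                  # last argument: the PE hostfile
>    SCRATCH=/scratch/gamess.$JOB_ID   # same path on every node
>    HOSTLIST=""
>    # each $pe_hostfile line: "<host> <slots> <queue> <processor range>"
>    while read host slots queue rest; do
>        HOSTLIST="$HOSTLIST $host:cpus=$slots"   # the style ddikick.x wants
>        rsh $host mkdir -p $SCRATCH              # or ssh, as configured
>    done < "$PE_HOSTFILE"
>    echo $HOSTLIST > $TMPDIR/gamess.hosts        # read later by the job
>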
> The next thing to do is to create a submit script which can be used by
> all users and handles all the necessary setup. You can find this in
> subgms (we use similar scripts for Gaussian and others). The software
> is located in /opt/chemsoft, in appropriate subdirectories so that
> different versions of the same program can coexist, e.g.
> Gamess_22_NOV_2004_R1. The submit script first creates a script
> specific to the intended job, saves it in the user's ~/cmd, and submits
> it from there. This generated script is really short and sets just
> three variables to ease the sourcing of the subgms_export file (which
> sets the necessary environment variables for GAMESS). Then you only
> have to (optionally) unset these three variables again, copy the input
> file and call the program; a sketch is below.
> 
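> As a rough illustration, the generated script can be as short as this
> (the variable names are invented for the example - the real ones are in
> subgms and subgms_export):
>
>    #!/bin/sh
>    GMSVER=Gamess_22_NOV_2004_R1    # version subdirectory in /opt/chemsoft
>    GMSJOB=mymolecule               # basename of the input file
>    GMSNCPU=$NSLOTS                 # slots granted by SGE
>    . ~soft/scripts/subgms_export   # builds the full GAMESS environment
>    unset GMSVER GMSJOB GMSNCPU     # optional, once the environment is set
>    cp ~/mymolecule.inp $TMPDIR     # copy the input file
>    # ...then start GAMESS via ddikick.x with the host list from the PE
>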
> All the scripts meant for use by every user are stored in the home
> directory of the special user "soft", in the subdirectory "scripts";
> subgms_export therefore lives there too. Since GAMESS needs a
> persistent directory for some important files, this is created in the
> user's home directory and named ~/scr.
> 
> One other thing: the (nearly) empty files from stdout and stderr can be
> deleted, since the real output is written to a file created by the
> script. This is also handled in the stopgamess.sh script (I do a
> similar thing in an epilog for the serial jobs, since these two files
> are usually empty after the job anyway); see the sketch below.
> 
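> The core of that cleanup is essentially one test per file, e.g. (a
> sketch, assuming your SGE version exports the paths of both files to
> the epilog):
>
>    # remove the job's stdout/stderr files if nothing was written to them
>    [ -s "$SGE_STDOUT_PATH" ] || rm -f "$SGE_STDOUT_PATH"
>    [ -s "$SGE_STDERR_PATH" ] || rm -f "$SGE_STDERR_PATH"
>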
> You may wonder about some of the resource requests set for the job:
> 
> - virtual_free is defined as a consumable and set, per machine, to the
> installed RAM minus 100 MB.
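>
> For example, a dual node with 2 GB of RAM would get (illustrative
> values):
>
>    # host configuration, qconf -me node01:
>    complex_values  virtual_free=1948M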
> 
> The others are part of a defined complex:
> 
> $ qconf -sc custom_host
> #name          shortcut  type    value  relop  requestable  consumable  default
> #------------------------------------------------------------------------------
> cpu_type       ct        STRING  none   ==     YES          NO          none
> cpu_usage      cu        INT     0      <=     YES          YES         3
> scratch_disk   sd        MEMORY  0      <=     YES          YES         2G
> 
> - scratch_disk just avoids filling up the local disk on the nodes when
> more than one job is running; on these machines it's limited to 16 GB
> anyway. With larger hard disks you may be able to disregard it
> altogether. Otherwise set it in the node definition to the size of the
> built-in disk.
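>
> For example (illustrative values):
>
>    # host configuration: offer the whole built-in scratch disk
>    complex_values  scratch_disk=16G
>
>    # and per job at submit time:
>    $ qsub -l scratch_disk=4G ...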
> 
> - cpu_usage is an attempt to use otherwise idle CPU time alongside a
> parallel job, since (with the type of software we use) the parallel
> jobs are not really computing on all nodes all of the time. So in the
> node definition we set cpu_usage to 6 (it could be 3 for single-CPU
> nodes); a serial job then costs 3 points, a parallel task 2 points and
> a (as I call it) background job 1 point - but always a maximum of two
> of each type (enforced by two slots in each type of queue). The serial
> and parallel jobs get a nice value of 0 in the queue definition, the
> queue used for background jobs one of 19. Although a user could bypass
> these settings, I'm in the lucky position that my users are fair and
> don't do so.
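>
> In other words, with complex_values cpu_usage=6 on a dual node, the
> bookkeeping looks like this (our convention; "cu" is the shortcut
> defined in the complex above):
>
>    serial job       -l cu=3   # also the default from the complex
>    parallel task    -l cu=2
>    background job   -l cu=1   # its queue runs with nice 19
>
> so one of each (3 + 2 + 1 = 6) exactly fills a node.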
> 
> Although GAMESS uses shared memory, we haven't had any problems with
> leftover segments so far.
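>
> Should stale segments ever pile up, they can be listed and removed by
> hand on the node in question:
>
>    $ ipcs -m          # list shared memory segments and their owners
>    $ ipcrm shm <id>   # remove a leftover segment by its id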
> 
> 
> Cheers - Reuti
> 
