<!-- keep this as a security measure:
#uncomment if the subject should only be modifiable by the listed groups
#   * Set ALLOWTOPICCHANGE = Main.TWikiAdminGroup,Main.CMSAdminGroup
#   * Set ALLOWTOPICRENAME = Main.TWikiAdminGroup,Main.CMSAdminGroup
#uncomment this if you want the page to be viewable only by the listed groups
#   * Set ALLOWTOPICVIEW = Main.TWikiAdminGroup,Main.CMSAdminGroup
-->
---+!! Sun Grid Engine on CMS Tier3

%REVINFO{"Revision $rev, $date $time"}%

---

%TOC%

---

This document describes the planning, installation, and basic configuration of SGE on the PSI Tier3 cluster. Advanced configuration and policies will be described in a separate document.

---+ Useful links

   * [[http://sourceforge.net/projects/gridscheduler/][Open Grid Scheduler]] - the continued open-source development based on the last free SGE release
   * [[http://wiki.gridengine.info/wiki/index.php/Main_Page][Grid engine wiki]]
   * [[http://www.oracle.com/technetwork/oem/grid-engine-166852.html][Oracle Grid Engine project home page]]
   * [[http://wwwx.cs.unc.edu/Research/bass/index.php/Administrators:Scheduling][Scheduling explanations from cs.unc.edu]]
   * [[https://twiki.cern.ch/twiki/bin/view/LCG/GenericInstallGuide320#The_SGE_batch_system][SGE for glite 3.2 grid environments]]

---+ Centrally starting/stopping the service

On the admin master node:

Starting:
<pre>
ssh t3ce01 /etc/init.d/sgemaster start
cexec wn: /etc/init.d/sgeexecd start
</pre>

Stopping:
<pre>
cexec wn: /etc/init.d/sgeexecd stop
ssh t3ce01 /etc/init.d/sgemaster stop
</pre>

---
---+ SGE Installation
---
---++ Cluster Planning

---+++ Hosts
<a name="cluster_table">
<pre>
========================================
host     execute  submit  admin  master
----------------------------------------
t3ce01   N        Y       Y      <b>Y</b>
t3ui01   N        Y       Y
t3wn01   Y        Y       N
t3wn02   Y        Y       N
t3wn03   Y        Y       N
t3wn04   Y        Y       N
t3wn05   Y        Y       N
t3wn06   Y        Y       N
t3wn07   Y        Y       N
t3wn08   Y        Y       N
========================================
</pre>

---+++ Environment
<pre>
SGE_ROOT=/swshare/sge/n1ge6
SGE_CELL=tier3
</pre>

---+++ SGE Admin Account
The SGE administrator account is *sgeadmin*.

---+++ SGE Services
<pre>
# to be added to /etc/services
sge_qmaster     536/tcp    # SGE batch system master
sge_execd       537/tcp    # SGE batch system execd
</pre>

---
---++ Prerequisites
   * =SGE_ROOT= is NFS-mounted read-write on all nodes, writable also by root.
   * The root user can do passwordless SSH between the nodes (a requirement to be relaxed later).
   * The *sgeadmin* HOME is NFS-mounted read-write on all nodes.
   * The *sgeadmin* user can do passwordless SSH between any two nodes.
   * The SGE services are defined in =/etc/services= on all nodes.

Here _all nodes_ means all the nodes listed in <a href="#cluster_table">the cluster table</a>. A quick mechanical check of these points is sketched below.
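The prerequisites can be verified from the admin master node before proceeding. A minimal sketch, assuming the host names from the cluster table and the paths from the environment section (the loop itself is illustrative, not part of the SGE installation):
<pre>
# run as root on the admin master node
for h in t3ce01 t3ui01 t3wn01 t3wn02 t3wn03 t3wn04 t3wn05 t3wn06 t3wn07 t3wn08; do
    # passwordless SSH must work without any prompt
    ssh -o BatchMode=yes $h true || echo "WARNING: no passwordless SSH to $h"
    # the shared software area must be mounted read-write
    ssh $h "touch /swshare/sge/.rw_test && rm /swshare/sge/.rw_test" \
        || echo "WARNING: /swshare/sge not writable on $h"
    # the SGE service ports must be defined
    ssh $h "grep -q sge_qmaster /etc/services && grep -q sge_execd /etc/services" \
        || echo "WARNING: SGE services missing from /etc/services on $h"
done
</pre>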
---
---++ Download and Install the SGE Software
<pre>
mkdir /swshare/sge/download
cd /swshare/sge/download
rsync -av -e ssh markushin@pc4731:/scratch/sge /swshare/sge/download/

<b>export SGE_ROOT=/swshare/sge/n1ge6</b>
<b>export SGE_CELL=tier3</b>
<b>mkdir $SGE_ROOT</b>
<b>cd $SGE_ROOT</b>
pwd
/swshare/sge/n1ge6
<b>tar xvzf /swshare/sge/download/sge/ge-6.1u4-common.tar.gz</b>
<b>tar xvzf /swshare/sge/download/sge/ge-6.1u4-bin-lx24-amd64.tar.gz</b>
</pre>

<span class=TFBB> tree -d /swshare/sge/n1ge6 </span>
%TWISTY{showlink=" Show Details " hidelink=" Hide Details " showimgleft="%ICONURLPATH{viewtopic}%" hideimgleft="%ICONURLPATH{toggleclose-small}%"}%
<pre>
/swshare/sge/n1ge6
|-- 3rd_party
|   `-- qmon
|-- bin
|   `-- lx24-amd64
|-- catman
|   |-- a_man
|   |   |-- cat5
|   |   `-- cat8
|   |-- cat
|   |   |-- cat1
|   |   |-- cat3
|   |   |-- cat5
|   |   `-- cat8
|   |-- p_man
|   |   `-- cat3
|   `-- u_man
|       `-- cat1
|-- ckpt
|-- doc
|   |-- bdbdocs
|   |   `-- utility
|   `-- javadocs
|       |-- com
|       |   `-- sun
|       |       `-- grid
|       |           `-- drmaa
|       |-- org
|       |   `-- ggf
|       |       `-- drmaa
|       `-- resources
|-- dtrace
|-- examples
|   |-- drmaa
|   |-- jobs
|   `-- jobsbin
|       `-- lx24-amd64
|-- include
|-- lib
|   `-- lx24-amd64
|-- man
|   |-- man1
|   |-- man3
|   |-- man5
|   `-- man8
|-- mpi
|   |-- SunHPCT5
|   `-- myrinet
|-- pvm
|   `-- src
|-- qmon
|   `-- PIXMAPS
|       `-- big
|-- util
|   |-- install_modules
|   |-- rctemplates
|   |-- resources
|   |   |-- calendars
|   |   |-- centry
|   |   |-- loadsensors
|   |   |-- pe
|   |   |-- schemas
|   |   |   |-- qhost
|   |   |   |-- qquota
|   |   |   `-- qstat
|   |   |-- starter_methods
|   |   `-- usersets
|   `-- sgeCA
`-- utilbin
    `-- lx24-amd64

69 directories
</pre>
%ENDTWISTY%

---
<a name="sge_conf">
---++ Edit the SGE Configuration File
See the comments in *$SGE_ROOT/util/install_modules/tier3.conf* for details.

<span class=TFBB> $SGE_ROOT/util/install_modules/tier3.conf </span>
%TWISTY{showlink=" Show Details " hidelink=" Hide Details " showimgleft="%ICONURLPATH{viewtopic}%" hideimgleft="%ICONURLPATH{toggleclose-small}%"}%
<pre>
grep -v ^# $SGE_ROOT/util/install_modules/tier3.conf | sed '/^$/d'

SGE_ROOT="/swshare/sge/n1ge6"
SGE_QMASTER_PORT="536"
SGE_EXECD_PORT="537"
CELL_NAME="tier3"
ADMIN_USER="sgeadmin"
QMASTER_SPOOL_DIR="/var/spool/sge/qmaster"
EXECD_SPOOL_DIR="/var/spool/sge"
GID_RANGE="50700-50800"
SPOOLING_METHOD="classic"
DB_SPOOLING_SERVER="none"
DB_SPOOLING_DIR="/var/spool/sge/spooldb"
PAR_EXECD_INST_COUNT="8"
ADMIN_HOST_LIST="t3admin01 t3ce01 t3ui01"
SUBMIT_HOST_LIST="t3ce01 t3ui01 t3wn01 t3wn02 t3wn03 t3wn04 t3wn05 t3wn06 t3wn07 t3wn08"
EXEC_HOST_LIST="t3wn01 t3wn02 t3wn03 t3wn04 t3wn05 t3wn06 t3wn07 t3wn08"
EXECD_SPOOL_DIR_LOCAL=""
HOSTNAME_RESOLVING="true"
SHELL_NAME="ssh"
COPY_COMMAND="scp"
DEFAULT_DOMAIN="none"
ADMIN_MAIL="none"
ADD_TO_RC="false"
SET_FILE_PERMS="true"
RESCHEDULE_JOBS="wait"
SCHEDD_CONF="1"
SHADOW_HOST=""
EXEC_HOST_LIST_RM=""
REMOVE_RC="true"
WINDOWS_SUPPORT="false"
WIN_ADMIN_NAME="Administrator"
WIN_DOMAIN_ACCESS="false"
CSP_RECREATE="true"
CSP_COPY_CERTS="false"
CSP_COUNTRY_CODE="CH"
CSP_STATE="Switzerland"
CSP_LOCATION="Building"
CSP_ORGA="PSI"
CSP_ORGA_UNIT="AIT"
CSP_MAIL_ADDRESS="derek.feichtinger@psi.ch"
</pre>
%ENDTWISTY%

---
---++ Install the SGE Master
Login as root to the master host. The SGE_ROOT and QMASTER_SPOOL_DIR directories must be writable by root (see <a href="#sge_conf">$SGE_ROOT/util/install_modules/tier3.conf</a>); a quick check is sketched below.
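A minimal writability check, using the values from tier3.conf above (the test file name is arbitrary; pre-creating the spool directory with =mkdir -p= is harmless since the installer expects it to be creatable anyway):
<pre>
# on the master host, as root
export SGE_ROOT=/swshare/sge/n1ge6
mkdir -p /var/spool/sge/qmaster
touch $SGE_ROOT/.rw_test /var/spool/sge/qmaster/.rw_test \
    && rm $SGE_ROOT/.rw_test /var/spool/sge/qmaster/.rw_test \
    || echo "WARNING: SGE_ROOT or QMASTER_SPOOL_DIR is not writable by root"
</pre>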
Run the following commands:
<pre>
hostname
t3ce01
whoami
root
<b>export SGE_ROOT=/swshare/sge/n1ge6</b>
<b>export SGE_CELL=tier3</b>
<b>cd $SGE_ROOT</b>
<b>./inst_sge -m -auto $SGE_ROOT/util/install_modules/tier3.conf</b>
...
Install log can be found in: \
/swshare/sge/n1ge6/tier3/common/install_logs/qmaster_install_t3ce01_2008-08-11_17:47:44.log
</pre>

<span class=TFBB> /swshare/sge/n1ge6/tier3/common/install_logs/qmaster_install_t3ce01_2008-08-11_17:47:44.log </span>
%TWISTY{showlink=" Show File " hidelink=" Hide File " showimgleft="%ICONURLPATH{viewtopic}%" hideimgleft="%ICONURLPATH{toggleclose-small}%"}%
<pre>
Starting qmaster installation!
Installing Grid Engine as user >sgeadmin<
Your $SGE_ROOT directory: /swshare/sge/n1ge6
Using SGE_QMASTER_PORT >536<.
Using SGE_EXECD_PORT >537<.
Using >tier3< as CELL_NAME.
Using >/var/spool/sge/qmaster< as QMASTER_SPOOL_DIR.
Verifying and setting file permissions and owner in >3rd_party<
Verifying and setting file permissions and owner in >bin<
Verifying and setting file permissions and owner in >ckpt<
Verifying and setting file permissions and owner in >examples<
Verifying and setting file permissions and owner in >inst_sge<
Verifying and setting file permissions and owner in >install_execd<
Verifying and setting file permissions and owner in >install_qmaster<
Verifying and setting file permissions and owner in >lib<
Verifying and setting file permissions and owner in >mpi<
Verifying and setting file permissions and owner in >pvm<
Verifying and setting file permissions and owner in >qmon<
Verifying and setting file permissions and owner in >util<
Verifying and setting file permissions and owner in >utilbin<
Verifying and setting file permissions and owner in >catman<
Verifying and setting file permissions and owner in >doc<
Verifying and setting file permissions and owner in >include<
Verifying and setting file permissions and owner in >man<
Your file permissions were set
Using >true< as IGNORE_FQDN_DEFAULT.
If it's >true<, the domain name will be ignored.
Making directories
Setting spooling method to dynamic
Dumping bootstrapping information
Initializing spooling database
Using >50700-50800< as gid range.
Using >/var/spool/sge< as EXECD_SPOOL_DIR.
Using >none< as ADMIN_MAIL.
Reading in complex attributes.
Adding default parallel environments (PE)
Reading in parallel environments:
        PE "make.sge_pqs_api".
Reading in usersets:
        Userset "deadlineusers".
        Userset "defaultdepartment".
starting sge_qmaster
starting sge_schedd
starting up GE 6.1u4 (lx24-amd64)
Adding ADMIN_HOST t3admin01
t3admin01 added to administrative host list
Adding ADMIN_HOST t3ce01
adminhost "t3ce01" already exists
Adding ADMIN_HOST t3ui01
t3ui01 added to administrative host list
Adding SUBMIT_HOST t3ce01
t3ce01 added to submit host list
Adding SUBMIT_HOST t3ui01
t3ui01 added to submit host list
Adding SUBMIT_HOST t3wn01
t3wn01 added to submit host list
Adding SUBMIT_HOST t3wn02
t3wn02 added to submit host list
Adding SUBMIT_HOST t3wn03
t3wn03 added to submit host list
Adding SUBMIT_HOST t3wn04
t3wn04 added to submit host list
Adding SUBMIT_HOST t3wn05
t3wn05 added to submit host list
Adding SUBMIT_HOST t3wn06
t3wn06 added to submit host list
Adding SUBMIT_HOST t3wn07
t3wn07 added to submit host list
Adding SUBMIT_HOST t3wn08
t3wn08 added to submit host list
Creating the default <all.q> queue and <allhosts> hostgroup
root@t3ce01 added "@allhosts" to host group list
root@t3ce01 added "all.q" to cluster queue list
No CSP system installed!
No CSP system installed!
Setting scheduler configuration to >Normal< setting!
changed scheduler configuration
sge_qmaster successfully installed!
</pre>
%ENDTWISTY%

Test the master:
<pre>
<b>ps auxwf | grep [s]ge</b>
sgeadmin  1322  0.0  0.0  77744  3248 ?  Sl  17:47  0:00 /swshare/sge/n1ge6/bin/lx24-amd64/sge_qmaster
sgeadmin  1341  0.0  0.0  66672  2228 ?  Sl  17:47  0:00 /swshare/sge/n1ge6/bin/lx24-amd64/sge_schedd

<b>export PATH=$SGE_ROOT/bin/lx24-amd64:$PATH</b>
<b>qconf -sconf</b>
</pre>

%TWISTY{showlink=" Show Details " hidelink=" Hide Details " showimgleft="%ICONURLPATH{viewtopic}%" hideimgleft="%ICONURLPATH{toggleclose-small}%"}%
<pre>
[root@t3ce01 n1ge6]# qconf -sconf
global:
execd_spool_dir              /var/spool/sge
mailer                       /bin/mail
xterm                        /usr/bin/X11/xterm
load_sensor                  none
prolog                       none
epilog                       none
shell_start_mode             posix_compliant
login_shells                 sh,ksh,csh,tcsh
min_uid                      0
min_gid                      0
user_lists                   none
xuser_lists                  none
projects                     none
xprojects                    none
enforce_project              false
enforce_user                 auto
load_report_time             00:00:40
max_unheard                  00:05:00
reschedule_unknown           00:00:00
loglevel                     log_warning
administrator_mail           none
set_token_cmd                none
pag_cmd                      none
token_extend_time            none
shepherd_cmd                 none
qmaster_params               none
execd_params                 none
reporting_params             accounting=true reporting=false \
                             flush_time=00:00:15 joblog=false sharelog=00:00:00
finished_jobs                100
gid_range                    50700-50800
qlogin_command               telnet
qlogin_daemon                /usr/sbin/in.telnetd
rlogin_daemon                /usr/sbin/in.rlogind
max_aj_instances             2000
max_aj_tasks                 75000
max_u_jobs                   0
max_jobs                     0
auto_user_oticket            0
auto_user_fshare             0
auto_user_default_project    none
auto_user_delete_time        86400
delegated_file_staging       false
reprioritize                 0
</pre>
%ENDTWISTY%

---
---++ Install the SGE Execute Hosts

NOTE: to run the install script below locally on the execute node, you first need to define the node as a submit and an admin host. This is done by running the =qconf -ah hostname= (add admin host) and =qconf -as hostname= (add submit host) commands from an admin node. Otherwise, the new node will not be allowed to contact the master's services.

On each execute host (where SGE_ROOT and EXECD_SPOOL_DIR must be writable by root; see <a href="#sge_conf">$SGE_ROOT/util/install_modules/tier3.conf</a>) do the following:
<pre>
hostname
t3wn01
whoami
root
<b>export SGE_ROOT=/swshare/sge/n1ge6</b>
<b>export SGE_CELL=tier3</b>
<b>mkdir /var/spool/sge/t3wn01</b>
<b>chown sgeadmin.root /var/spool/sge/t3wn01</b>
<b>cd $SGE_ROOT</b>
<b>./inst_sge -x -noremote -auto $SGE_ROOT/util/install_modules/tier3.conf</b>
...
Install log can be found in: \
/swshare/sge/n1ge6/tier3/common/install_logs/execd_install_t3wn01_2008-08-11_21:04:43.lo
</pre>

Note: If the script fails uncleanly, you can find the logs in /tmp/install.[PID].

Add the *sgeexecd* service manually:
<pre>
<b>cp -a $SGE_ROOT/$SGE_CELL/common/sgeexecd /etc/init.d/</b>
<b>chkconfig --add sgeexecd</b>
<b>chkconfig --list sgeexecd</b>
sgeexecd        0:off   1:off   2:off   3:on    4:off   5:on    6:off
</pre>

Check the SGE services on this host - there should be a *sge_execd* process running as *sgeadmin* on every execute host, e.g.:
<pre>
<b>ps auxwf | grep [s]ge</b>
sgeadmin 25309  0.0  0.0  56108  1624 ?  S  21:30  0:00 /swshare/sge/n1ge6/bin/lx24-amd64/sge_execd
</pre>

Test some SGE commands:
<pre>
export PATH=$SGE_ROOT/bin/lx24-amd64:$PATH
<b>qconf -sel</b>
t3wn01
<b>qconf -sql</b>
all.q
</pre>

<span class=TFBB> ls -lA /var/spool/sge/t3wn01 </span>
%TWISTY{showlink=" Show Details " hidelink=" Hide Details " showimgleft="%ICONURLPATH{viewtopic}%" hideimgleft="%ICONURLPATH{toggleclose-small}%"}%
<pre>
drwxr-xr-x  2 sgeadmin sgeadmin 4096 Aug 11 21:30 active_jobs
-rw-r--r--  1 sgeadmin sgeadmin    6 Aug 11 21:30 execd.pid
drwxr-xr-x  2 sgeadmin sgeadmin 4096 Aug 11 21:30 jobs
drwxr-xr-x  2 sgeadmin sgeadmin 4096 Aug 11 21:30 job_scripts
-rw-r--r--  1 sgeadmin sgeadmin   69 Aug 11 21:30 messages
</pre>
%ENDTWISTY%

---++ How to remove SGE Execute Hosts from the configuration

%N% Remove the host from the relevant queues:
<pre>
qconf -mq all.q
</pre>

Delete the host from its host group (e.g. the "allhosts" group):
<pre>
qconf -mhgrp @allhosts
</pre>

Remove the host from the exec host list (and possibly also from the admin and submit host lists):
<pre>
qconf -de t3vm02
qconf -dh t3vm02
qconf -ds t3vm02
</pre>

Remove it from the configuration list:
<pre>
$ qconf -dconf t3vm02
</pre>

---++ Files
The following files must be installed on all hosts:

<span class=TFBB> /etc/profile.d/sge.sh </span>
%TWISTY{showlink=" Show Details " hidelink=" Hide Details " showimgleft="%ICONURLPATH{viewtopic}%" hideimgleft="%ICONURLPATH{toggleclose-small}%"}%
<pre>
# SGE configuration for CMS Tier3
# 2008-08-11
export SGE_ROOT=/swshare/sge/n1ge6
export SGE_CELL=tier3
export PATH=$SGE_ROOT/bin/lx24-amd64:$PATH
export MANPATH=$MANPATH:$SGE_ROOT/man:
</pre>
%ENDTWISTY%

---++ Install the SGE Submit Hosts

A submit host just needs access to the shared SGE installation for the binaries. It then has to be registered as one of the allowed submit hosts by running the following command on the master:
<pre>
qconf -as [hostname]
</pre>

After that, you should be able to run commands like =qhost= from the new host, as in the sketch below.
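A minimal smoke test from a freshly registered submit host, assuming =/etc/profile.d/sge.sh= from the _Files_ section is installed there (the test job itself is just an illustration; SGE's =qsub= reads the job script from stdin when no script file is given):
<pre>
source /etc/profile.d/sge.sh
qhost                         # should list the execute hosts and their load
echo "hostname; date" | qsub -N submit_test -cwd
qstat                         # the job should show up as qw, then r
</pre>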
---
---+ SGE Postinstallation Configuration

---++ Configure SGE Queues

---+++ all.q configuration

Edit the queue configuration using the *qconf -mq all.q* command:
<pre>
ssh sgeadmin@t3ce01
qconf -sq all.q > ~/config/all.q.orig.conf
export EDITOR=vim
<b>qconf -mq all.q</b>
sgeadmin@t3ce01 modified "all.q" in cluster queue list
qconf -sq all.q > ~/config/all.q.conf

diff /shome/sgeadmin/config/all.q.{orig.,}conf
17c17
< shell                 /bin/csh
---
> shell                 <b>/bin/bash</b>
20c20
< shell_start_mode      posix_compliant
---
> shell_start_mode      <b>unix_behavior</b>
35,38c35,38
< s_rt                  INFINITY
< h_rt                  INFINITY
< s_cpu                 INFINITY
< h_cpu                 INFINITY
---
> s_rt                  48:00:00
> h_rt                  48:30:00
> s_cpu                 24:00:00
> h_cpu                 24:30:00
</pre>

<span class=TFBB> qconf -sq all.q </span>
%TWISTY{showlink=" Show Details " hidelink=" Hide Details " showimgleft="%ICONURLPATH{viewtopic}%" hideimgleft="%ICONURLPATH{toggleclose-small}%"}%
<pre>
qname                 all.q
hostlist              @allhosts
seq_no                0
load_thresholds       np_load_avg=1.75
suspend_thresholds    NONE
nsuspend              1
suspend_interval      00:05:00
priority              0
min_cpu_interval      00:05:00
processors            UNDEFINED
qtype                 BATCH INTERACTIVE
ckpt_list             NONE
pe_list               make
rerun                 FALSE
slots                 1,[t3wn01=8],[t3wn02=8],[t3wn03=8]
tmpdir                /tmp
shell                 /bin/bash
prolog                NONE
epilog                NONE
shell_start_mode      unix_behavior
starter_method        %RED%/shome/sgeadmin/t3scripts/starter_method.sh %ENDCOLOR%
suspend_method        NONE
resume_method         NONE
terminate_method      NONE
notify                00:00:60
owner_list            NONE
user_lists            NONE
xuser_lists           NONE
subordinate_list      NONE
complex_values        NONE
projects              NONE
xprojects             NONE
calendar              NONE
initial_state         default
s_rt                  48:00:00
h_rt                  48:30:00
s_cpu                 24:00:00
h_cpu                 24:30:00
s_fsize               INFINITY
h_fsize               INFINITY
s_data                INFINITY
h_data                INFINITY
s_stack               INFINITY
h_stack               INFINITY
s_core                INFINITY
h_core                INFINITY
s_rss                 INFINITY
h_rss                 INFINITY
s_vmem                INFINITY
h_vmem                INFINITY
</pre>
%ENDTWISTY%

---+++ Setting the queue's Grid/CMS Environment

%N% In order to set the correct Grid environment on the worker nodes, the queue's default starter method is overridden by a simple script:
<pre %FILESTYLE%>
#!/bin/bash
######### STARTER METHOD FOR SETTING USER'S ENVIRONMENT #####################
# settings for Grid credentials
if test x"$DBG" != x; then
   echo "STARTER METHOD SCRIPT: Setting grid environment"
fi

source /swshare/glite/external/etc/profile.d/grid-env.sh
if test $? -ne 0; then
   echo "WARNING: Failed to source grid environment" >&2
fi

#source $VO_CMS_SW_DIR/cmsset_default.sh
#if test $? -ne 0; then
#   echo "WARNING: Failed to source the CMS environment ($VO_CMS_SW_DIR/cmsset_default.sh)" >&2
#fi

# now we execute the real job script
exec "$@"
</pre>

---+++ Printing out accounting information at the end of a job

%N% One can use a queue's epilog setting to execute a script at the end of every job (use =qconf -mq=). E.g. this script will attach some accounting information to the job's stdout (file =/shome/sgeadmin/t3scripts/epilog.sh=):
<pre %FILESTYLE%>
echo "# JOB Resource USAGE for job $JOB_ID:"
echo -n "# "
/swshare/sge/n1ge6/bin/lx24-amd64/qstat -j "$JOB_ID" | grep -e '^usage.*cpu='
</pre>

---+++ Configuring the scheduling policy

%N% The original configuration did not implement any real fair-share scheduling. After reading up a bit on how to implement share tree policies and seeing that this needs a lot of configuration and additional maintenance of user lists (I think), I decided to go for an easier [[https://lists.sdsc.edu/pipermail/npaci-rocks-discussion/2008-July/031927.html][functional policy as mentioned on a mailing list]].
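Before changing anything, the currently active settings can be inspected; a quick look, assuming the PATH from =/etc/profile.d/sge.sh=:
<pre>
# scheduler configuration: functional tickets and policy hierarchy
qconf -ssconf | grep -E 'weight_tickets|policy_hierarchy'
# global configuration: the auto-user parameters modified below
qconf -sconf | grep -E 'enforce_user|auto_user_fshare'
</pre>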
Modify the master configuration:
<pre>
# qconf -mconf
...
enforce_user                 auto
...
auto_user_fshare             100
...
</pre>

And then the scheduler configuration:
<pre>
# qconf -msconf
...
weight_tickets_functional         10000
...
</pre>

The rationale for doing this is described as follows by an expert (Chris @ gridengine.info):
<pre>
If you are only using the functional policy in a way described by that
article than ...
 - The number "10000" shown in that configuration suggestion is arbitrary
 - Any number higher than zero simply "turns on" the policy within the scheduler
 - The number of functional tickets you have does not matter all that much
 - The *ratio* of tickets you hold vs. tickets others hold matters very very much
 - No relation to halftime

In the simple functional setup described in that article the key is that
we (a) enable the functional policy by telling SGE there are 10000 tickets
in the system and (b) we automatically assign every user 100 functional
share tickets.

What makes the scheduling policy then become "fair" is the fact that all
users have the same number/ratio of functional share tickets (100). This
makes them all get treated equally by the scheduler.
</pre>

---++ Configure a parallel environment for SMP parallel jobs

%N% Show all existing parallel environments:
<pre>
qconf -spl
</pre>

Define a new parallel environment "smp":
<pre>
qconf -ap smp

pe_name           smp
slots             999
user_lists        NONE
xuser_lists       NONE
start_proc_args   /bin/true
stop_proc_args    /bin/true
allocation_rule   $pe_slots    # this forces all slots to be on the same host!
control_slaves    FALSE
job_is_first_task TRUE
urgency_slots     min
</pre>

Now, this environment must be added to all queues that should give access to it:
<pre>
qconf -mq all.q
...
pe_list               smp make
...
</pre>

Users can submit jobs by using the =-pe= flag together with the environment name and the number of requested slots:
<pre>
qsub -pe smp 4 myjob.sge
</pre>

---++ Limiting memory consumption on a per host basis

%N% The =h_vmem= complex property is the hard limit on job memory consumption. It is actually enforced by SGE: a job will be killed when it tries to allocate beyond this limit. In order to do correct bookkeeping for jobs already present on a node, it is necessary to declare this property to be a "consumable" property. Also, one should immediately assign a default value for jobs which do not explicitly declare the requirement. This can be done by editing the complex configuration:
<pre>
qconf -mc

#name       shortcut   type     relop  requestable  consumable  default  urgency
#----------------------------------------------------------------------------------------
...
h_vmem      h_vmem     MEMORY   <=     YES          YES         2.5g     0
...
</pre>

Now, one can assign explicit h_vmem limits to hosts using:
<pre>
qconf -me t3wn04

hostname              t3wn04.psi.ch
load_scaling          NONE
complex_values        h_vmem=15g
user_lists            NONE
xuser_lists           NONE
projects              NONE
xprojects             NONE
usage_scaling         NONE
report_variables      NONE
</pre>

A user can now declare the vmem requirement in the submit statement:
<pre>
qsub -l h_vmem=10g simple_env.sge
qsub -pe smp 4 -l h_vmem=2g simple_env.sge
</pre>

Note that the requirement is per requested slot, so in the latter case the total required vmem is 8GB! A submit-script version of the same request is sketched below.
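The same requests can also be embedded in the job script itself. A minimal sketch (the file name =smp_vmem.sge= is hypothetical; the =smp= PE and the consumable =h_vmem= are the ones defined above):
<pre %FILESTYLE%>
#!/bin/bash
### request 4 slots on a single host via the "smp" parallel environment
#$ -pe smp 4
### request 2 GB of vmem per slot, i.e. 4 x 2g = 8 GB total on the host
#$ -l h_vmem=2g
#$ -N smp_vmem
#$ -cwd

echo "Running a $NSLOTS-slot job on `hostname`"
# Put your multi-threaded application here
</pre>
Submit it with =qsub smp_vmem.sge=.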
---
---+ SGE Basic Tests

---++ Available Queues and Slots
<pre>
<b>which qstat</b>
/swshare/sge/n1ge6/bin/lx24-amd64/qstat

# Queues and slots (see "man qstat" for details):
<b>qstat -g c</b>
CLUSTER QUEUE                   CQLOAD   USED  AVAIL  TOTAL aoACDS  cdsuE
-------------------------------------------------------------------------------
all.q                             0.00      0     16     16      0      0

# Show the execute hosts (only the installed hosts are shown):
<b>qconf -sel</b>
t3wn01
t3wn02
t3wn03

# Show the list of queues:
<b>qconf -sql</b>
all.q
</pre>

Show the admin hosts (see "man qconf" for details) <br>
<span class=TFBB> qconf -sh </span>
%TWISTY{showlink=" Show Details " hidelink=" Hide Details " showimgleft="%ICONURLPATH{viewtopic}%" hideimgleft="%ICONURLPATH{toggleclose-small}%"}%
<pre>
t3admin01
t3ce01
t3ui01
t3wn01
t3wn02
t3wn03
t3wn04
t3wn05
t3wn06
t3wn07
t3wn08
</pre>
%ENDTWISTY%

Show the given execution host: <br>
<span class=TFBB> qconf -se t3wn01 </span>
%TWISTY{showlink=" Show Output " hidelink=" Hide Output " showimgleft="%ICONURLPATH{viewtopic}%" hideimgleft="%ICONURLPATH{toggleclose-small}%"}%
<pre>
hostname              t3wn01
load_scaling          NONE
complex_values        NONE
load_values           load_avg=0.000000,load_short=0.000000, \
                      load_medium=0.000000,load_long=0.000000,arch=lx24-amd64, \
                      num_proc=8,mem_free=15851.125000M, \
                      swap_free=1992.425781M,virtual_free=17843.550781M, \
                      mem_total=16033.703125M,swap_total=1992.425781M, \
                      virtual_total=18026.128906M,mem_used=182.578125M, \
                      swap_used=0.000000M,virtual_used=182.578125M, \
                      cpu=0.000000,np_load_avg=0.000000, \
                      np_load_short=0.000000,np_load_medium=0.000000, \
                      np_load_long=0.000000
processors            8
user_lists            NONE
xuser_lists           NONE
projects              NONE
xprojects             NONE
usage_scaling         NONE
report_variables      NONE
</pre>
%ENDTWISTY%

Show the given queue: <br>
<span class=TFBB> qconf -sq all.q </span>
%TWISTY{showlink=" Show Output " hidelink=" Hide Output " showimgleft="%ICONURLPATH{viewtopic}%" hideimgleft="%ICONURLPATH{toggleclose-small}%"}%
<pre>
qname                 all.q
hostlist              @allhosts
seq_no                0
load_thresholds       np_load_avg=1.75
suspend_thresholds    NONE
nsuspend              1
suspend_interval      00:05:00
priority              0
min_cpu_interval      00:05:00
processors            UNDEFINED
qtype                 BATCH INTERACTIVE
ckpt_list             NONE
pe_list               make
rerun                 FALSE
slots                 1,[t3wn01=8],[t3wn02=8],[t3wn03=8]
tmpdir                /tmp
shell                 /bin/bash
prolog                NONE
epilog                NONE
shell_start_mode      unix_behavior
starter_method        NONE
suspend_method        NONE
resume_method         NONE
terminate_method      NONE
notify                00:00:60
owner_list            NONE
user_lists            NONE
xuser_lists           NONE
subordinate_list      NONE
complex_values        NONE
projects              NONE
xprojects             NONE
calendar              NONE
initial_state         default
s_rt                  48:00:00
h_rt                  48:30:00
s_cpu                 24:00:00
h_cpu                 24:30:00
s_fsize               INFINITY
h_fsize               INFINITY
s_data                INFINITY
h_data                INFINITY
s_stack               INFINITY
h_stack               INFINITY
s_core                INFINITY
h_core                INFINITY
s_rss                 INFINITY
h_rss                 INFINITY
s_vmem                INFINITY
h_vmem                INFINITY
</pre>
%ENDTWISTY%

---
---++ Test Jobs

Use the *simple_env.sge* script to submit a simple single-CPU job:
%TWISTY{showlink=" Show Details " hidelink=" Hide Details " showimgleft="%ICONURLPATH{viewtopic}%" hideimgleft="%ICONURLPATH{toggleclose-small}%"}%
<pre>
<b>qsub ./simple_env.sge</b>
Your job 2 ("simple_env") has been submitted

qstat
job-ID  prior   name       user     state submit/start at     queue        slots ja-task-ID
-----------------------------------------------------------------------------------------------------------------
      2 0.00000 simple_env sgeadmin qw    08/13/2008 14:12:44                  1

qstat
job-ID  prior   name       user     state submit/start at     queue        slots ja-task-ID
-----------------------------------------------------------------------------------------------------------------
      2 0.55500 simple_env sgeadmin r     08/13/2008 14:12:49 all.q@t3wn01     1

ls -lA
...
-rw-r--r--  1 sgeadmin sgeadmin 2245 Aug 13 14:12 simple_env.o2
-rw-r--r--  1 sgeadmin sgeadmin    0 Aug 13 14:12 simple_env.e2
</pre>
%ENDTWISTY%

<span class=TFBB> simple_env.sge </span>
%TWISTY{showlink=" Show Details " hidelink=" Hide Details " showimgleft="%ICONURLPATH{viewtopic}%" hideimgleft="%ICONURLPATH{toggleclose-small}%"}%
<pre>
#!/bin/bash
# SGE single-CPU job example

### Job name
#$ -N simple_env

### Run time soft and hard limits hh:mm:ss
#$ -l s_rt=00:01:00,h_rt=00:01:30

### Change to the current working directory
#$ -cwd

MY_HOST=`hostname`
MY_DATE=`date`

echo "Running on $MY_HOST at $MY_DATE"
echo "Running environment:"
env
echo "================================================================"

# Put your single-CPU script here

################################################################################
</pre>
%ENDTWISTY%

Use the *simple_job_array.sge* script to test a job array:
%TWISTY{showlink=" Show Details " hidelink=" Hide Details " showimgleft="%ICONURLPATH{viewtopic}%" hideimgleft="%ICONURLPATH{toggleclose-small}%"}%
<pre>
qsub -q all.q -t 1-16 ./simple_job_array.sge
Your job-array 3.1-16:1 ("simple_job_array") has been submitted

qstat
job-ID  prior   name       user     state submit/start at     queue        slots ja-task-ID
-----------------------------------------------------------------------------------------------------------------
      3 0.55500 simple_job sgeadmin r     08/13/2008 14:25:49 all.q@t3wn01     1 16

ls -lA simple_job_array*
-rw-r--r--  1 sgeadmin sgeadmin   0 Aug 13 14:25 simple_job_array.e3.1
-rw-r--r--  1 sgeadmin sgeadmin   0 Aug 13 14:25 simple_job_array.e3.10
-rw-r--r--  1 sgeadmin sgeadmin   0 Aug 13 14:25 simple_job_array.e3.11
-rw-r--r--  1 sgeadmin sgeadmin   0 Aug 13 14:25 simple_job_array.e3.12
-rw-r--r--  1 sgeadmin sgeadmin   0 Aug 13 14:25 simple_job_array.e3.13
-rw-r--r--  1 sgeadmin sgeadmin   0 Aug 13 14:25 simple_job_array.e3.14
-rw-r--r--  1 sgeadmin sgeadmin   0 Aug 13 14:25 simple_job_array.e3.15
-rw-r--r--  1 sgeadmin sgeadmin   0 Aug 13 14:25 simple_job_array.e3.16
-rw-r--r--  1 sgeadmin sgeadmin   0 Aug 13 14:25 simple_job_array.e3.2
-rw-r--r--  1 sgeadmin sgeadmin   0 Aug 13 14:25 simple_job_array.e3.3
-rw-r--r--  1 sgeadmin sgeadmin   0 Aug 13 14:25 simple_job_array.e3.4
-rw-r--r--  1 sgeadmin sgeadmin   0 Aug 13 14:25 simple_job_array.e3.5
-rw-r--r--  1 sgeadmin sgeadmin   0 Aug 13 14:25 simple_job_array.e3.6
-rw-r--r--  1 sgeadmin sgeadmin   0 Aug 13 14:25 simple_job_array.e3.7
-rw-r--r--  1 sgeadmin sgeadmin   0 Aug 13 14:25 simple_job_array.e3.8
-rw-r--r--  1 sgeadmin sgeadmin   0 Aug 13 14:25 simple_job_array.e3.9
-rw-r--r--  1 sgeadmin sgeadmin 731 Aug 13 14:25 simple_job_array.o3.1
-rw-r--r--  1 sgeadmin sgeadmin 736 Aug 13 14:25 simple_job_array.o3.10
-rw-r--r--  1 sgeadmin sgeadmin 736 Aug 13 14:25 simple_job_array.o3.11
-rw-r--r--  1 sgeadmin sgeadmin 736 Aug 13 14:25 simple_job_array.o3.12
-rw-r--r--  1 sgeadmin sgeadmin 736 Aug 13 14:25 simple_job_array.o3.13
-rw-r--r--  1 sgeadmin sgeadmin 736 Aug 13 14:25 simple_job_array.o3.14
-rw-r--r--  1 sgeadmin sgeadmin 736 Aug 13 14:25 simple_job_array.o3.15
-rw-r--r--  1 sgeadmin sgeadmin 736 Aug 13 14:25 simple_job_array.o3.16
-rw-r--r--  1 sgeadmin sgeadmin 731 Aug 13 14:25 simple_job_array.o3.2
-rw-r--r--  1 sgeadmin sgeadmin 731 Aug 13 14:25 simple_job_array.o3.3
-rw-r--r--  1 sgeadmin sgeadmin 731 Aug 13 14:25 simple_job_array.o3.4
-rw-r--r--  1 sgeadmin sgeadmin 731 Aug 13 14:25 simple_job_array.o3.5
-rw-r--r--  1 sgeadmin sgeadmin 731 Aug 13 14:25 simple_job_array.o3.6
-rw-r--r--  1 sgeadmin sgeadmin 731 Aug 13 14:25 simple_job_array.o3.7
-rw-r--r--  1 sgeadmin sgeadmin 731 Aug 13 14:25 simple_job_array.o3.8
-rw-r--r--  1 sgeadmin sgeadmin 731 Aug 13 14:25 simple_job_array.o3.9
-rw-r--r--  1 sgeadmin sgeadmin 100 Aug 13 14:25 simple_job_array.out.3-1
-rw-r--r--  1 sgeadmin sgeadmin 101 Aug 13 14:25 simple_job_array.out.3-10
-rw-r--r--  1 sgeadmin sgeadmin 101 Aug 13 14:25 simple_job_array.out.3-11
-rw-r--r--  1 sgeadmin sgeadmin 101 Aug 13 14:25 simple_job_array.out.3-12
-rw-r--r--  1 sgeadmin sgeadmin 101 Aug 13 14:25 simple_job_array.out.3-13
-rw-r--r--  1 sgeadmin sgeadmin 101 Aug 13 14:25 simple_job_array.out.3-14
-rw-r--r--  1 sgeadmin sgeadmin 101 Aug 13 14:25 simple_job_array.out.3-15
-rw-r--r--  1 sgeadmin sgeadmin 101 Aug 13 14:25 simple_job_array.out.3-16
-rw-r--r--  1 sgeadmin sgeadmin 100 Aug 13 14:25 simple_job_array.out.3-2
-rw-r--r--  1 sgeadmin sgeadmin 100 Aug 13 14:25 simple_job_array.out.3-3
-rw-r--r--  1 sgeadmin sgeadmin 100 Aug 13 14:25 simple_job_array.out.3-4
-rw-r--r--  1 sgeadmin sgeadmin 100 Aug 13 14:25 simple_job_array.out.3-5
-rw-r--r--  1 sgeadmin sgeadmin 100 Aug 13 14:25 simple_job_array.out.3-6
-rw-r--r--  1 sgeadmin sgeadmin 100 Aug 13 14:25 simple_job_array.out.3-7
-rw-r--r--  1 sgeadmin sgeadmin 100 Aug 13 14:25 simple_job_array.out.3-8
-rw-r--r--  1 sgeadmin sgeadmin 100 Aug 13 14:25 simple_job_array.out.3-9

grep t3wn simple_job_array.out.*
simple_job_array.out.3-1:Running job JOB_NAME=simple_job_array task SGE_TASK_ID=1 on t3wn02 at Wed Aug 13 14:25:49 CEST 2008
simple_job_array.out.3-10:Running job JOB_NAME=simple_job_array task SGE_TASK_ID=10 on t3wn01 at Wed Aug 13 14:25:49 CEST 2008
simple_job_array.out.3-11:Running job JOB_NAME=simple_job_array task SGE_TASK_ID=11 on t3wn03 at Wed Aug 13 14:25:49 CEST 2008
simple_job_array.out.3-12:Running job JOB_NAME=simple_job_array task SGE_TASK_ID=12 on t3wn02 at Wed Aug 13 14:25:49 CEST 2008
simple_job_array.out.3-13:Running job JOB_NAME=simple_job_array task SGE_TASK_ID=13 on t3wn02 at Wed Aug 13 14:25:49 CEST 2008
simple_job_array.out.3-14:Running job JOB_NAME=simple_job_array task SGE_TASK_ID=14 on t3wn03 at Wed Aug 13 14:25:49 CEST 2008
simple_job_array.out.3-15:Running job JOB_NAME=simple_job_array task SGE_TASK_ID=15 on t3wn01 at Wed Aug 13 14:25:50 CEST 2008
simple_job_array.out.3-16:Running job JOB_NAME=simple_job_array task SGE_TASK_ID=16 on t3wn01 at Wed Aug 13 14:25:50 CEST 2008
simple_job_array.out.3-2:Running job JOB_NAME=simple_job_array task SGE_TASK_ID=2 on t3wn03 at Wed Aug 13 14:25:49 CEST 2008
simple_job_array.out.3-3:Running job JOB_NAME=simple_job_array task SGE_TASK_ID=3 on t3wn01 at Wed Aug 13 14:25:49 CEST 2008
simple_job_array.out.3-4:Running job JOB_NAME=simple_job_array task SGE_TASK_ID=4 on t3wn01 at Wed Aug 13 14:25:49 CEST 2008
simple_job_array.out.3-5:Running job JOB_NAME=simple_job_array task SGE_TASK_ID=5 on t3wn03 at Wed Aug 13 14:25:49 CEST 2008
simple_job_array.out.3-6:Running job JOB_NAME=simple_job_array task SGE_TASK_ID=6 on t3wn02 at Wed Aug 13 14:25:49 CEST 2008
simple_job_array.out.3-7:Running job JOB_NAME=simple_job_array task SGE_TASK_ID=7 on t3wn02 at Wed Aug 13 14:25:49 CEST 2008
simple_job_array.out.3-8:Running job JOB_NAME=simple_job_array task SGE_TASK_ID=8 on t3wn03 at Wed Aug 13 14:25:49 CEST 2008
simple_job_array.out.3-9:Running job JOB_NAME=simple_job_array task SGE_TASK_ID=9 on t3wn01 at Wed Aug 13 14:25:49 CEST 2008
</pre>
%ENDTWISTY%

<span class=TFBB> simple_job_array.sge </span>
%TWISTY{showlink=" Show Details " hidelink=" Hide Details " showimgleft="%ICONURLPATH{viewtopic}%" hideimgleft="%ICONURLPATH{toggleclose-small}%"}%
<pre>
#!/bin/bash
# SGE single-CPU job array example

### Job name
#$ -N simple_job_array

### Run time soft and hard limits hh:mm:ss
#$ -l s_rt=00:01:00,h_rt=00:01:30

### Change to the current working directory
#$ -cwd

### Export some environment variables:
#$ -v MY_PREFIX=simple_job_array.out

MY_HOST=`hostname`
MY_DATE=`date`

cat <<EOF
================================================================
JOB_NAME=$JOB_NAME
JOB_ID=$JOB_ID
SGE_TASK_ID=$SGE_TASK_ID
SGE_TASK_FIRST=$SGE_TASK_FIRST
SGE_TASK_LAST=$SGE_TASK_LAST
NSLOTS=$NSLOTS
QUEUE=$QUEUE
SGE_CWD_PATH=$SGE_CWD_PATH
PATH=$PATH
SGE_STDIN_PATH=$SGE_STDIN_PATH
SGE_STDOUT_PATH=$SGE_STDOUT_PATH
SGE_STDERR_PATH=$SGE_STDERR_PATH
SGE_O_HOST=$SGE_O_HOST
SGE_O_PATH=$SGE_O_PATH
PE_HOSTFILE=$PE_HOSTFILE
================================================================
EOF

echo "Running job JOB_NAME=$JOB_NAME task SGE_TASK_ID=$SGE_TASK_ID on $MY_HOST at $MY_DATE" | tee ${MY_PREFIX}.${JOB_ID}-${SGE_TASK_ID}

# Put your single-CPU script here.
# Each job in a job array has a unique task id $SGE_TASK_ID;
# the interval of task ids is defined by the "-t min-max" argument of qsub, e.g.
#    qsub -q all.q -t 1-16 ./simple_job_array.sge

################################################################################
</pre>
%ENDTWISTY%

---
---+ Bug Fixes

---+++ The directory =/etc/init.d= must be a link

Note: _Due to a glitch in the configuration management area (the management uses rsync from a config area), =/etc/init.d= had been replaced by a real directory._

The directory =/etc/init.d= must be a link: =/etc/init.d -> rc.d/init.d= (strange things may start to happen if this is not the case). Fix it on t3ce01:
<pre>
[root@t3ce01 rc3.d]# date
Mon Aug 11 22:22:10 CEST 2008

ls -lA /etc/init.d
-rwxr-xr-x  1 root     root      1243 Aug 11 10:06 gmond
-rwxr-xr-x  1 root     root      4210 Aug 11 09:58 ramdisk
-rwxr-xr-x  1 sgeadmin sgeadmin 15679 Aug 11 17:47 sgemaster

cp -a /etc/init.d/sgemaster /etc/rc.d/init.d/
ls -lAtr /etc/rc.d/init.d/
...
-rwxr-xr-x  1 root     root      4210 Aug 11 09:58 ramdisk
-rwxr-xr-x  1 root     root      1243 Aug 11 10:06 gmond
-rwxr-xr-x  1 sgeadmin sgeadmin 15679 Aug 11 17:47 sgemaster

rm -rf /etc/init.d
ln -s /etc/rc.d/init.d /etc/init.d

# Now chkconfig works as it should (it did not before):
chkconfig --add sgemaster
chkconfig --list sgemaster
sgemaster       0:off   1:off   2:off   3:on    4:off   5:on    6:off
</pre>

---
---+ Troubleshooting

---++ Installation Troubleshooting

---+++ Missing output
The *inst_sge* script tries to hide its output (omitting my comments on its design), so nothing may be printed on the console if things go wrong, even if you uncomment the ="# set -x"= line. If this happens, check the file(s) =/tmp/install.NNNNN= for possible reasons, like =Command failed: mkdir -p /var/spool/sge/qmaster=.

---+++ This is not a qmaster host!
On a start of the SGE master on t3ce01 I got this error message:
<pre>
/etc/init.d/sgemaster start
sge_qmaster didn't start!
This is not a qmaster host!
Please, check your act_qmaster file!
</pre>

Check what =$SGE_ROOT/utilbin/lx24-amd64/gethostname= returns as hostname. The entry in =$SGE_ROOT/$SGE_CELL/common/act_qmaster= must exactly match this name. In my (Derek's) case the hostname returned by the tool was =t3ce01.psi.ch=, while the file only contained t3ce01.
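A comparison along these lines makes the mismatch visible; the fix below is a sketch (back up =act_qmaster= first, and substitute the name that =gethostname= actually reports):
<pre>
$SGE_ROOT/utilbin/lx24-amd64/gethostname        # the name as SGE resolves it
cat $SGE_ROOT/$SGE_CELL/common/act_qmaster      # must match that name exactly
# if they differ:
cp -p $SGE_ROOT/$SGE_CELL/common/act_qmaster $SGE_ROOT/$SGE_CELL/common/act_qmaster.bak
echo "t3ce01.psi.ch" > $SGE_ROOT/$SGE_CELL/common/act_qmaster
</pre>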
Afterwards I got the following message during startup:
<pre>
/etc/init.d/sgemaster start
starting sge_qmaster
starting sge_schedd
local configuration t3ce01.psi.ch not defined - using global configuration
starting up GE 6.1u4 (lx24-amd64)
</pre>

Taking a closer look at the startup with =strace= reveals that SGE is looking for an entry for t3ce01.psi.ch in the =$SGE_ROOT/$SGE_CELL/common/local_conf= directory. Since there had not been one for t3ce01 before, I ignored this.

---+++ "This hostname is not known at qmaster as an administrative host"
This message is written to the log file when you try to execute a command that can be run only on an admin host, e.g.

%TWISTY{showlink=" Show Details " hidelink=" Hide Details " showimgleft="%ICONURLPATH{viewtopic}%" hideimgleft="%ICONURLPATH{toggleclose-small}%"}%
<pre>
<b> ./inst_sge -x -noremote -auto $SGE_ROOT/util/install_modules/tier3.conf</b>
Your $SGE_ROOT directory: /swshare/sge/n1ge6
Using cell: >tier3<
Installation failed!
This hostname is not known at qmaster as an administrative host.
</pre>
%ENDTWISTY%

Solution: log in to any admin host and add the new host to the administrative hosts using the <b>qconf -ah</b> command.
%TWISTY{showlink=" Show Details " hidelink=" Hide Details " showimgleft="%ICONURLPATH{viewtopic}%" hideimgleft="%ICONURLPATH{toggleclose-small}%"}%
<pre>
export SGE_ROOT=/swshare/sge/n1ge6
export SGE_CELL=tier3
export PATH=$SGE_ROOT/bin/lx24-amd64:$PATH

<b>qconf -ah t3wn01,t3wn02,t3wn03,t3wn04,t3wn05,t3wn06,t3wn07,t3wn08</b>
t3wn01 added to administrative host list
t3wn02 added to administrative host list
t3wn03 added to administrative host list
t3wn04 added to administrative host list
t3wn05 added to administrative host list
t3wn06 added to administrative host list
t3wn07 added to administrative host list
t3wn08 added to administrative host list

qconf -sh
t3admin01
t3ce01
t3ui01
t3wn01
t3wn02
t3wn03
t3wn04
t3wn05
t3wn06
t3wn07
t3wn08
</pre>
You can add hosts even if they are not available yet.
%ENDTWISTY%

---+++ "Local execd spool directory [undef] is not a valid path"
The reason remains to be investigated, but the workaround is simple: create the corresponding directory manually, e.g.:
<pre>
mkdir /var/spool/sge/t3wn01
chown sgeadmin.root /var/spool/sge/t3wn01
</pre>

---+++ Installation Log Files
<span class=TFBB> ls -lA /swshare/sge/n1ge6/tier3/common/install_logs </span>
%TWISTY{showlink=" Show Details " hidelink=" Hide Details " showimgleft="%ICONURLPATH{viewtopic}%" hideimgleft="%ICONURLPATH{toggleclose-small}%"}%
<pre>
-rw-r--r--  1 sgeadmin sgeadmin   159 Aug 11 21:04 execd_install_t3wn01_2008-08-11_21:04:43.log
-rw-r--r--  1 sgeadmin sgeadmin   567 Aug 11 21:20 execd_install_t3wn01_2008-08-11_21:20:31.log
-rw-r--r--  1 sgeadmin sgeadmin 15249 Aug 11 21:24 execd_install_t3wn01_2008-08-11_21:24:51.log
-rw-r--r--  1 sgeadmin sgeadmin 15249 Aug 11 21:27 execd_install_t3wn01_2008-08-11_21:27:38.log
-rw-r--r--  1 sgeadmin sgeadmin 15117 Aug 11 21:30 execd_install_t3wn01_2008-08-11_21:30:50.log
-rw-r--r--  1 sgeadmin sgeadmin   707 Aug 11 21:31 execd_install_t3wn01_2008-08-11_21:31:00.log
-rw-r--r--  1 sgeadmin sgeadmin   443 Aug 11 21:57 execd_install_t3wn02_2008-08-11_21:57:48.log
-rw-r--r--  1 sgeadmin sgeadmin  3068 Aug 11 17:47 qmaster_install_t3ce01_2008-08-11_17:47:44.log
</pre>
%ENDTWISTY%

---+++ Uninstalling Execution Hosts
*Note* - Uninstall all compute hosts before you uninstall the master host. If you uninstall the master host first, you have to uninstall all execution hosts manually.
During the execution host uninstallation, all configuration information for the targeted hosts is deleted. The uninstallation tries to stop the exec hosts gracefully: first the execution daemon is shut down, then the configuration and the global or local spool directory are removed.

$SGE_ROOT/util/install_modules/tier3.conf has a section for identifying hosts that can be uninstalled automatically:
<pre>
# Remove this execution hosts in automatic mode
EXEC_HOST_LIST_RM="host1 host2 host3 host4"
</pre>

Every host in the EXEC_HOST_LIST_RM list will be automatically removed from the cluster. To start the automatic uninstallation of execution hosts, type the following command:
<pre>
./inst_sge -ux -auto $SGE_ROOT/util/install_modules/tier3.conf
</pre>

For more information, please consult the page: http://docs.sun.com/app/docs/doc/820-0697/gesal?a=view

---++ Runtime Troubleshooting

---+++ Logfiles
The default log file for the master can be found in the master's spool area at =/var/spool/sge/qmaster/messages=.

---
---++ Settings

   * This section contains a hidden (html) block where the local TWiki variables are set.
<!--
   * Set USERSTYLEURL = https://twiki.cscs.ch/twiki/pub/CmsTier3/SunGridEngine/my_twiki_styles.css
   * Set FCR=<font color="red">
   * Set FCG=<font color="green">
   * Set FCB=<font color="blue">
   * Set FCC=<font color="cyan">
   * Set FCV=<font color="violet">
   * Set FCK=<font color="black">
   * Set FE=</font>
-->
<!--
<hr>
<span class=TFBB> ... </span>
%TWISTY{showlink=" Show File " hidelink=" Hide File " showimgleft="%ICONURLPATH{viewtopic}%" hideimgleft="%ICONURLPATH{toggleclose-small}%"}%
<pre>
</pre>
%ENDTWISTY%

<span class=TFBB> ... </span>
%TWISTY{showlink=" Show Details " hidelink=" Hide Details " showimgleft="%ICONURLPATH{viewtopic}%" hideimgleft="%ICONURLPATH{toggleclose-small}%"}%
<pre>
</pre>
%ENDTWISTY%
-->

   * [[%ATTACHURL%/my_twiki_styles.css][my_twiki_styles.css]]: My CSS