Node Type: OLDNFSServer

Serves home directories and the experiments' software directories to the t3wn* and t3ui* servers.

Firewall requirements

local port | open to | reason


Regular Maintenance work

Have a look at our t3fs05 Nagios page.

There are root cron jobs that have to run:
root@t3fs05 $ crontab -l
#ident  "@(#)root       1.21    04/03/23 SMI"
#
# The root crontab should be used to perform accounting data collection.
#
#
10 3 * * * /usr/sbin/logadm
15 3 * * 0 /usr/lib/fs/nfs/nfsfind
30 3 * * * [ -x /usr/lib/gss/gsscred_clean ] && /usr/lib/gss/gsscred_clean
#
# The rtc command is run to adjust the real time clock if and when
# daylight savings time changes.
#
1 2 * * * [ -x /usr/sbin/rtc ] && /usr/sbin/rtc -c > /dev/null 2>&1
#10 3 * * * /usr/lib/krb5/kprop_script ___slave_kdcs___
# Added by cswcrontab for CSWlogwatch
02 4 * * * /opt/csw/bin/logwatch
# cron to replicate the /swshare fs toward t3fs06; note that there is an equivalent cron on t3fs06, and there must be a delay between these two crons because they heavily use the network and each one issues a 'kill mbuffer' command that can crash the other one
00 06 * * * /root/psit3-tools/regular-snapshot-new -f swshare -v -s t3fs06 -r swshare2/swsharebup 2>&1 | /usr/bin/tee /var/cron/lastsnap.txt 2>&1 ; [[ $? -ne 0 ]] && /usr/bin/mail cms-tier3@lists.psi.ch  < /var/cron/lastsnap.txt
43 3 * * * [ -x /opt/csw/bin/gupdatedb ] && /opt/csw/bin/gupdatedb --prunepaths="/shome2 /swshare /dev /devices /proc /tmp /var/tmp" 1>/dev/null 2>&1 # Added by CSWfindutils
#
# for ganglia monitoring of swshare space
59 * * * * /root/gmetric/gmetric_partition_space-cron.sh
#
# 5th June 2013 - F.Martinelli - to get the EMI WN tarball CRLs updated
39 12 * * * /opt/fetch-crl/fetch-crl -c /opt/fetch-crl/fetch-crl.cnf -v  2>&1 | /usr/bin/tee /var/cron/fetch-crl.log 2>&1 

And there are cron jobs under the cmssgm user to create backups of external resources (make sure the user has an NP entry in /etc/shadow and not an *LK* entry, otherwise the cron job will not run); a way to check this is sketched after the crontab listing below.

root@t3fs05 $ crontab -l cmssgm
# fetch backups of remote repositories
# for more details see https://wiki.chipp.ch/twiki/bin/view/CmsTier3/RemoteRepositoryBackupAdmin
37 2 * * * PATH=/opt/csw/sbin:/opt/csw/bin:/usr/sbin:/usr/bin /swshare/rrbackup/bin/run_rrb.py > /dev/null 2>&1
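
A quick way to verify that shadow entry (a sketch, assuming Solaris 10 passwd semantics):

# the password field for cmssgm should read 'NP', not '*LK*'
grep '^cmssgm:' /etc/shadow
# on Solaris 10, 'passwd -N' sets the entry to NP (no password, but the account
# stays valid for cron), whereas 'passwd -l' locks it to *LK* and breaks the cron
passwd -N cmssgm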

Installation

SGE dir used during a new WN installation

This dir is not strictly required by t3fs05, but it is used during a new t3wn* installation (so only very seldom); a hypothetical usage sketch follows the listing:
[martinelli_f@t3wn10 execution-host-configuration]$ pwd
/swshare/sge/execution-host-configuration
[martinelli_f@t3wn10 execution-host-configuration]$ find .
.
./default
./default/readme_2011.05.04_16:05:09.html
./default/spool
./default/spool/t3wnexample
./default/spool/t3wnexample/messages
./default/spool/t3wnexample/execd.pid
./default/spool/t3wnexample/active_jobs
./default/spool/t3wnexample/jobs
./default/spool/t3wnexample/job_scripts
./default/common
./default/common/sgeexecd
./default/common/cluster_name
./default/common/sgemaster
./default/common/schedule
./default/common/settings.sh
./default/common/act_qmaster
./default/common/sge_aliases
./default/common/sge_qstat
./default/common/bootstrap
./default/common/accounting
./default/common/install_logs
./default/common/install_logs/qmaster_install_t3ce.psi.ch_2011-05-04_16:04:42.log
./default/common/sgedbwriter
./default/common/settings.csh
./default/common/sge_request
./default/common/qtask
./default/common/schedd_runlog
./etc
./etc/init.d
./etc/init.d/sgeexecd.p6444
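
Purely for orientation, a hypothetical illustration of how this template could end up on a new worker node (this is not the documented install procedure; $SGE_ROOT and the exact steps are assumptions):

# hypothetical illustration only -- $SGE_ROOT and the renaming step are assumptions
rsync -a /swshare/sge/execution-host-configuration/default/ $SGE_ROOT/default/
# rename the placeholder spool directory to the new WN's short hostname
mv $SGE_ROOT/default/spool/t3wnexample $SGE_ROOT/default/spool/$(hostname -s)
# install the execd init script shipped with the template
cp /swshare/sge/execution-host-configuration/etc/init.d/sgeexecd.p6444 /etc/init.d/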

Installation of the grid functionality for the WNs in the shared software area

How to deploy a new EMI WN tarball

Since June 2013 we have migrated to the new EMI WN tarballs, so ignore the old glite-WN tarballs still stored here (we could even simply delete them):
root@t3fs05 $ cd /swshare/
root@t3fs05 $ ls -l 
total 1607728
drwxr-xr-x   3 cmssgm   cms            3 Jun 17  2011 CERNLIB
drwxr-xr-x   3 cmssgm   cms            3 Jun 17  2011 CERNLIB-slc5
drwxr-xr-x   3 cmssgm   cms            3 Jun 17  2011 CERNLIB-slc5-32
drwxr-xr-x  21 cmssgm   cms           44 May  9 21:18 CRAB
drwxr-xr-x   6 cmssgm   cms           41 Mar 20  2009 EvtGen
drwxr-xr-x   3 root     root           3 May 24  2012 HEPSPEC2006
drwxr-xr-x   3 cmssgm   cms            3 Aug 21  2012 LHAPDF
drwxr-xr-x  17 cmssgm   cms           25 May  9 23:17 ROOT
drwxr-xr-x   2 root     root           6 Feb 18  2010 RPM
drwxr-xr-x  15 cmssgm   cms           25 Apr  3 10:45 cms
drwxr-xr-x  11 cmssgm   cms           20 Sep  6  2011 cms_old
drwxr-x---   6 postgres nagios         9 Apr  1 19:25 dcache-postgres-backups
drwxr-xr-x   2 3896     root           3 Feb 11  2010 derektest
lrwxrwxrwx   1 root     root          17 Apr 16 17:05 emi -> emi-wn-2.6.0-1_v1   <--------------------
drwxr-xr-x   3 cmssgm   root          13 Feb  4 14:48 emi-wn-2.5.1-1_v1
drwxr-xr-x   3 cmssgm   root          13 Apr 16 17:04 emi-wn-2.6.0-1_v1
lrwxrwxrwx   1 root     root          17 Mar 15  2012 glite -> glite-WN-3.2.12-1
drwxr-xr-x   5 cmssgm   cms            9 Mar 13  2012 glite-UI
drwxr-xr-x  10 cmssgm   cms           14 Apr 17  2009 glite-WN-3.1.14-0
drwxr-xr-x  10 cmssgm   cms           13 May  8  2009 glite-WN-3.1.27-0
drwxr-xr-x  11 cmssgm   cms           16 Dec 11  2009 glite-WN-3.1.34-0
drwxr-xr-x  10 cmssgm   cms           12 Dec 11  2009 glite-WN-3.1.40-0
drwxr-xr-x   9 cmssgm   cms            9 Mar 15  2012 glite-WN-3.2.12-1
-rw-r--r--   1 root     root     374763520 Nov 14  2011 glite-WN-3.2.12-1.sl5-external.tar
-rw-r--r--   1 root     root     448317440 Nov 14  2011 glite-WN-3.2.12-1.sl5.tar
drwxr-x---   5 2980     cms            8 May 14 11:11 pnfs.files.to.be.deleted
drwxr-xr-x   6 root     root           6 Apr  9  2010 psit3
drwxr-xr-x   6 root     root           7 May 10  2012 sge
root@t3fs05 $
If you need to update the EMI WN tarball installation, you can simply fetch it with wget and untar it while logged in as root on t3fs05; the important point is that at the end of the untar you assign all the files to cmssgm:cms (a sketch follows the listing):
root@t3fs05 $ pwd
/swshare/emi
root@t3fs05 $ ls -l 
total 931433
lrwxrwxrwx   1 cmssgm   cms           17 Apr 16 16:52 emi-wn -> emi-wn-2.6.0-1_v1
drwxr-xr-x  10 cmssgm   cms           12 Feb  6 22:06 emi-wn-2.6.0-1_v1
-rw-r--r--   1 cmssgm   root     223088640 Apr  8 14:25 emi-wn-2.6.0-1_v1.sl5
-rw-r--r--   1 cmssgm   root        1450 Apr  8 14:25 emi-wn-2.6.0-1_v1.sl5.os-extras-rpmlist.txt
-rw-r--r--   1 cmssgm   root     253644800 Apr  8 14:25 emi-wn-2.6.0-1_v1.sl5.os-extras.tar
-rw-r--r--   1 cmssgm   root        5275 Apr  8 14:25 emi-wn-2.6.0-1_v1.sl5.rpmlist.txt
lrwxrwxrwx   1 cmssgm   cms           10 Apr 16 17:03 manually_copied_by_fabio -> setupwn.sh
lrwxrwxrwx   1 root     root          32 Jun 14 13:35 manually_rsync_copied_from_a_UI_by_fabio -> emi-wn/etc/grid-security/vomsdir
-rw-r--r--   1 cmssgm   root          59 Feb  7 17:46 md5sum-emi-wn-2.6.0-1_v1.sl5.gz.txt
-rw-r--r--   1 cmssgm   root          70 Feb  7 17:47 md5sum-emi-wn-2.6.0-1_v1.sl5.os-extras.tgz.txt
-rw-r--r--   1 cmssgm   cms         7458 Apr 16 17:04 setupwn.sh
Note that the emi-wn symlink selects the tarball version actually in use, and that setupwn.sh and the vomsdir content were copied in manually (see the symlink names in the listing above).
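
A minimal sketch of such an update (the version number X.Y.Z and the download URL are placeholders, and GNU tar from CSW is assumed to be available as gtar):

# sketch only -- version and URL are placeholders, adapt to the real release
cd /swshare/emi
wget 'http://<emi-tarball-repository>/emi-wn-X.Y.Z.sl5.tar.gz'
mkdir emi-wn-X.Y.Z_v1
gtar -C emi-wn-X.Y.Z_v1 -xzf emi-wn-X.Y.Z.sl5.tar.gz
chown -R cmssgm:cms emi-wn-X.Y.Z_v1            # the ownership step the text insists on
# repoint the version symlink used by setupwn.sh and the WNs
rm emi-wn && ln -s emi-wn-X.Y.Z_v1 emi-wn && chown -h cmssgm:cms emi-wn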

How to deploy the new lcg-CAs into /swshare/emi/emi-wn/etc/etc/grid-security/certificates

The new CA X.509 certificates are uploaded into /swshare/emi/emi-wn/etc/etc/grid-security/certificates from t3admin01 in this way, BUT the CA CRLs are locally updated by the root cron 39 12 * * * /opt/fetch-crl/fetch-crl -c /opt/fetch-crl/fetch-crl.cnf -v 2>&1 | /usr/bin/tee /var/cron/fetch-crl.log 2>&1, and their freshness is checked daily by Nagios.
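If the CRLs ever look stale in Nagios, the same update can be triggered by hand; this is simply the cron command above run interactively:

/opt/fetch-crl/fetch-crl -c /opt/fetch-crl/fetch-crl.cnf -v 2>&1 | /usr/bin/tee /var/cron/fetch-crl.log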

Why the WNs are aware of t3fs05:/swshare/emi/emi-wn

The t3wn* servers are aware of the EMI WN tarball because of the SGE starter_method that's configured for each queue:
[martinelli_f@t3wn10 ~]$ qconf -sq all.q | grep starter_method
starter_method        /shome/sgeadmin/t3scripts/starter_method.emi-wn.sh
[martinelli_f@t3wn10 ~]$ qconf -sq long.q | grep starter_method
starter_method        /shome/sgeadmin/t3scripts/starter_method.emi-wn.sh
[martinelli_f@t3wn10 ~]$ qconf -sq short.q | grep starter_method
starter_method        /shome/sgeadmin/t3scripts/starter_method.emi-wn.sh
[martinelli_f@t3wn10 ~]$ qconf -sq debug.q | grep starter_method
starter_method        /shome/sgeadmin/t3scripts/starter_method.emi-wn.sh
in this way:
[martinelli_f@t3wn10 ~]$ cat /shome/sgeadmin/t3scripts/starter_method.emi-wn.sh
#!/bin/bash
######### STARTER METHOD FOR SETTING USER'S ENVIRONMENT #####################

# settings for Grid credentials
if test x"$DBG" != x; then
   echo "STARTER METHOD SCRIPT: Setting grid environment"
fi
#source /swshare/glite/external/etc/profile.d/grid-env.sh
source  /swshare/emi/setupwn.sh   <-----------------------------

if test $? -ne 0; then
   echo "WARNING: Failed to source grid environment" >&2
fi

# the CMS environment should be sourced by the user scripts
#source $VO_CMS_SW_DIR/cmsset_default.sh
#if test $? -ne 0; then
#   echo "WARNING: Failed to source the CMS environment ($VO_CMS_SW_DIR/cmsset_default.sh)" >&2
#fi

# bigger temporary files should be handled on the scratch directory, because
# the /tmp partition is of limited size
# 
export TMPDIR=/scratch/tmpdir-${JOB_ID}.1.${QUEUE}
export TMP=$TMPDIR

# for debugging we save a copy of the job script
cat "$1" > /scratch/jobscript-${JOB_ID}.sh

# now we execute the real script
exec "$@"
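
If the starter method ever needs to point to a different script (e.g. for a new tarball layout), it can be changed per queue with qconf; a minimal sketch, assuming standard GridEngine syntax and an SGE manager account:

# repeat for long.q, short.q and debug.q
qconf -mattr queue starter_method /shome/sgeadmin/t3scripts/starter_method.emi-wn.sh all.q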

Emergency Measures

If t3fs05 goes down, all the t3ui* and t3wn* servers that mount t3fs05:/swshare will be immediately affected; you then have to umount /swshare from those servers and mount it from t3fs06:/swshare.

On t3fs06, also stop the cron job that ZFS-sends /shome toward t3fs05.
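
A sketch of the client-side failover, assuming t3fs06 exports an equivalent /swshare (mount options are illustrative):

# on each affected t3ui*/t3wn* client, as root
umount -f /swshare || umount -l /swshare     # force, then lazy, unmount of the dead t3fs05 export
mount -t nfs t3fs06:/swshare /swshare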

Services

Basically just NFS

Backups

GitHub daily backups

Please check RemoteRepositoryBackupAdmin for details.
22nd July 2013
root@t3fs05 $ crontab -l cmssgm      
# fetch backups of remote repositories
# for more details see https://wiki.chipp.ch/twiki/bin/view/CmsTier3/RemoteRepositoryBackupAdmin
37 2 * * * /swshare/rrbackup/bin/run_rrb.py

root@t3fs05 $ find /swshare/rrbackup/store/  
/swshare/rrbackup/store/
/swshare/rrbackup/store/peruzzi_ASAnalysis
/swshare/rrbackup/store/peruzzi_ASAnalysis/2013-07-22.tar.gz
/swshare/rrbackup/store/peruzzi_SCFootprintRemoval
/swshare/rrbackup/store/peruzzi_SCFootprintRemoval/2013-07-22.tar.gz
/swshare/rrbackup/store/dmeister_datareplica
/swshare/rrbackup/store/dmeister_datareplica/2013-07-22.tar.gz
/swshare/rrbackup/store/peruzzi_UserCode
/swshare/rrbackup/store/peruzzi_UserCode/2013-07-22.tar.gz
/swshare/rrbackup/store/peruzzi_PatchesOldReleases
/swshare/rrbackup/store/peruzzi_PatchesOldReleases/2013-07-22.tar.gz
/swshare/rrbackup/store/peruzzi_ASCore
/swshare/rrbackup/store/peruzzi_ASCore/2013-07-22.tar.gz

nightly ZFS send/receive toward t3fs06

The /swshare snapshots are taken nightly and ZFS-sent to t3fs06, where they are kept for 10 days. Nagios checks on t3fs05 whether a new snapshot was taken; a sketch of the replication step follows the listing:
root@t3fs05 $ zfs list
NAME                                          USED  AVAIL  REFER  MOUNTPOINT
rpool                                        11.2G   446G    34K  /rpool
rpool/ROOT                                   6.22G   446G    21K  legacy
rpool/ROOT/s10x_u8wos_08a                    6.22G   446G  5.63G  /
rpool/ROOT/s10x_u8wos_08a@2011-Feb-18_11-51   140M      -  5.03G  -
rpool/ROOT/s10x_u8wos_08a@before_perl_CPAN   84.8M      -  5.19G  -
rpool/ROOT/s10x_u8wos_08a@31-May-2012        88.7M      -  5.34G  -
rpool/ROOT/s10x_u8wos_08a@09-Apr-2013        43.8M      -  5.48G  -
rpool/ROOT/s10x_u8wos_08a@05-Jun-2013        24.4M      -  5.54G  -
rpool/dump                                   1.00G   446G  1.00G  -
rpool/export                                   44K   446G    23K  /export
rpool/export/home                              21K   446G    21K  /export/home
rpool/swap                                      4G   450G  1.84M  -
shome2                                       4.97T  4.33T  53.5K  /shome2
shome2/shomebup                              4.97T  4.33T  4.92T  /shome2/shomebup
shome2/shomebup@auto2013-06-07_00:20:00      20.5G      -  4.86T  -
shome2/shomebup@auto2013-06-08_00:20:00       133M      -  4.85T  -
shome2/shomebup@auto2013-06-09_00:20:01       131M      -  4.86T  -
shome2/shomebup@auto2013-06-10_00:20:00       671M      -  4.86T  -
shome2/shomebup@auto2013-06-11_00:20:00       994M      -  4.87T  -
shome2/shomebup@auto2013-06-12_00:20:00      2.73G      -  4.89T  -
shome2/shomebup@auto2013-06-13_00:20:00      2.86G      -  4.91T  -
shome2/shomebup@auto2013-06-14_00:20:00          0      -  4.92T  -
swshare                                       367G  1.42T   358G  /swshare
swshare@auto2013-06-12_06:00:00              3.66G      -   357G  -   <--------------
swshare@auto2013-06-13_06:00:00              11.9M      -   357G  -   <--------------
swshare@auto2013-06-14_06:00:00              14.1M      -   359G  -   <--------------
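
For orientation, a minimal sketch of what one nightly replication step boils down to (the real logic, including mbuffer and error handling, lives in /root/psit3-tools/regular-snapshot-new; the snapshot selection here is illustrative only):

#!/bin/bash
# illustrative only -- production uses /root/psit3-tools/regular-snapshot-new
PREV=$(zfs list -H -t snapshot -o name -s creation | grep '^swshare@auto' | tail -1)
NEW="swshare@auto$(date +%Y-%m-%d_%H:%M:%S)"
zfs snapshot "$NEW"
# incremental send toward t3fs06, received under swshare2/swsharebup
zfs send -i "$PREV" "$NEW" | ssh t3fs06 zfs receive swshare2/swsharebup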

Postgres backups from t3dcachedb

There is a twice-daily cron on t3dcachedb that takes a full backup of our dCache DBs and also stores it into t3fs05:/swshare/dcache-postgres-backups/t3dcachedb; a hypothetical sketch of the sending side follows the listing:
root@t3fs05 $ pwd
/swshare/dcache-postgres-backups
root@t3fs05 $ find .
.
./t3dcachedb03     <---------- ACTUAL PRODUCTION dCache DB
./t3dcachedb03/t3dcachedb03-dbbackup-20130612-1210.bup.gz
./t3dcachedb03/t3dcachedb03-dbbackup-20130610-1210.bup.gz
./t3dcachedb03/t3dcachedb03-dbbackup-20130609-1210.bup.gz
./t3dcachedb03/t3dcachedb03-dbbackup-20130611-0010.bup.gz
./t3dcachedb03/dcache-db-backup.sh.cron.log
./t3dcachedb03/dcache-db-backup.sh
./t3dcachedb03/t3dcachedb03-dbbackup-20130614-1210.bup.gz
./t3dcachedb03/latest.bup
./t3dcachedb03/t3dcachedb03-dbbackup-20130613-0010.bup.gz
./t3dcachedb03/latest
./t3dcachedb03/t3dcachedb03-dbbackup-20130610-0010.bup.gz
./t3dcachedb03/t3dcachedb03-dbbackup-20130612-0010.bup.gz
./t3dcachedb03/t3dcachedb03-dbbackup-20130613-1210.bup.gz
./t3dcachedb03/t3dcachedb03-dbbackup-20130614-0010.bup.gz
./t3dcachedb03/t3dcachedb03-dbbackup-20130611-1210.bup.gz
./t3dcachedb03/t3dcachedb03-dbbackup-20130609-0010.bup.gz
./.ssh
./.ssh/id_rsa    <---------------- also present on t3dcachedb to enter by ssh/rsync from there          
./.ssh/id_rsa.pub
./.ssh/authorized_keys  <----- contains id_rsa.pub 
./t3dcachedb04
./t3dcachedb04/dcache-db-backup.sh
./t3dcachedb04/t3dcachedb04-dbbackup-20130114-0958.bup.gz
./t3dcachedb04/latest
./.Xauthority
./.bash_history
./t3dcachedb
./t3dcachedb02
./t3dcachedb02/latest
...

root@t3fs05 $ cat /swshare/dcache-postgres-backups/.ssh/authorized_keys 
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAIEAyMhmLdWCE/ZoEJ6ooj+fPNUl3NrqadlutwGE7Vbc+GCTGKClAB6Cop710s9DFgSUSCr6t0utHtdbfZ51XCtcg5fM1+ual3bZXXQOaQFQ1aOP2dwbPM8ZHk6IGRGvrKbeT4Jxq3MxhvP61oYZhK4iwdcAGMlS627Z+B/pp2XpTIM= SSH keys to be used by postgres@t3dcachedb to rsync the DB backups
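
For orientation, a hypothetical sketch of what the t3dcachedb-side cron does with that key (the real logic lives in the dcache-db-backup.sh stored alongside the backups; pg_dumpall, the filename pattern and the remote user are assumptions):

# run as postgres on t3dcachedb03 -- illustrative only
STAMP=$(date +%Y%m%d-%H%M)
pg_dumpall | gzip > /tmp/t3dcachedb03-dbbackup-${STAMP}.bup.gz
rsync -a -e ssh /tmp/t3dcachedb03-dbbackup-${STAMP}.bup.gz t3fs05:/swshare/dcache-postgres-backups/t3dcachedb03/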

OLD Installation of the grid functionality for the WNs in the shared software area

Warning, important: The installation is currently carried out as the restricted user cmssgm. Since the Solaris nodes do not currently import LDAP, the cmssgm user and the cms group are defined locally!

We use the tar-based WN installation as described here on the CERN Twiki.

The software is installed below /swshare/glite and the distribution tarballs are also located there. For the original installation we used

  • glite-WN-3.1.7-0.tar.gz, glite-WN-3.1.7-0-external.tar.gz

  1. Pull down the software from http://grid-deployment.web.cern.ch/grid-deployment/download/relocatable/glite-WN/SL5_x86_64/
  2. Create a directory for the new release
    mkdir /swshare/glite-WN-3.2.12-1
    chown cmssgm:cms /swshare/glite-WN-3.2.12-1
    
  3. Create the generic link
    rm  /swshare/glite
    ln -s glite-WN-3.2.12-1 /swshare/glite
    chown -h cmssgm:cms /swshare/glite
    
  4. Unpack the two distribution files into the new directory
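
For example, for the 3.2.12-1 release the unpacking in step 4 amounts to (tarball names as shown in the /swshare listing above; run as cmssgm):

cd /swshare/glite-WN-3.2.12-1
tar xf /swshare/glite-WN-3.2.12-1.sl5.tar
tar xf /swshare/glite-WN-3.2.12-1.sl5-external.tar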

The configuration is carried out based on a special site-info-tarWN.def file. The file contains some necessary differences to our normal site-wide configuration file (mainly regarding some base paths), therefore we cannot use our standard one. This file can be found under /shome/cmssgm/YAIM-config/ (templates for site-info files can be found in the unpacked tarballs: /swshare/glite/glite/yaim/examples/siteinfo/site-info.def)

When upgrading, one should check whether the new site-info.def templates contain new options, and our old file should be merged with the new template!
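
A quick way to do that check (file names as used further down in this section):

cd /shome/cmssgm/YAIM-config
diff -u site-info-tarWN.def site-info-3.2.12-1.def | less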

First check the config file (-v flag)

# As user cmssgm from a Linux node that has mounted the /swshare area!!!!!!!!!!

# get the new site-info template, compare it with our previous one and make a new one
# notice that it will need to have INSTALL_ROOT and other variables set as described in the WN TAR docu.
cp /swshare/glite-WN-3.2.12-1/glite/yaim/examples/siteinfo/site-info.def site-info-3.2.12-1.def

cd /shome/cmssgm/YAIM-config
/swshare/glite/glite/yaim/bin/yaim -v -c -s site-info-tarWN.def -n WN_TAR
Note: To get at specific errors, you may need to invoke the script with a leading bash -x.
Note: I had to create empty wn-list.conf and groups.conf files for the WN_LIST and GROUPS_CONF options.

Run the YAIM configurator

/swshare/glite/glite/yaim/bin/yaim -v -c -s site-info-tarWN.def -n WN_TAR
   INFO: Using site configuration file: site-info-tarWN.def
   INFO:
         ###################################################################

         .             /'.-. ')
         .     yA,-"-,( ,m,:/ )   .oo.     oo    o      ooo  o.     .oo
         .    /      .-Y a  a Y-.     8. .8'    8'8.     8    8b   d'8
         .   /           ~ ~ /         8'    .8oo88.     8    8  8'  8
         . (_/         '===='          8    .8'     8.   8    8  Y   8
         .   Y,-''-,Yy,-.,/           o8o  o8o    o88o  o8o  o8o    o8o
         .    I_))_) I_))_)


         current working directory: /shome/cmssgm/YAIM-config
         site-info.def date: Mar 15 15:17 site-info-tarWN.def
         yaim command: -v -c -s site-info-tarWN.def -n WN_TAR
         log file: /swshare/glite/glite/yaim/bin/../log/yaimlog
         Thu Mar 15 15:17:39 CET 2012 : /swshare/glite/glite/yaim/bin/yaim

         Installed YAIM versions:
         glite-yaim-clients 4.0.13-1
         glite-yaim-core 4.0.14-1

         ####################################################################
   INFO: The default location of the grid-env.(c)sh files will be: /swshare/glite-WN-3.2.12-1/external/etc/profile.d
   INFO: Sourcing the utilities in /swshare/glite-WN-3.2.12-1/glite/yaim/functions/utils
   INFO: Detecting environment
   INFO: Executing function: config_globus_clients_check
   INFO: Executing function: config_lcgenv_check
   INFO: Executing function: config_rgma_client_check
   INFO: Executing function: config_amga_client_check
   INFO: Executing function: config_vomses_check
   INFO: Executing function: config_vomsdir_check
   INFO: Executing function: config_wn_check
   INFO: Executing function: config_wn_tar_check
   INFO: Executing function: config_wn_info_check
   INFO: Executing function: config_glite_saga2_check
   INFO: Executing function: config_globus_clients_setenv
   INFO: Executing function: config_globus_clients
   INFO: Configure the globus service
setup-tmpdirs: creating ./config.status
config.status: creating globus-script-initializer
config.status: creating Paths.pm
creating globus-sh-tools-vars.sh
creating globus-script-initializer
creating Globus::Core::Paths
checking globus-hostname
Done
   INFO: Executing function: config_lcgenv
   INFO: Executing function: config_rgma_client_setenv
   INFO: Executing function: config_rgma_client
   INFO: YAIM has detected the OS is SL5. The rgma client is no longer configured in SL5.
   INFO: Executing function: config_fts_client
   INFO: Executing function: config_amga_client_setenv
   INFO: Executing function: config_amga_client
   INFO: Executing function: config_vomses
   INFO: Executing function: config_vomsdir_setenv
   INFO: Executing function: config_vomsdir
   INFO: Create the /swshare/glite-WN-3.2.12-1/external/etc/grid-security/vomsdir/ directory
   INFO: Executing function: config_wn_setenv
   INFO: Executing function: config_wn
   INFO: Executing function: config_wn_tar_setenv
   INFO: Executing function: config_wn_tar
   INFO: Executing function: config_wn_info
   WARNING: No subcluster has been defined for the tarball in the WN_LIST file /shome/cmssgm/YAIM-config/wn-list.conf
   WARNING: YAIM will use the default subcluster id: CE_HOST -> my-ce.psi.ch
   INFO: Executing function: config_glite_saga2_setenv
   INFO: Executing function: config_glite_saga2
   INFO: Configuration Complete.                                               [  OK  ]
   INFO: YAIM terminated succesfully.

OLD Installing the CA certificates

Installing the CA certificates infrastructure according to the documentation:

Note: There used to be a problem with the installer! The cron job for keeping the certs and CRLs up to date is not correctly produced. We circumvent this problem by copying over the certificates from the SE, as we do for the fileservers. See

OLD Adaptations to the user environment

Addition of the following lines to /swshare/glite/external/etc/profile.d/grid-env.sh

gridenv_setind      "X509_USER_PROXY" "${HOME}/.x509up_u$(id -u)"
gridenv_setind      "X509_CERT_DIR" "/swshare/glite/external/etc/grid-security/certificates"
gridenv_setind      "X509_VOMS_DIR" "/swshare/glite/external/etc/grid-security/vomsdir"

Note: We set the proxy certificate's location to within the shared home directory, so that a generated proxy is available on all machines without additional copying.

NodeTypeForm
Hostnames: t3fs05
Services: central SW NFS service
Hardware: SUN X4500 (2*Opt 290, 16GB RAM, 48*500GB SATA)
Install Profile: undefined-todo
Guarantee/maintenance until: void
