How to access, set up, and test your account

Compulsory Initial Setups

All the documentation is maintained in the T3 twiki pages: https://wiki.chipp.ch/twiki/bin/view/CmsTier3/WebHome .

Information about the two T3 mailing-lists:

  • Subscribe to the cms-tier3-users@lists.psi.ch mailing list using its web interface (list archives). This mailing list is used to communicate information on Tier-3 matters like downtimes, news, upgrades, etc., and for discussions among users and admins.
  • To contact the CMS Tier-3 administrators, write to cms-tier3@lists.psi.ch instead; no subscription is needed for this mailing list.
  • Both lists are read by the administrators and are archived. Mails addressed to cms-tier3-users@lists.psi.ch are read by all users, so they tend to get answered better and sooner, especially if you ask about specific CMS software (CRAB3, CMSSW, Xrootd, ...).

T3 policies

Read and respect the Tier3Policies

Linux groups

Each T3 user belongs both to a primary group and to the common secondary group cms, which is meant to classify common files like the ones downloaded by the PhEDEx file transfer service. The T3 primary groups are:
ETHZ         UniZ         PSI
ethz-ecal    uniz-higgs   psi-bphys
ethz-bphys   uniz-pixel   psi-pixel
ethz-ewk     uniz-bphys
ethz-higgs
ethz-susy

For instance, these are the primary and secondary groups of a generic T3 account:

$ id auser
uid=571(auser) gid=532(ethz-higgs) groups=532(ethz-higgs),500(cms)

The T3 group areas are located under /pnfs/psi.ch/cms/trivcat/store/t3groups
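
New files you create are group-owned by your primary group by default (unless the directory has the setgid bit set). If you want files to belong to the common cms group instead, you can switch the effective group of a shell session with newgrp; a minimal sketch, assuming your account is a member of both groups as above:

# start a subshell whose default group is the common 'cms' group
newgrp cms
# files created inside this subshell are group-owned by 'cms'
touch ~/example_file && ls -l ~/example_file
# leave the subshell to return to your primary group
exit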

First Steps on T3 User Interfaces (UI)

Three identical User Interface servers (UIs) are available for program development and for submitting jobs to the T3 batch system:

Access to the login nodes is organized by institution.

Access is not technically restricted, to allow for some flexibility, but you are requested to use the UI dedicated to your institution.

UI login node   Institution   HW specs
t3ui01.psi.ch   ETHZ, PSI     132 GB RAM, 72 CPU cores (HT), 5 TB /scratch
t3ui02.psi.ch   All           132 GB RAM, 72 CPU cores (HT), 5 TB /scratch
t3ui03.psi.ch   UNIZ          132 GB RAM, 72 CPU cores (HT), 5 TB /scratch

  1. Log into your t3ui0* server by ssh; use the -Y or -X flag to work with X applications: ssh -Y username@t3ui02.psi.ch
  2. If you are a user external to PSI (ETHZ, UniZ, ...), change your initial password as soon as possible on your UI with the passwd command.
  3. Copy your grid credentials to ~/.globus/userkey.pem and ~/.globus/usercert.pem and make sure that their permissions are properly restricted:
    chmod 400 userkey.pem
    chmod 400 usercert.pem
    
    For details about how to extract those .pem files from your CERN user grid certificate (usually a password-protected .p12 file), please follow https://twiki.cern.ch/twiki/bin/view/CMSPublic/PersonalCertificate.
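    As a reference, the extraction with openssl typically looks like this (a minimal sketch, assuming your exported certificate file is named myCertificate.p12; the CERN page above remains the authoritative recipe):
    # extract the certificate (public part)
    openssl pkcs12 -in myCertificate.p12 -clcerts -nokeys -out ~/.globus/usercert.pem
    # extract the private key; you will be asked for the .p12 password and for a new key passphrase
    openssl pkcs12 -in myCertificate.p12 -nocerts -out ~/.globus/userkey.pem
    # restrict the permissions as required above
    chmod 400 ~/.globus/userkey.pem ~/.globus/usercert.pem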
  4. Source the grid environment associated with your login shell:
    source /swshare/psit3/etc/profile.d/cms_ui_env.sh   # for bash
    source /swshare/psit3/etc/profile.d/cms_ui_env.csh  # for tcsh
    
    To load the grid environment automatically, for instance for the bash shell, you can add the following line to your ~/.bash_profile file:
    [ `echo $HOSTNAME | grep t3ui` ] && [ -r /swshare/psit3/etc/profile.d/cms_ui_env.sh ] && source /swshare/psit3/etc/profile.d/cms_ui_env.sh && echo "UI features enabled" 
  5. Run env | sort and verify that /swshare/psit3/etc/profile.d/cms_ui_env.{sh,csh} has properly activated the setting
    X509_USER_PROXY=/t3home/$(id -un)/.x509up_u$(id -u)
    This setting is crucial for accessing a CMS Grid SE from your T3 jobs.
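    For instance, for the generic account auser shown above, the check could look like this (a sketch; your username and uid will differ):
    $ env | sort | grep X509
    X509_USER_PROXY=/t3home/auser/.x509up_u571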
  6. You must be registered with the CMS "Virtual Organization" (VO); see the CERN documentation for details about that.
  7. Create a proxy certificate for CMS by:
    voms-proxy-init -voms cms
    
    If the voms-proxy-init -voms cms command fails, run it again with the -debug flag to troubleshoot the problem.
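    After creating the proxy you can inspect it with voms-proxy-info, e.g. to check the VO attributes and the remaining lifetime:
    # show all proxy information, including the VOMS attributes
    voms-proxy-info --all
    # show only the remaining proxy lifetime, in seconds
    voms-proxy-info --timeleft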
  8. Test your basic access to the PSI storage element using our test-dCacheProtocols command:
    $ test-dCacheProtocols
    Test directory: /tmp/dcachetest-20190215-1649-89361
    TEST: GFTP-write ......  [OK]
    TEST: GFTP-ls ......  [OK]
    TEST: GFTP-read ......  [OK]
    TEST: DCAP-read ......  [OK]
    TEST: SRMv2-write ......  [OK]
    TEST: SRMv2-ls ......  [OK]
    TEST: SRMv2-read ......  [OK]
    TEST: SRMv2-rm ......  [OK]
    TEST: XROOTD-LAN-write ......  [OK]
    TEST: XROOTD-LAN-ls ......  [OK]
    TEST: XROOTD-LAN-read ......  [OK]
    TEST: XROOTD-LAN-rm ......  [OK]
    TEST: XROOTD-WAN-write ......  [OK]
    TEST: XROOTD-WAN-read ......  [OK]
    TEST: XROOTD-WAN-rm ......  [OK]
    
    • NOTE 1: sometimes the XROOTD-WAN-* tests might get stuck due to excessive I/O traffic over the WAN; simply try again.
    • NOTE 2: You can use the -v (verbose) flag to see the commands that the script executes.
  9. Test write access to your user area on the storage element. The user area is located underneath /pnfs/psi.ch/cms/trivcat/store/user and has your login name as the directory name, so:
    $ test-dCacheProtocols -l /pnfs/psi.ch/cms/trivcat/store/user/$(id -nu)
    Test directory: /tmp/dcachetest-20190215-1654-89843
    TEST: GFTP-write ......  [OK]
    TEST: GFTP-ls ......  [OK]
    TEST: GFTP-read ......  [OK]
    TEST: DCAP-read ......  [OK]
    TEST: SRMv2-write ......  [OK]
    TEST: SRMv2-ls ......  [OK]
    TEST: SRMv2-read ......  [OK]
    TEST: SRMv2-rm ......  [OK]
    TEST: XROOTD-LAN-write ......  [OK]
    TEST: XROOTD-LAN-ls ......  [OK]
    TEST: XROOTD-LAN-read ......  [OK]
    TEST: XROOTD-LAN-rm ......  [OK]
    TEST: XROOTD-WAN-write ......  [OK]
    TEST: XROOTD-WAN-read ......  [OK]
    TEST: XROOTD-WAN-rm ......  [OK]
    
  10. The test-dCacheProtocols tool can also be pointed at a remote storage element (use the -h flag to get more information). Since we are not executing the test locally at CSCS, we need to ignore the tests that do not work remotely ( -i "DCAP-read XROOTD-LAN-write XROOTD-WAN-write" ); e.g. to check the CSCS storage element storage01.lcg.cscs.ch:
    $ test-dCacheProtocols -s storage01.lcg.cscs.ch -x storage01.lcg.cscs.ch -l /pnfs/lcg.cscs.ch/cms/trivcat/store/user/martinel -i "DCAP-read XROOTD-LAN-write XROOTD-WAN-write"
    Test directory: /tmp/dcachetest-20150529-1545-16302
    TEST: GFTP-write ......  [OK]
    TEST: GFTP-ls ......  [OK]
    TEST: GFTP-read ......  [OK]
    TEST: DCAP-read ......  [IGNORE]
    TEST: SRMv2-write ......  [OK]
    TEST: SRMv2-ls ......  [OK]
    TEST: SRMv2-read ......  [OK]
    TEST: SRMv2-rm ......  [OK]
    TEST: XROOTD-LAN-write ......  [IGNORE]
    TEST: XROOTD-LAN-ls ......  [SKIPPED] (dependencies did not run:  XROOTD-LAN-write)
    TEST: XROOTD-LAN-read ......  [SKIPPED] (dependencies did not run:  XROOTD-LAN-write)
    TEST: XROOTD-LAN-rm ......  [SKIPPED] (dependencies did not run:  XROOTD-LAN-write)
    TEST: XROOTD-WAN-write ......  [IGNORE]
    TEST: XROOTD-WAN-ls ......  [SKIPPED] (dependencies did not run:  XROOTD-WAN-write)
    TEST: XROOTD-WAN-read ......  [SKIPPED] (dependencies did not run:  XROOTD-WAN-write)
    TEST: XROOTD-WAN-rm ......  [SKIPPED] (dependencies did not run:  XROOTD-WAN-write)
    

Backup policies

Your /t3home and /work files are backed up daily. Details on how to recover a file are given in HowToRetrieveBackupFiles. There are NO backups of /tmp, /scratch and /pnfs, so pay attention there!

Optional Initial Setups

Installing the CERN CA files into your Web Browser

Install the CERN CA files in your web browser; otherwise your browser may constantly warn you about the CERN https:// URLs. Browsers typically ship with many well-known CA files by default, but not with the CERN ones.

Applying for the VOMS Group /cms/chcms membership

A dedicated 'Swiss' VOMS group called /cms/chcms is available; it grants additional rights on the CMS hardware resources installed at T2_CH_CSCS, Lugano, namely:
  • higher priority on the T2_CH_CSCS batch queues
  • additional job slots on the T2_CH_CSCS batch queues
  • additional /pnfs space inside the T2_CH_CSCS grid storage
  • during 2017, a group area like the T3 group areas /pnfs/psi.ch/cms/trivcat/store/t3groups/

When a user belonging to the /cms/chcms group runs voms-proxy-init --voms cms, voms-proxy-info --all will report the additional /cms/chcms/Role=NULL/Capability=NULL attribute, like:

$ voms-proxy-info --all | grep /cms
attribute : /cms/Role=NULL/Capability=NULL
attribute : /cms/chcms/Role=NULL/Capability=NULL

To apply for the /cms/chcms membership, load your X509 certificate into your everyday web browser (it is probably already there), then open https://voms2.cern.ch:8443/voms/cms/group/edit.action?groupId=5 and request the /cms/chcms membership. Be aware that port 8443 might be blocked by your institute firewall; if that is the case, contact your firewall team or simply try from another network (like your home network).

Saving the UIs SSH pub host keys

Hackers are constantly waiting for a user mistake, even a single misspelled letter, as in this case that occurred in 2015:
$ ssh t3ui02.psi.sh
The authenticity of host 't3ui02.psi.sh (62.210.217.195)' can't be established.
RSA key fingerprint is c0:c5:af:36:4b:2d:1f:88:0d:f3:9c:08:cc:87:df:42.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 't3ui02.psi.sh,62.210.217.195' (RSA) to the list of known hosts.
at3user@t3ui02.psi.sh's password:
The T3 admins can't prevent a T3 user from confusing .ch with .sh, so pay attention to these cases! To avoid mistyping the T3 hostnames you can define the following aliases in your shell:

$ grep alias ~/.bash_profile | grep t3ui
alias ui01="ssh -X $USER@t3ui01.psi.ch"
alias ui02="ssh -X $USER@t3ui02.psi.ch"
alias ui03="ssh -X $USER@t3ui03.psi.ch"


Another attack is the SSH man-in-the-middle attack; to prevent it, proactively save each t3ui0* SSH RSA public key in /$HOME/.ssh/known_hosts by running these commands on each of your everyday laptops/PCs/servers (also on lxplus!):

cp -p /$HOME/.ssh/known_hosts /$HOME/.ssh/known_hosts.`date +"%d-%m-%Y"`
mkdir /tmp/t3ssh/
for X in 01 02 03 ; do TMPFILE=`mktemp /tmp/t3ssh/XXXXXX` && ssh-keyscan -t rsa  t3ui$X.psi.ch,t3ui$X,`host t3ui$X.psi.ch| awk '{ print $4}'` | cat - /$HOME/.ssh/known_hosts | grep -v 'psi\.sh'  > $TMPFILE && mv $TMPFILE /$HOME/.ssh/known_hosts ; done
rm -rf /tmp/t3ssh
for X in 01 02 03 ; do echo -n "# entries for t3ui$X = " ; grep -c t3ui$X /$HOME/.ssh/known_hosts  ; grep -Hn --color t3ui$X /$HOME/.ssh/known_hosts ; echo ;  done
echo done
The last for loop reports whether there are duplicated rows in /$HOME/.ssh/known_hosts for a t3ui0* server; if there are, preserve the correct occurrence and delete the others, either with sed -i or with an editor like vim / emacs / nano / nedit. Once you have just one row per t3ui0* server, run this command and carefully compare your output with this output:

$ ssh-keygen -l -f /$HOME/.ssh/known_hosts | grep t3ui 
2048 SHA256:0Z8Su5R4aZthbePGMM14mEKxYFOuKyrnUe9GjU0m6vM t3ui01.psi.ch,192.33.123.23 (RSA)
2048 SHA256:2qA9YDNeOEbGYjIdpRdBJpywQDne5gRbRvN/myL5P8o t3ui02.psi.ch,192.33.123.29 (RSA)
2048 SHA256:SoIL0H0ueyASNkyYID3a16AIHuAEP7AQ5iaQ6vrvzfk t3ui03.psi.ch,192.33.123.85 (RSA)

Modify your client /$HOME/.ssh/config to force the ssh command to always check whether the server you are connecting to is already listed in /$HOME/.ssh/known_hosts, and to ask for your explicit confirmation for any server that is missing:

StrictHostKeyChecking ask
Your /$HOME/.ssh/config can be more complex than just that line; study the ssh_config man page or contact the T3 admins. Ideally you would set StrictHostKeyChecking yes, but in real life that is impractical.
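For example, a minimal sketch of such a /$HOME/.ssh/config (the short host aliases and the placeholder yourusername are purely illustrative):

Host t3ui01 t3ui02 t3ui03
    # expand the short name to the full PSI hostname, avoiding .ch/.sh typos
    HostName %h.psi.ch
    User yourusername
    # equivalent to the -X flag used in the shell aliases above
    ForwardX11 yes
    StrictHostKeyChecking ask

With this in place, ssh t3ui02 is enough to reach t3ui02.psi.ch.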

Now your ssh client will be able to detect an SSH man-in-the-middle attack; if one is detected, it will report:

  WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! 
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY! 
Someone could be eavesdropping on you right now (man-in-the-middle attack)! 
It is also possible that the RSA host key has just been changed.
The t3ui0* SSH RSA public/private keys will never change, so the case "It is also possible that the RSA host key has just been changed" will actually never occur.

Creating an AFS CERN Ticket

To access the CERN /afs protected directories (e.g. your CERN home on AFS) you will need to obtain a ticket from CERN:
kinit ${Your_CERN_Username}@CERN.CH
aklog cern.ch
The first command provides you with a Kerberos ticket, while the second uses that ticket to obtain an authentication token from CERN's AFS service.
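
To verify that both steps succeeded you can list your Kerberos tickets and AFS tokens (standard Kerberos/AFS client commands, assuming the AFS client tools are installed on the UI):

# list your current Kerberos tickets; a krbtgt/CERN.CH entry should appear
klist
# list your AFS tokens; a token for the cern.ch cell should appear
tokens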