
How to access, set up, and test your account

Compulsory Initial Setups

All the documentation is maintained in the T3 TWiki pages: https://wiki.chipp.ch/twiki/bin/view/CmsTier3/WebHome .

Information about the two T3 mailing-lists:

  • Subscribe to the cms-tier3-users@lists.psi.ch mailing list using its web interface (list archives). This mailing list is used to communicate information on Tier-3 matters like downtimes, news, upgrades, etc., and for discussions among users and admins.
  • To contact the CMS Tier-3 administrators, write to cms-tier3@lists.psi.ch instead; no subscription is needed for this mailing list.
  • Both lists are read by the administrators and are archived. Mails addressed to cms-tier3-users@lists.psi.ch are read by all subscribers, so they may get answered better and sooner, especially if you ask about specific CMS software (CRAB3, CMSSW, Xrootd, ...).

T3 policies

Read and respect the Tier3Policies

Linux groups

Each T3 user belongs to both a primary group and the common secondary group cms; the latter is meant to own common files like the ones downloaded by the PhEDEx file transfer service. The T3 primary groups are:
ETHZ         UniZ         PSI
ethz-ecal    uniz-higgs   psi-bphys
ethz-bphys   uniz-pixel   psi-pixel
ethz-ewk     uniz-bphys
ethz-higgs
ethz-susy

For instance, these are the primary and secondary groups of a generic T3 account:

$ id auser
uid=571(auser) gid=532(ethz-higgs) groups=532(ethz-higgs),500(cms)
The T3 group areas are located under /pnfs/psi.ch/cms/trivcat/store/t3groups .
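
To check which groups your own account belongs to, you can use the standard id command; a minimal check (plain coreutils, nothing site-specific assumed), where the group names will of course differ per user:

$ id -gn    # primary group, e.g. ethz-higgs
$ id -Gn    # all groups; the list should include cms
$ ls /pnfs/psi.ch/cms/trivcat/store/t3groups    # list the available T3 group areas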

First Steps on T3 User Interfaces (UI)

Three identical User Interface servers (UIs) are available for program development and T3 batch-system job submission:

OS    UI Hostname   Users group   Notes
SL6   t3ui01        PSI           132 GB RAM, 72 cores, 4 TB /scratch
SL6   t3ui02        ETHZ          132 GB RAM, 72 cores, 4 TB /scratch
SL6   t3ui03        UNIZ          132 GB RAM, 72 cores, 4 TB /scratch

  1. Log in to your t3ui0* server by ssh; use the -Y or -X flag for working with X applications: ssh -Y username@t3ui02.psi.ch
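    To avoid typing the flag every time, you can set the equivalent options in your ~/.ssh/config; a convenience sketch using standard OpenSSH client options (the login name below is a placeholder):
    Host t3ui*.psi.ch
      ForwardX11 yes           # the -X flag
      ForwardX11Trusted yes    # together with the line above, equivalent of -Y
      User username            # replace with your T3 login name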
  2. If you are a user external to PSI (ETHZ, UniZ, ...), change the initial password ASAP from your UI with the passwd command.
  3. Copy your grid credentials to ~/.globus/userkey.pem and ~/.globus/usercert.pem and make sure that their permissions are properly set:
    chmod 400 userkey.pem
    chmod 400 usercert.pem
    
    For details about how to extract those .pem files from your CERN User Grid Certificate (usually a password-protected .p12 file), please follow https://twiki.cern.ch/twiki/bin/view/CMSPublic/PersonalCertificate.
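    As a quick reference, the extraction can be done with openssl; a minimal sketch, assuming your exported certificate file is named myCertificate.p12 (the actual name may differ):
    openssl pkcs12 -in myCertificate.p12 -clcerts -nokeys -out ~/.globus/usercert.pem   # extract the public certificate
    openssl pkcs12 -in myCertificate.p12 -nocerts -out ~/.globus/userkey.pem            # extract the private key; prompts for the .p12 password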
  4. Source the grid environment associated with your login shell:
    source /swshare/psit3/etc/profile.d/cms_ui_env.sh   # for bash
    source /swshare/psit3/etc/profile.d/cms_ui_env.csh  # for tcsh
    
    To load the grid environment automatically, for instance for the BASH shell, you might add the following to your ~/.bash_profile file:
    [ `echo $HOSTNAME | grep t3ui` ] && [ -r /swshare/psit3/etc/profile.d/cms_ui_env.sh ] && source /swshare/psit3/etc/profile.d/cms_ui_env.sh && echo "UI features enabled" 
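    After sourcing, you can confirm that the grid tools are now on your PATH; a simple shell check, nothing else assumed:
    type voms-proxy-init    # should print the command's full path once the environment is loaded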
  5. Run env | sort and verify that /swshare/psit3/etc/profile.d/cms_ui_env.{sh,csh} has properly activated the setting
    X509_USER_PROXY=/t3home/$(id -un)/.x509up_u$(id -u)
    ; that setting is crucial for accessing a CMS Grid SE from your T3 jobs.
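    A quicker check of just that variable (plain shell; the output will show your own login name and uid):
    $ echo $X509_USER_PROXY
    /t3home/<your login>/.x509up_u<your uid>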
  6. You must be registered in the CMS "Virtual Organization" (VO); see the CERN documentation for details about that.
  7. Create a proxy certificate for CMS by:
    voms-proxy-init -voms cms
    
    If the command voms-proxy-init -voms cms fails, run it again with the -debug flag to troubleshoot the problem.
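    Once the proxy is created, you can inspect it with the standard voms-proxy tools, e.g.:
    voms-proxy-info --timeleft   # remaining proxy lifetime, in seconds
    voms-proxy-info --all        # full details, including the /cms VOMS attributes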
  8. Test your basic access to the PSI Storage Element using our test-dCacheProtocols command:
    $ test-dCacheProtocols
    Test directory: /tmp/dcachetest-20190215-1649-89361
    TEST: GFTP-write ......  [OK]
    TEST: GFTP-ls ......  [OK]
    TEST: GFTP-read ......  [OK]
    TEST: DCAP-read ......  [OK]
    TEST: SRMv2-write ......  [OK]
    TEST: SRMv2-ls ......  [OK]
    TEST: SRMv2-read ......  [OK]
    TEST: SRMv2-rm ......  [OK]
    TEST: XROOTD-LAN-write ......  [OK]
    TEST: XROOTD-LAN-ls ......  [OK]
    TEST: XROOTD-LAN-read ......  [OK]
    TEST: XROOTD-LAN-rm ......  [OK]
    TEST: XROOTD-WAN-write ......  [OK]
    TEST: XROOTD-WAN-read ......  [OK]
    TEST: XROOTD-WAN-rm ......  [OK]
    
    • NOTE 1: sometimes the XROOTD-WAN-* tests might get stuck due to excessive I/O traffic over the WAN. Try again.
    • NOTE 2: You can use the -v (verbose) flag to see the commands that the script executes.
  9. Test write access to your user area on the storage element. The user area is located underneath /pnfs/psi.ch/cms/trivcat/store/user and has your login name as directory name (a manual copy sketch is also given after this list), so:
    $ test-dCacheProtocols -l /pnfs/psi.ch/cms/trivcat/store/user/$(id -nu)
    Test directory: /tmp/dcachetest-20190215-1654-89843
    TEST: GFTP-write ......  [OK]
    TEST: GFTP-ls ......  [OK]
    TEST: GFTP-read ......  [OK]
    TEST: DCAP-read ......  [OK]
    TEST: SRMv2-write ......  [OK]
    TEST: SRMv2-ls ......  [OK]
    TEST: SRMv2-read ......  [OK]
    TEST: SRMv2-rm ......  [OK]
    TEST: XROOTD-LAN-write ......  [OK]
    TEST: XROOTD-LAN-ls ......  [OK]
    TEST: XROOTD-LAN-read ......  [OK]
    TEST: XROOTD-LAN-rm ......  [OK]
    TEST: XROOTD-WAN-write ......  [OK]
    TEST: XROOTD-WAN-read ......  [OK]
    TEST: XROOTD-WAN-rm ......  [OK]
    
  10. The test-dCacheProtocols tool can also be run against a remote storage element (use the -h flag to get more info about it). Since we are not executing the test locally at CSCS, we need to ignore all the tests that only work on the local LAN ( -i "DCAP-read XROOTD-LAN-write XROOTD-WAN-write" ); e.g. to check the CSCS storage element storage01.lcg.cscs.ch:
    $ test-dCacheProtocols -s storage01.lcg.cscs.ch -x storage01.lcg.cscs.ch -l /pnfs/lcg.cscs.ch/cms/trivcat/store/user/martinel -i "DCAP-read XROOTD-LAN-write XROOTD-WAN-write"
    Test directory: /tmp/dcachetest-20150529-1545-16302
    TEST: GFTP-write ......  [OK]
    TEST: GFTP-ls ......  [OK]
    TEST: GFTP-read ......  [OK]
    TEST: DCAP-read ......  [IGNORE]
    TEST: SRMv2-write ......  [OK]
    TEST: SRMv2-ls ......  [OK]
    TEST: SRMv2-read ......  [OK]
    TEST: SRMv2-rm ......  [OK]
    TEST: XROOTD-LAN-write ......  [IGNORE]
    TEST: XROOTD-LAN-ls ......  [SKIPPED] (dependencies did not run:  XROOTD-LAN-write)
    TEST: XROOTD-LAN-read ......  [SKIPPED] (dependencies did not run:  XROOTD-LAN-write)
    TEST: XROOTD-LAN-rm ......  [SKIPPED] (dependencies did not run:  XROOTD-LAN-write)
    TEST: XROOTD-WAN-write ......  [IGNORE]
    TEST: XROOTD-WAN-ls ......  [SKIPPED] (dependencies did not run:  XROOTD-WAN-write)
    TEST: XROOTD-WAN-read ......  [SKIPPED] (dependencies did not run:  XROOTD-WAN-write)
    TEST: XROOTD-WAN-rm ......  [SKIPPED] (dependencies did not run:  XROOTD-WAN-write)
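
Besides the automated tests above, you can also copy a file into your SE user area by hand using the standard xrootd client tools. This is only a minimal sketch: the xrootd door hostname t3dcachedb03.psi.ch is an assumption here, so check the T3 storage pages for the currently supported endpoint:

$ echo "hello T3" > /tmp/mytest.txt
$ xrdcp /tmp/mytest.txt root://t3dcachedb03.psi.ch//pnfs/psi.ch/cms/trivcat/store/user/$(id -un)/mytest.txt
$ xrdfs t3dcachedb03.psi.ch ls /pnfs/psi.ch/cms/trivcat/store/user/$(id -un)   # list it back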
    

Backup policies

Your /t3home and /work files are backed up daily.
Details on how to recover a file are given in HowToRetrieveBackupFiles.
There are NO backups of /tmp, /scratch and /pnfs, so pay attention there!

Optional Initial Setups

Installing the CERN CA files into your Web Browser

Install the CERN CA files in your Web Browser; otherwise your Web Browser might constantly bother you about all the CERN https:// URLs. Web Browsers typically feature many well-known CA files by default, but not the CERN CA files.

Applying for the VOMS Group /cms/chcms membership

A dedicated 'Swiss' VOMS group called /cms/chcms is available in order to grant more rights over the CMS HW resources installed at T2_CH_CSCS, Lugano; namely:
  • higher priority on the T2_CH_CSCS batch queues
  • additional Jobs slots on the T2_CH_CSCS batch queues
  • additional /pnfs space inside the T2_CH_CSCS grid storage
  • during 2017, a group area like the T3 groups areas /pnfs/psi.ch/cms/trivcat/store/t3groups/

When a user who belongs to the /cms/chcms group runs voms-proxy-init --voms cms, voms-proxy-info --all will report the new /cms/chcms/Role=NULL/Capability=NULL attribute, like:

$ voms-proxy-info --all | grep /cms
attribute : /cms/Role=NULL/Capability=NULL
attribute : /cms/chcms/Role=NULL/Capability=NULL

To apply for the /cms/chcms membership, load your X509 certificate into your daily Web Browser (probably it is already there), then open https://voms2.cern.ch:8443/voms/cms/group/edit.action?groupId=5 and request the /cms/chcms membership. Be aware that the port :8443 might be blocked by your Institute's firewall; if that is the case, contact your firewall team or simply try from another network (like your network at home).
