How to access, set up, and test your account

Mailing lists and communication with admins and other users

  • cms-tier3-users@lists.psi.ch: list through which we broadcast information (e.g. about downtimes). It can also be used for discussions among users (e.g. getting help from other users). You must subscribe to this list using its web interface (list archives).
  • cms-tier3@lists.psi.ch: use this list to reach the Tier-3 admins, typically if you have a problem and need help. What you write to this list is only seen by the administrators.

Both lists are read by the administrators and are archived.

First Steps on T3 User Interfaces (UI)

Three identical User Interface servers (UIs) are available for program development and T3 batch system job submission:

Access to the login nodes is organized by institution

The access is not technically restricted, to allow for some freedom, but you are requested to use the UI dedicated to your institution.

UI              Login node for institution   HW specs
t3ui01.psi.ch   ETHZ, PSI                    132 GB RAM, 72 CPU cores (HT), 5 TB /scratch
t3ui02.psi.ch   All                          132 GB RAM, 72 CPU cores (HT), 5 TB /scratch
t3ui03.psi.ch   UNIZ                         132 GB RAM, 72 CPU cores (HT), 5 TB /scratch
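
As an optional convenience, you can add a host alias to your ~/.ssh/config so that the login in step 1 below becomes a short command. This is only a sketch; the alias name and user name are placeholders, and you should point HostName at the UI dedicated to your institution.

# Optional convenience: define an ssh alias for your UI (names below are placeholders)
cat >> ~/.ssh/config <<'EOF'
Host t3ui
    HostName t3ui02.psi.ch
    User your_t3_username
    ForwardX11 yes
EOF
# afterwards a simple "ssh t3ui" (or "ssh -Y t3ui") logs you in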

  1. Use ssh to log in to one of the UI servers t3ui0*. You can use the -Y or -X flag if you want to work with graphical X applications.
       ssh -Y username@t3ui02.psi.ch
       
  2. If you are an ETHZ or UniZ user and do not have a regular PSI account, you will have to change your initial password after logging in for the first time. Modify the initial password by using the passwd command.
  3. In order to work with resources on the WLCG grid you need to have a grid X.509 certificate and a matching private key. Copy these credentials to the standard locations of ~/.globus/userkey.pem and ~/.globus/usercert.pem and make sure that their permissions are properly set. The user key must NOT be readable by any other user!
    chmod 600 userkey.pem
    chmod 644 usercert.pem
    
    For details about how to extract those .pem files from your CERN User Grid-Certificate (usually a password-protected .p12 file) please follow https://twiki.cern.ch/twiki/bin/view/CMSPublic/PersonalCertificate; a hedged openssl sketch is also given right after this list.
  4. Make sure that your credentials are registered with the CMS Virtual Organization (see the CERN documentation for details). Otherwise, the next step will fail.
  5. Create your short-term credentials in the form of a proxy certificate with CMS extensions (valid for 168 hours):
    voms-proxy-init --voms cms --valid 168:00
    If the command fails you can run it again adding a -debug flag to troubleshoot the problem.
  6. Test your access to the PSI Storage Element using our test-dCacheProtocols testing suite:
    [feichtinger@t3ui01 ~]$ test-dCacheProtocols
    TEST: GFTP-write ......  [OK]
    TEST: GFTP-ls ......  [OK]
    TEST: GFTP-read ......  [OK]
    TEST: DCAP-read ......  [OK]
    TEST: XROOTD-LAN-write ......  [OK]
    TEST: XROOTD-LAN-stat ......  [OK]
    TEST: XROOTD-LAN-read ......  [OK]
    TEST: XROOTD-LAN-rm ......  [OK]
    TEST: XROOTD-WAN-write ......  [OK]
    TEST: XROOTD-WAN-read ......  [OK]
    TEST: XROOTD-WAN-rm ......  [OK]
    TEST: SRMv2-write ......  [OK]
    TEST: SRMv2-ls ......  [OK]
    TEST: SRMv2-read ......  [OK]
    TEST: SRMv2-rm ......  [OK]
    
    • If a test fails, an error message will be written to the screen, pointing you to a file containing the details of the error. Please send this file together with all relevant information to cms-tier3@lists.psi.ch.
    • TIP: You can use the -v (verbose) flag to see the commands that the script executes. This is a good way to learn about the slightly esoteric syntax for interacting with grid storage. If you supply a -d flag as well, the tests will not be run, but you will be able to look at all the actions that the script would execute.
  7. Test write access to your user area on the storage element. The user area is located underneath /pnfs/psi.ch/cms/trivcat/store/user and by convention has your CMS HyperNews name as directory name, i.e. /pnfs/psi.ch/cms/trivcat/store/user/${your_cms_name}. However, due to historic procedures, it may also be that your Tier-3 login name is used for this directory. E.g.
    test-dCacheProtocols -l /pnfs/psi.ch/cms/trivcat/store/user/feichtinger
    TEST: GFTP-write ......  [OK]
    TEST: GFTP-ls ......  [OK]
    TEST: GFTP-read ......  [OK]
    TEST: DCAP-read ......  [OK]
    TEST: XROOTD-LAN-write ......  [OK]
    TEST: XROOTD-LAN-stat ......  [OK]
    TEST: XROOTD-LAN-read ......  [OK]
    TEST: XROOTD-LAN-rm ......  [OK]
    TEST: XROOTD-WAN-write ......  [OK]
    TEST: XROOTD-WAN-read ......  [OK]
    TEST: XROOTD-WAN-rm ......  [OK]
    TEST: SRMv2-write ......  [OK]
    TEST: SRMv2-ls ......  [OK]
    TEST: SRMv2-read ......  [OK]
    TEST: SRMv2-rm ......  [OK]
    
  8. To set up the CMS software environment (a sketch of a typical CMSSW working-area setup is shown further below):
    export VO_CMS_SW_DIR=/cvmfs/cms.cern.ch/
    source ${VO_CMS_SW_DIR}/cmsset_default.sh
    
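As referenced in step 3, the following is a minimal sketch of how the two .pem files can be extracted from a password-protected .p12 certificate bundle with openssl. The file name mycert.p12 is just a placeholder; the authoritative recipe is the CERN twiki page linked in step 3.

# Sketch only: extract certificate and private key from a .p12 bundle (mycert.p12 is a placeholder)
mkdir -p ~/.globus
openssl pkcs12 -in mycert.p12 -clcerts -nokeys -out ~/.globus/usercert.pem
openssl pkcs12 -in mycert.p12 -nocerts -out ~/.globus/userkey.pem
chmod 644 ~/.globus/usercert.pem
chmod 600 ~/.globus/userkey.pem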

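As a follow-up to step 8, this is a hedged sketch of how a CMSSW working area is typically created once the environment has been sourced; the release name CMSSW_10_6_30 is only an example, use whatever release your analysis needs.

# Sketch: create and enter a CMSSW working area (the release name is an example)
export VO_CMS_SW_DIR=/cvmfs/cms.cern.ch/
source ${VO_CMS_SW_DIR}/cmsset_default.sh
cmsrel CMSSW_10_6_30        # 'cmsrel' is an alias defined by cmsset_default.sh
cd CMSSW_10_6_30/src
cmsenv                      # sets up compiler, ROOT, etc. for this release
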
Understanding Tier-3 storage

The Tier-3 offers different kinds of storage and it is important that you understand how to use them to their best advantage.

  • /t3home/${USER}: This is your home directory. It is relatively small (10 GB), but it is backed up daily. You should use it for your code, important documents, configuration files, etc. You should not use it for high-I/O operations: since this file system is shared between all users on all nodes, it can easily get overloaded by I/O requests, which typically results in delays (e.g. an ls will block for a few seconds before you get the output).
  • /scratch/${USER}: Each node, whether worker node or user interface, has a /scratch area. This is where you should perform tasks requiring intensive I/O operations. Your batch jobs should produce their files in this area on the local node and only at the end of the job copy the output to its final destination (see the sketch after this list).
  • /work: This is another shared file system which offers more space. It is implemented through a single storage server, so again you should not use it for intensive I/O operations.
  • /pnfs (the Storage Element): The SE is the main large storage that you can use. It can be accessed in a number of different ways, each via a different protocol. The SEs allow you to transfer large files between sites, but they also provide efficient file access for analysis jobs. The test-dCacheProtocols script above tests many of them.
    • NFS4.1: NEW in June 2020. You can now directly access the SE like a normal file system under the path /pnfs/psi.ch/cms/trivcat/store/user/${USER}. On the user interfaces the file system is mounted in read/write mode, so you can copy files into your area and create new ones. This is what you want to use if you want to read numpy datasets directly via native python from the SE. On the worker nodes the file system can be accessed in read-only mode. Do not run commands like du or find on this file system: the area is very large and contains millions of files, so such commands can take an hour and put a noticeable load on the system.
      • NOTE: even though it feels like a normal filesystem that can be reached just via a standard path, the underlying storage is not fully POSIX compliant. One of the most pronounced differences is that files are immutable, i.e. you cannot modify files once they have been created. You cannot open them in "append" mode. But you can delete a file and then replace it with one of the same name.
    • gsiftp, SRM, xroot: These protocols and their associated shell commands (globus-url-copy, gfal*, xrdcp) are useful for copying whole files between sites.
    • xroot, dcap: these protocols allow efficient random access to the files. This is what you can typically use from ROOT. But your application needs to provide support for these protocols. If this is not the case, e.g. when you want to analyze numpy files with native python, then you should use NFS4.1 (which conceptually is the easiest to use, since it mostly feels and behaves like a normal filesystem).
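
To make the recommended usage pattern concrete, here is a hedged sketch of a job script fragment that does its heavy I/O on the local /scratch area and only copies the final result to the storage element at the end. All file and directory names are placeholders; on worker nodes, where /pnfs is mounted read-only, you would replace the final cp by an xrdcp or gfal-copy to the site's xrootd/SRM endpoint.

# Sketch of the recommended I/O pattern (all names are placeholders)
WORKDIR=/scratch/${USER}/myjob_$$           # per-job scratch directory on the local node
mkdir -p "${WORKDIR}" && cd "${WORKDIR}"

# ... run your analysis here, writing its output locally, e.g.:
# myanalysis --output output.root

# Only at the end, copy the result to your SE user area. On a UI the NFS 4.1
# mount is read/write, so a plain cp works; remember that files on /pnfs are
# immutable (no appending, but delete-and-replace is fine).
cp output.root /pnfs/psi.ch/cms/trivcat/store/user/${USER}/output.root

# Clean up the local scratch area
cd / && rm -rf "${WORKDIR}"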

Backup policies

  • /t3home: Files are backed up daily and are available through snapshots.
  • /work: Files are not backed up, but they reside on high-quality storage with some redundancy, and snapshots are available.
  • Storage element (/pnfs): Files are not backed up, but they reside on high-quality storage with some redundancy.

Recovering files from snapshots for /t3home and /work is outlined in this article; a hedged sketch is shown below.
Attention: there are NO backups of /scratch and /pnfs.
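
As a rough illustration only, and under the assumption that the snapshots are exposed as a hidden .snapshot directory (a common convention for this kind of storage, but please verify the actual path and procedure in the article linked above), recovering an accidentally deleted file from your home area might look like this:

# Assumption: snapshots are visible under a hidden .snapshot directory
ls ~/.snapshot/                                           # list the available snapshots
cp ~/.snapshot/<snapshot_name>/myfile ~/myfile.restored   # restore one file from a chosen snapshot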

T3 policies

Please read and take note of our Policies

Linux groups (partially OBSOLETE - needs to be revised)

Each T3 user belongs to both a primary group and the common secondary group cms; the latter is meant to own common files like the ones downloaded by the PhEDEx file transfer service. The T3 primary groups are:
  • ETHZ: ethz-ecal, ethz-bphys, ethz-ewk, ethz-higgs, ethz-susy
  • UniZ: uniz-higgs, uniz-pixel, uniz-bphys
  • PSI: psi-bphys, psi-pixel

For instance, this is the primary and the secondary group of a generic T3 account:

$ id auser
uid=571(auser) gid=532(ethz-higgs) groups=532(ethz-higgs),500(cms)
The T3 group areas are located under /pnfs/psi.ch/cms/trivcat/store/t3groups

Optional Initial Setups

Local Anaconda/Conda installation

One can perform the following steps to install a local Anaconda (Miniconda):
  • Only once:
cd /work/${USER}/
wget  https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh
sh Miniconda3-latest-Linux-x86_64.sh -b -p ./miniconda3
rm Miniconda3-latest-Linux-x86_64.sh
  • Every time when using this conda installation (a short usage sketch follows below):
export PATH=/work/${USER}/miniconda3/bin:${PATH}
    (or, equivalently from within /work/${USER}: export PATH=${PWD}/miniconda3/bin:${PATH})
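
Once the miniconda3 bin directory is on your PATH, a purely illustrative workflow for creating and using an environment could look as follows; the environment name and package list are examples only.

# Example only: create a dedicated environment and activate it
conda create -y -n myanalysis python=3.9 numpy   # environment name and packages are examples
source activate myanalysis                       # on newer conda versions: conda activate myanalysis
python -c "import numpy; print(numpy.__version__)"
source deactivate                                # or: conda deactivate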

Installing the CERN CA files into your Web Browser

Install the CERN CA files in your Web Browser; otherwise your Web Browser might constantly bother you about all the CERN https:// URLs. Web Browsers typically ship with many well-known CA files by default, but not with the CERN CA files.

Applying for the VOMS Group /cms/chcms membership

A dedicated 'Swiss' VOMS group called /cms/chcms is available in order to get more rights over the CMS HW resources installed at T2_CH_CSCS, Lugano, namely:
  • higher priority on the T2_CH_CSCS batch queues
  • additional Jobs slots on the T2_CH_CSCS batch queues
  • additional /pnfs space inside the T2_CH_CSCS grid storage
  • during 2017, a group area like the T3 groups areas /pnfs/psi.ch/cms/trivcat/store/t3groups/

When a user belongs to the /cms/chcms group and runs voms-proxy-init --voms cms, voms-proxy-info --all will report the additional /cms/chcms/Role=NULL/Capability=NULL attribute, like:

$ voms-proxy-info --all | grep /cms
attribute : /cms/Role=NULL/Capability=NULL
attribute : /cms/chcms/Role=NULL/Capability=NULL

To apply for the /cms/chcms membership, load your X.509 certificate into your daily Web Browser (it is probably already there), then go to https://voms2.cern.ch:8443/voms/cms/group/edit.action?groupId=5 and request the /cms/chcms membership. Be aware that port 8443 might be blocked by your institute's firewall; if that is the case, contact your firewall team or simply try from another network (like your network at home).
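
Once the membership has been granted, the group attribute is normally included automatically when you create a proxy with voms-proxy-init --voms cms; if you want to request it explicitly, a small sketch is:

# Request a proxy that explicitly carries the /cms/chcms group attribute
voms-proxy-init --voms cms:/cms/chcms --valid 168:00
# Verify the attributes of the current proxy
voms-proxy-info --all | grep /cms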
