How to access, set up, and test your account

Preliminary steps

All the documentation is maintained in the T3 TWiki pages: https://wiki.chipp.ch/twiki/bin/view/CmsTier3/WebHome . Please take a look to explore the T3's capabilities.

Information about T3 mailing-lists:

  • Please subscribe to the cms-tier3-users list via its web interface (list archives): cms-tier3-users@lists.psi.ch. This mailing list is used to communicate information on Tier-3 matters (downtimes, etc.) and for discussions among users and admins.
  • To contact the CMS Tier-3 administrators alone, write to: cms-tier3@lists.psi.ch
  • NOTE: Both lists are read by the administrators and are archived. You can submit support requests to either of them. Mails to the user list have the added advantage that they can be read by everyone.

Please also take a look at our policies about usage, quotas, etc. here: Tier3Policies.

T3 Linux Groups

The T3 users are partitioned in the following Linux groups:
  • ethz-ecal | ethz-bphys | ethz-ewk | ethz-higgs | ethz-susy | psi-bphys | psi-pixel | uniz-phys | uniz-higgs | uniz-pixel
This partitioning:
  • allows a faster and simpler understanding of the /pnfs space usage by leveraging the gid setting: in the past, with every file assigned to the single group cms, this kind of accounting was always very tricky in dCache.
  • allows accounting in the batch system according to the same gid setting.
  • allows the creation of dedicated "group dirs" in /pnfs where only your group can operate.
  • as a side effect, each new file written in /shome, /pnfs, /scratch or /tmp is created with the new gid, which prevents users from another group from accidentally deleting your group-writable files/dirs.

The former global primary group cms is now a secondary group; e.g. the following account belongs to the Linux group ethz-higgs:

$ id dmeister
uid=571(dmeister) gid=532(ethz-higgs) groups=532(ethz-higgs),500(cms)
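
The gid effect on new files can be checked directly: a file you create carries your primary group, so members of other groups cannot delete it by accident. A minimal sketch you can run from any directory (on the T3, the group printed would be your Linux group, e.g. ethz-higgs):

```shell
# Create a file in a scratch directory and inspect its group ownership.
tmpdir=$(mktemp -d)
touch "${tmpdir}/example.root"
stat -c '%G' "${tmpdir}/example.root"   # prints your primary group
rm -r "${tmpdir}"
```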

The following output shows how the /pnfs user dirs are assigned according to the partitioning:

$ ls -l /pnfs/psi.ch/cms/trivcat/store/user | grep -v cms 
total 56
drwxr-xr-x    2 alschmid     uniz-bphys 512 Feb 21  2013 alschmid
drwxr-xr-x    5 amarini      ethz-ewk   512 Nov  7 15:37 amarini
drwxr-xr-x   18 andis        ethz-bphys 512 Jan  5  2010 andis
drwxr-xr-x    2 arizzi       ethz-bphys 512 Sep 16 17:49 arizzi
drwxr-xr-x    5 bean         psi-bphys  512 Aug 24  2010 bean
drwxr-xr-x    5 bianchi      ethz-higgs 512 Sep  9 09:40 bianchi
drwxr-xr-x   30 bmillanm     uniz-bphys 512 Jan 17  2012 bmillanm
drwxr-xr-x   29 bortigno     ethz-higgs 512 Apr 18  2013 bortigno
drwxr-xr-x   98 buchmann     ethz-susy  512 Nov  5 20:36 buchmann
...

These are the current group dirs in /pnfs; each user can create a subdir in his/her group dir and store the group files there:

 $ ls -l /pnfs/psi.ch/cms/trivcat/store/t3groups/
total 5
drwxrwxr-x 2 root ethz-bphys 512 Nov  8 15:18 ethz-bphys
drwxrwxr-x 2 root ethz-ecal  512 Nov  8 15:18 ethz-ecal
drwxrwxr-x 2 root ethz-ewk   512 Nov  8 15:18 ethz-ewk
drwxrwxr-x 2 root ethz-higgs 512 Nov  8 15:18 ethz-higgs
drwxrwxr-x 2 root ethz-susy  512 Nov  8 15:18 ethz-susy
drwxrwxr-x 2 root psi-bphys  512 Nov  8 15:18 psi-bphys
drwxrwxr-x 2 root psi-pixel  512 Nov  8 15:18 psi-pixel
drwxrwxr-x 2 root uniz-bphys 512 Nov  8 15:18 uniz-bphys
drwxrwxr-x 2 root uniz-higgs 512 Nov  8 15:18 uniz-higgs
drwxrwxr-x 2 root uniz-pixel 512 Nov  8 15:18 uniz-pixel
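
The path of your personal subdirectory inside the group dir can be derived from your primary Linux group; a sketch (the final mkdir must of course be run on a T3 machine with write access to /pnfs, or via the usual grid tools):

```shell
# Build the path of a personal subdirectory inside your group dir.
GROUP=$(id -gn)   # your primary Linux group, e.g. ethz-higgs
ME=$(id -un)
GROUP_DIR="/pnfs/psi.ch/cms/trivcat/store/t3groups/${GROUP}/${ME}"
echo "${GROUP_DIR}"
# On the T3 you would then create it, e.g.:  mkdir "${GROUP_DIR}"
```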

If your account does not have a Linux group assigned, please write to us.

Basic setup

We offer the following user interface (UI) machines. Please note that there is a convention about which users should use which UI.

Access to Login nodes is based on the institution

Access is not technically restricted, to allow for some freedom, but you are requested to use the UI dedicated to your institution.

UI login node   For institution   HW specs
t3ui01.psi.ch   ETHZ, PSI         132 GB RAM, 72 CPU cores (HT), 5 TB /scratch
t3ui02.psi.ch   All               132 GB RAM, 72 CPU cores (HT), 5 TB /scratch
t3ui03.psi.ch   UNIZ              132 GB RAM, 72 CPU cores (HT), 5 TB /scratch

  1. Try to log in to the User Interface machine (use the -Y or -X flag for working with X applications; you can also connect with the NX client, which allows you to work efficiently with graphical applications):
    ssh -Y username@t3ui02.psi.ch
    
  2. If you are an external user and do not have a standard PSI account, you need to change your password the first time you log in. Please use the standard passwd utility.
  3. Copy your grid credentials to the standard places, i.e. to ~/.globus/userkey.pem and ~/.globus/usercert.pem and make sure that the permissions are set correctly like in this output:
    -rw-r--r--  1 feichtinger cms 2961 Mar 17  2008 usercert.pem
    -rw-------  1 feichtinger cms 1917 Mar 17  2008 userkey.pem
    
  4. source a grid environment
    source /swshare/psit3/etc/profile.d/cms_ui_env.sh   # for bash
    source /swshare/psit3/etc/profile.d/cms_ui_env.csh   # for tcsh
    
  5. Try to create a proxy certificate for CMS
    voms-proxy-init -voms cms
    
    If this fails, run the command with an additional -debug flag; the error message will usually be sufficient for us to point out the problem. Make sure that your certificate is registered with CMS as described in these instructions from our Tier-2 wiki.
  6. Test your access to the PSI Storage element with the test-dCacheProtocols command. You should see output like this (no failed tests):
    $ test-dCacheProtocols -n t3se01.psi.ch -p /pnfs/psi.ch/cms/testing -i "GSIDCAP-write SRMv1-adv-del SRMv1-adv-del1 SRMv1-write SRMv1-get-meta SRMv1-read SRMv1-adv-del2"
    Test directory: /tmp/dcachetest-20131118-1202-1970
    TEST: GSIDCAP-write ......  [IGNORE]
    TEST: SRMv1-adv-del ......  [IGNORE]
    TEST: GFTP-write ......  [OK]
    TEST: GFTP-ls ......  [OK]
    TEST: GFTP-read ......  [OK]
    TEST: DCAP-read ......  [OK]
    TEST: SRMv1-adv-del1 ......  [IGNORE]
    TEST: SRMv1-write ......  [IGNORE]
    TEST: SRMv1-get-meta ......  [IGNORE]
    TEST: SRMv1-read ......  [IGNORE]
    TEST: SRMv1-adv-del2 ......  [IGNORE]
    TEST: SRMv2-write ......  [OK]
    TEST: SRMv2-ls ......  [OK]
    TEST: SRMv2-read ......  [OK]
    TEST: SRMv2-rm ......  [OK]
    
  7. Note: the test-dCacheProtocols tool can also be used to test access to a remote storage element (use the -h flag to get more info about it), e.g. to test CSCS:
    $ test-dCacheProtocols -n storage01.lcg.cscs.ch -p /pnfs/lcg.cscs.ch/cms/testing -i "GSIDCAP-write DCAP-read SRMv1-adv-del SRMv1-adv-del1 SRMv1-write SRMv1-get-meta SRMv1-read SRMv1-adv-del2"
    Test directory: /tmp/dcachetest-20131118-1201-1784
    TEST: GSIDCAP-write ......  [IGNORE]
    TEST: SRMv1-adv-del ......  [IGNORE]
    TEST: GFTP-write ......  [OK]
    TEST: GFTP-ls ......  [OK]
    TEST: GFTP-read ......  [OK]
    TEST: DCAP-read ......  [IGNORE]
    TEST: SRMv1-adv-del1 ......  [IGNORE]
    TEST: SRMv1-write ......  [IGNORE]
    TEST: SRMv1-get-meta ......  [IGNORE]
    TEST: SRMv1-read ......  [IGNORE]
    TEST: SRMv1-adv-del2 ......  [IGNORE]
    TEST: SRMv2-write ......  [OK]
    TEST: SRMv2-ls ......  [OK]
    TEST: SRMv2-read ......  [OK]
    TEST: SRMv2-rm ......  [OK]
    
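
The permission requirements from step 3 can be rehearsed anywhere; the sketch below uses a temporary directory in place of ~/.globus, and the touched files are stand-ins for your real grid credentials:

```shell
# Set the permissions that grid tools expect on the credential files.
globus_dir=$(mktemp -d)                  # stands in for ~/.globus
touch "${globus_dir}/usercert.pem" "${globus_dir}/userkey.pem"
chmod 644 "${globus_dir}/usercert.pem"   # certificate: world-readable is fine
chmod 600 "${globus_dir}/userkey.pem"    # private key: owner-only access
ls -l "${globus_dir}"
rm -r "${globus_dir}"
```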

Changing your login shell

If you later need to change your default login shell, please send a request to the T3 admin list (cms-tier3@lists.psi.ch).

AFS CERN Ticket

You should use the following sequence of commands at T3:
kinit ${Your_CERN_Username}@CERN.CH
aklog cern.ch
The first command gets you a Kerberos ticket; the second uses that ticket to obtain an authentication token from CERN's AFS service.
Topic revision: r19 - 2013-11-18 - FabioMartinelli