How to access, set up, and test your account
Preliminary steps
All the documentation is maintained in the T3 twiki pages:
https://wiki.chipp.ch/twiki/bin/view/CmsTier3/WebHome . Please take a look at it to explore T3's capabilities.
Information about T3 mailing-lists:
- Please subscribe to the cms-tier3-users mailing list:
cms-tier3-users@lists.psi.ch
. This list is used to communicate information on Tier-3 matters (downtimes, etc.) and for discussions among users and admins.
- To contact the CMS Tier-3 administrators alone, write to:
cms-tier3@lists.psi.ch
- NOTE: Both lists are read by the administrators and are archived. You can submit support requests to either of them. Mails to the user list have the added advantage that they can be read by everyone.
Please also take a look at our policies about usage, quotas, etc.: Tier3Policies.
T3 Linux Groups
All T3 users are partitioned into Linux groups. The former primary group
cms is now a secondary group that still contains all users; it is used for common files, like those uploaded to T3 by
PhEDEx:
ETHZ | UniZ | PSI |
ethz-ewk | uniz-bphys | |
ethz-higgs | | |
ethz-susy | | |
ethz-ecal | uniz-higgs | psi-bphys |
ethz-bphys | uniz-pixel | psi-pixel |
The Linux groups:
- allow a faster and simpler understanding of the
/pnfs
space usage by leveraging the gid
setting: in the past, having all files assigned to the single group cms
prevented simple per-group accounting in dCache.
- allow accounting in the Sun Grid Engine batch system according to the
gid
setting.
- allowed the creation of a set of dedicated "group dirs"
/pnfs/psi.ch/cms/trivcat/store/t3groups/
where only the group members can write or delete.
- also simplify the
/shome
, /pnfs
, /scratch
and /tmp
file/dir accounting.
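The per-group accounting that the gid setting enables can be sketched with standard tools; the directory path below is only an example, and the pipeline simply sums file sizes by the Unix group they belong to:

```shell
# Sum file sizes per Unix group in a directory (the path is an example;
# the same idea applies to /shome, /scratch or a /pnfs mount).
dir="${1:-/tmp}"
find "$dir" -maxdepth 1 -type f -printf '%g %s\n' 2>/dev/null \
  | awk '{sum[$1] += $2} END {for (g in sum) printf "%s\t%d bytes\n", g, sum[g]}'
```

With everything owned by the single group cms, this kind of breakdown would collapse into one line, which is exactly the accounting problem the per-institution groups solve.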
For example, an ETHZ user with his primary and secondary groups:
$ id auser
uid=571(auser) gid=532(ethz-higgs) groups=532(ethz-higgs),500(cms)
The per-user directories under
/pnfs/psi.ch/cms/trivcat/store/user/
:
$ ls -l /pnfs/psi.ch/cms/trivcat/store/user | grep -v cms
total 56
drwxr-xr-x 2 alschmid uniz-bphys 512 Feb 21 2013 alschmid
drwxr-xr-x 5 amarini ethz-ewk 512 Nov 7 15:37 amarini
drwxr-xr-x 2 arizzi ethz-bphys 512 Sep 16 17:49 arizzi
drwxr-xr-x 5 bean psi-bphys 512 Aug 24 2010 bean
drwxr-xr-x 5 bianchi ethz-higgs 512 Sep 9 09:40 bianchi
drwxr-xr-x 98 buchmann ethz-susy 512 Nov 5 20:36 buchmann
...
The dedicated "group dirs"
/pnfs/psi.ch/cms/trivcat/store/t3groups/
belong to
root
, but each group has the
w
permission on its own directory:
$ ls -l /pnfs/psi.ch/cms/trivcat/store/t3groups/
total 5
drwxrwxr-x 2 root ethz-bphys 512 Nov 8 15:18 ethz-bphys
drwxrwxr-x 2 root ethz-ecal 512 Nov 8 15:18 ethz-ecal
drwxrwxr-x 2 root ethz-ewk 512 Nov 8 15:18 ethz-ewk
drwxrwxr-x 2 root ethz-higgs 512 Nov 8 15:18 ethz-higgs
drwxrwxr-x 2 root ethz-susy 512 Nov 8 15:18 ethz-susy
drwxrwxr-x 2 root psi-bphys 512 Nov 8 15:18 psi-bphys
drwxrwxr-x 2 root psi-pixel 512 Nov 8 15:18 psi-pixel
drwxrwxr-x 2 root uniz-bphys 512 Nov 8 15:18 uniz-bphys
drwxrwxr-x 2 root uniz-higgs 512 Nov 8 15:18 uniz-higgs
drwxrwxr-x 2 root uniz-pixel 512 Nov 8 15:18 uniz-pixel
Basic setup
We offer the following user interfaces (UIs), assigned according to the rules below. Generally speaking, the very old
SL5 UIs will be decommissioned during 2014/2015, so you should direct your efforts to the
SL6 UIs.
Access to login nodes is based on the institution
Access is not restricted, to allow for some freedom, but you are requested to use the UI dedicated to your institution.
UI Login node | For institution | HW specs |
t3ui01.psi.ch | ETHZ, PSI | 132GB RAM, 72 CPU cores (HT), 5TB /scratch |
t3ui02.psi.ch | All | 132GB RAM, 72 CPU cores (HT), 5TB /scratch |
t3ui03.psi.ch | UNIZ | 132GB RAM, 72 CPU cores (HT), 5TB /scratch |
- Try to log into a
t3ui*
machine; use the -Y
or -X
flag to work with X applications. You can also connect with an NX client, which allows you to work efficiently with graphical applications:
ssh -Y username@t3ui02.psi.ch
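If you log in often, an entry in your ~/.ssh/config can save typing. This is only a sketch: the host alias t3 and the username are placeholders you should adapt.

```
# Hypothetical ~/.ssh/config entry; "t3" and "username" are placeholders
Host t3
    HostName t3ui02.psi.ch
    User username
    ForwardX11 yes
```

After this, ssh t3 is equivalent to the full command above.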
- If you are an external user and you don't have a standard PSI account, you'll have to change your initial password the first time you log in; use the standard
passwd
utility.
- Copy your grid credentials to the standard places, i.e. to
~/.globus/userkey.pem
and ~/.globus/usercert.pem
, and make sure that both files' permissions are set correctly:
-rw-r--r-- 1 feichtinger cms 2961 Mar 17 2008 usercert.pem
-r-------- 1 feichtinger cms 1917 Mar 17 2008 userkey.pem
For details about how to extract those files from your CERN User Grid-Certificate please read https://gridca.cern.ch/gridca/Help/?kbid=024010.
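The permission fix for the two files above can be sketched as follows; it assumes you have already copied usercert.pem and userkey.pem into ~/.globus:

```shell
# Set the expected permissions on the grid credentials
# (assumes the two .pem files are already in place).
mkdir -p "$HOME/.globus"
chmod 644 "$HOME/.globus/usercert.pem"   # certificate: world-readable is fine
chmod 400 "$HOME/.globus/userkey.pem"    # private key: owner read-only
ls -l "$HOME/.globus"
```

The private key must not be readable by anyone else, otherwise grid tools will refuse to use it.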
- Source the grid environment associated with your login shell:
source /swshare/psit3/etc/profile.d/cms_ui_env.sh # for bash
source /swshare/psit3/etc/profile.d/cms_ui_env.csh # for tcsh
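To avoid sourcing this file by hand at every login, you can add a guarded line to your shell startup file. This is a sketch for bash (~/.bashrc); the guard keeps the line harmless on machines where the shared software area is not mounted:

```shell
# Append once to ~/.bashrc (bash assumed); the guard keeps this harmless
# on machines that do not have /swshare mounted.
if [ -f /swshare/psit3/etc/profile.d/cms_ui_env.sh ]; then
    source /swshare/psit3/etc/profile.d/cms_ui_env.sh
fi
```

tcsh users would use the equivalent test in ~/.cshrc with the .csh variant of the file.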
- You have to complete the CMS "Virtual Organization" registration, otherwise the following command
voms-proxy-init -voms cms
won't work. CERN provides the details about that procedure, e.g. who your representative is.
- Try to create a proxy certificate for CMS
voms-proxy-init -voms cms
If this fails, run the command with an additional -debug
flag, and the error message will usually be sufficient for us to point out the problem.
- Test your access to the PSI Storage element with the
test-dCacheProtocols
command. You should see output like this (no failed tests):
$ test-dCacheProtocols -n t3se01.psi.ch -p /pnfs/psi.ch/cms/testing -i "GSIDCAP-write SRMv1-adv-del SRMv1-adv-del1 SRMv1-write SRMv1-get-meta SRMv1-read SRMv1-adv-del2"
Test directory: /tmp/dcachetest-20131118-1202-1970
TEST: GSIDCAP-write ...... [IGNORE]
TEST: SRMv1-adv-del ...... [IGNORE]
TEST: GFTP-write ...... [OK]
TEST: GFTP-ls ...... [OK]
TEST: GFTP-read ...... [OK]
TEST: DCAP-read ...... [OK]
TEST: SRMv1-adv-del1 ...... [IGNORE]
TEST: SRMv1-write ...... [IGNORE]
TEST: SRMv1-get-meta ...... [IGNORE]
TEST: SRMv1-read ...... [IGNORE]
TEST: SRMv1-adv-del2 ...... [IGNORE]
TEST: SRMv2-write ...... [OK]
TEST: SRMv2-ls ...... [OK]
TEST: SRMv2-read ...... [OK]
TEST: SRMv2-rm ...... [OK]
- Be aware of the CSCS CMS User Page
- The
test-dCacheProtocols
tool can also be used to test access to a remote storage element (use the -h
flag to get more info); e.g. to test CSCS:
$ test-dCacheProtocols -n storage01.lcg.cscs.ch -p /pnfs/lcg.cscs.ch/cms/testing -i "GSIDCAP-write DCAP-read SRMv1-adv-del SRMv1-adv-del1 SRMv1-write SRMv1-get-meta SRMv1-read SRMv1-adv-del2"
Test directory: /tmp/dcachetest-20131118-1201-1784
TEST: GSIDCAP-write ...... [IGNORE]
TEST: SRMv1-adv-del ...... [IGNORE]
TEST: GFTP-write ...... [OK]
TEST: GFTP-ls ...... [OK]
TEST: GFTP-read ...... [OK]
TEST: DCAP-read ...... [IGNORE]
TEST: SRMv1-adv-del1 ...... [IGNORE]
TEST: SRMv1-write ...... [IGNORE]
TEST: SRMv1-get-meta ...... [IGNORE]
TEST: SRMv1-read ...... [IGNORE]
TEST: SRMv1-adv-del2 ...... [IGNORE]
TEST: SRMv2-write ...... [OK]
TEST: SRMv2-ls ...... [OK]
TEST: SRMv2-read ...... [OK]
TEST: SRMv2-rm ...... [OK]
Changing your account details
It's possible to have your login shell changed, e.g. from
bash
to
tcsh
, as well as your group and even your account name. Users have often requested a change of their
Grid cert subject:
, e.g. because they moved from one country to another and obtained a new certificate.
AFS CERN Ticket
You should use the following sequence of commands at T3:
kinit ${Your_CERN_Username}@CERN.CH
aklog cern.ch
The first command gets you a Kerberos ticket; the second uses that ticket to obtain an authentication token from CERN's AFS service.