How to access, set up, and test your account
Preliminary steps
All the documentation is maintained in the T3 TWiki pages:
https://wiki.chipp.ch/twiki/bin/view/CmsTier3/WebHome. Please take a look at it to explore the T3's capabilities.
Information about T3 mailing-lists:
- Please subscribe to the cms-tier3-users list using this web interface (list archives): cms-tier3-users@lists.psi.ch. This mailing list is used to communicate information on Tier-3 matters (downtimes, etc.) and for discussions among users and admins.
- To contact the CMS Tier-3 administrators alone, write to: cms-tier3@lists.psi.ch
- NOTE: Both lists are read by the administrators and are archived. You can submit support requests to either of them. Mails to the user list have the added advantage that they can be read by everyone.
Please also take a look at our policies about usage, quotas, etc. here: Tier3Policies.
T3 Linux Groups
The T3 users are partitioned into the following Linux groups; the former primary group cms is now a secondary group to which each user belongs:
- ethz-ecal | ethz-bphys | ethz-ewk | ethz-higgs | ethz-susy
- psi-bphys | psi-pixel
- uniz-bphys | uniz-higgs | uniz-pixel
The group partitioning:
- allows a faster and simpler accounting of the /pnfs space usage by leveraging the gid setting: in the past, having all files assigned to the single group cms made this kind of accounting very tricky in dCache.
- allows accounting in the batch system according to the gid setting as well.
- allowed the creation of dedicated "group dirs" in /pnfs where only the group members can operate.
- as a side effect, each new file written in /shome, /pnfs, /scratch or /tmp gets a specific gid, different from cms as explained above, and this specific gid prevents the accidental deletion of files by other groups.
Here is an example of a user with their two groups:
$ id auser
uid=571(auser) gid=532(ethz-higgs) groups=532(ethz-higgs),500(cms)
The following output shows how the /pnfs directories are assigned according to the group partitioning:
$ ls -l /pnfs/psi.ch/cms/trivcat/store/user | grep -v cms
total 56
drwxr-xr-x 2 alschmid uniz-bphys 512 Feb 21 2013 alschmid
drwxr-xr-x 5 amarini ethz-ewk 512 Nov 7 15:37 amarini
drwxr-xr-x 18 andis ethz-bphys 512 Jan 5 2010 andis
drwxr-xr-x 2 arizzi ethz-bphys 512 Sep 16 17:49 arizzi
drwxr-xr-x 5 bean psi-bphys 512 Aug 24 2010 bean
drwxr-xr-x 5 bianchi ethz-higgs 512 Sep 9 09:40 bianchi
drwxr-xr-x 30 bmillanm uniz-bphys 512 Jan 17 2012 bmillanm
drwxr-xr-x 29 bortigno ethz-higgs 512 Apr 18 2013 bortigno
drwxr-xr-x 98 buchmann ethz-susy 512 Nov 5 20:36 buchmann
...
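The per-group accounting benefit mentioned above can be illustrated with a short pipeline that counts user directories per Linux group. This is a sketch, not an official tool: to keep it runnable anywhere it operates on a small captured sample of the listing above, while on a UI you could feed it the real `ls -l` output as shown in the comment.

```shell
# Count /pnfs user directories per Linux group. On a UI you could run:
#   ls -l /pnfs/psi.ch/cms/trivcat/store/user | awk 'NR>1 {print $4}' | sort | uniq -c | sort -rn
# Here the same pipeline runs on a captured sample of the listing above:
listing='drwxr-xr-x  2 alschmid uniz-bphys 512 Feb 21  2013 alschmid
drwxr-xr-x  5 amarini  ethz-ewk   512 Nov  7 15:37 amarini
drwxr-xr-x 18 andis    ethz-bphys 512 Jan  5  2010 andis
drwxr-xr-x  2 arizzi   ethz-bphys 512 Sep 16 17:49 arizzi'
# Field 4 of each ls -l line is the group; count and sort by frequency.
printf '%s\n' "$listing" | awk '{print $4}' | sort | uniq -c | sort -rn
```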
This last output shows the "group dirs" mentioned before:
$ ls -l /pnfs/psi.ch/cms/trivcat/store/t3groups/
total 5
drwxrwxr-x 2 root ethz-bphys 512 Nov 8 15:18 ethz-bphys
drwxrwxr-x 2 root ethz-ecal 512 Nov 8 15:18 ethz-ecal
drwxrwxr-x 2 root ethz-ewk 512 Nov 8 15:18 ethz-ewk
drwxrwxr-x 2 root ethz-higgs 512 Nov 8 15:18 ethz-higgs
drwxrwxr-x 2 root ethz-susy 512 Nov 8 15:18 ethz-susy
drwxrwxr-x 2 root psi-bphys 512 Nov 8 15:18 psi-bphys
drwxrwxr-x 2 root psi-pixel 512 Nov 8 15:18 psi-pixel
drwxrwxr-x 2 root uniz-bphys 512 Nov 8 15:18 uniz-bphys
drwxrwxr-x 2 root uniz-higgs 512 Nov 8 15:18 uniz-higgs
drwxrwxr-x 2 root uniz-pixel 512 Nov 8 15:18 uniz-pixel
If your account does not have a Linux group assigned, please write to us.
Basic setup
We offer the following user interfaces. Please note that there is a convention on which users should access which UIs.
Access to Login nodes is based on the institution
Access is not technically restricted, to allow for some freedom, but you are requested to use the UI dedicated to your institution.
UI Login node | For institution | HW specs
t3ui03.psi.ch | UNIZ | 132 GB RAM, 72 CPU cores (HT), 5 TB /scratch
t3ui01.psi.ch | ETHZ, PSI | 132 GB RAM, 72 CPU cores (HT), 5 TB /scratch
t3ui02.psi.ch | All | 132 GB RAM, 72 CPU cores (HT), 5 TB /scratch
- Try to log into a t3ui* machine (use the -Y or -X flag to work with X applications; you can also connect with the NX client, which allows working efficiently with graphical applications):
ssh -Y username@t3ui02.psi.ch
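If you log in often, an entry in ~/.ssh/config saves typing. The host alias `t3` and the username below are illustrative placeholders, not a site convention; the sketch writes the entry to a temporary file so it does not clobber a real config, and you would copy it into ~/.ssh/config yourself.

```shell
# Example ~/.ssh/config entry so that `ssh t3` logs in with X forwarding.
# Alias and username are placeholders; written to a temp file for safety.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
Host t3
    HostName t3ui02.psi.ch
    User myusername
    ForwardX11 yes
EOF
cat "$cfg"
```

With such an entry in your real ~/.ssh/config, `ssh t3` behaves like `ssh -X myusername@t3ui02.psi.ch`.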
- If you are an external user and you don't have a standard PSI account, you need to change your password the first time you log in; use the standard passwd utility.
- Copy your grid credentials to the standard places, i.e. to ~/.globus/userkey.pem and ~/.globus/usercert.pem, and make sure that the permissions are set correctly, as in this output:
-rw-r--r-- 1 feichtinger cms 2961 Mar 17 2008 usercert.pem
-rw------- 1 feichtinger cms 1917 Mar 17 2008 userkey.pem
For details about how to extract those files from your CERN User Grid-Certificate please read https://gridca.cern.ch/gridca/Help/?kbid=024010.
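A quick way to reach the permission state shown above is chmod 644 on the certificate and 600 on the key. The sketch below operates on scratch copies in a temporary directory so it is safe to run anywhere; on the UI you would target the real files in ~/.globus directly.

```shell
# Set the permissions shown in the example output above:
# certificate world-readable (644), private key owner-only (600).
demo=$(mktemp -d)                      # stand-in for ~/.globus
touch "$demo/usercert.pem" "$demo/userkey.pem"
chmod 644 "$demo/usercert.pem"
chmod 600 "$demo/userkey.pem"
stat -c '%a %n' "$demo"/*.pem          # print the octal mode of each file
```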
- Source a grid environment:
source /swshare/psit3/etc/profile.d/cms_ui_env.sh # for bash
source /swshare/psit3/etc/profile.d/cms_ui_env.csh # for tcsh
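Which of the two files to source depends on your login shell. The following sketch (the case logic is illustrative; the paths are the ones above) only prints which file it would pick, rather than sourcing it:

```shell
# Pick the grid environment file matching the login shell (sketch only:
# it prints the choice instead of sourcing it).
case "$(basename "${SHELL:-/bin/bash}")" in
  *csh) grid_env=/swshare/psit3/etc/profile.d/cms_ui_env.csh ;;  # tcsh/csh
  *)    grid_env=/swshare/psit3/etc/profile.d/cms_ui_env.sh ;;   # bash and friends
esac
echo "would source: $grid_env"
```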
- Try to create a proxy certificate for CMS
voms-proxy-init -voms cms
If this fails, run the command with an additional -debug flag; the error message will usually be sufficient for us to point out the problem. Make sure that your certificate is registered with CMS as described in these instructions from our Tier-2 wiki.
- Test your access to the PSI storage element with the test-dCacheProtocols command. You should see output like this (no failed tests):
$ test-dCacheProtocols -n t3se01.psi.ch -p /pnfs/psi.ch/cms/testing -i "GSIDCAP-write SRMv1-adv-del SRMv1-adv-del1 SRMv1-write SRMv1-get-meta SRMv1-read SRMv1-adv-del2"
Test directory: /tmp/dcachetest-20131118-1202-1970
TEST: GSIDCAP-write ...... [IGNORE]
TEST: SRMv1-adv-del ...... [IGNORE]
TEST: GFTP-write ...... [OK]
TEST: GFTP-ls ...... [OK]
TEST: GFTP-read ...... [OK]
TEST: DCAP-read ...... [OK]
TEST: SRMv1-adv-del1 ...... [IGNORE]
TEST: SRMv1-write ...... [IGNORE]
TEST: SRMv1-get-meta ...... [IGNORE]
TEST: SRMv1-read ...... [IGNORE]
TEST: SRMv1-adv-del2 ...... [IGNORE]
TEST: SRMv2-write ...... [OK]
TEST: SRMv2-ls ...... [OK]
TEST: SRMv2-read ...... [OK]
TEST: SRMv2-rm ...... [OK]
- Note: the test-dCacheProtocols tool can also be used to test access to a remote storage element (use the -h flag to get more info), e.g. to test CSCS:
$ test-dCacheProtocols -n storage01.lcg.cscs.ch -p /pnfs/lcg.cscs.ch/cms/testing -i "GSIDCAP-write DCAP-read SRMv1-adv-del SRMv1-adv-del1 SRMv1-write SRMv1-get-meta SRMv1-read SRMv1-adv-del2"
Test directory: /tmp/dcachetest-20131118-1201-1784
TEST: GSIDCAP-write ...... [IGNORE]
TEST: SRMv1-adv-del ...... [IGNORE]
TEST: GFTP-write ...... [OK]
TEST: GFTP-ls ...... [OK]
TEST: GFTP-read ...... [OK]
TEST: DCAP-read ...... [IGNORE]
TEST: SRMv1-adv-del1 ...... [IGNORE]
TEST: SRMv1-write ...... [IGNORE]
TEST: SRMv1-get-meta ...... [IGNORE]
TEST: SRMv1-read ...... [IGNORE]
TEST: SRMv1-adv-del2 ...... [IGNORE]
TEST: SRMv2-write ...... [OK]
TEST: SRMv2-ls ...... [OK]
TEST: SRMv2-read ...... [OK]
TEST: SRMv2-rm ...... [OK]
Changing your login shell
If you later need to change your default login shell, please write to the T3 admin list.
AFS CERN Ticket
You should use the following sequence of commands at T3:
kinit ${Your_CERN_Username}@CERN.CH
aklog cern.ch
The first command gets you a Kerberos ticket; the second command uses that ticket to obtain an authentication token from CERN's AFS service.