How to access, set up, and test your account
Mailing lists and communication with admins and other users
- cms-tier3-users@lists.psi.ch : the list through which we broadcast information (e.g. about downtimes). It can also be used for discussions among users (e.g. getting help from other users). You must subscribe to this list using its web interface (list archives).
- cms-tier3@lists.psi.ch : Use this list to reach the Tier-3 admins, typically if you have a problem and you need help. What you write to this list is only seen by the administrators.
Both lists are read by the administrators and are archived.
First Steps on T3 User Interfaces (UI)
Three identical User Interface servers (UIs) are available as login nodes for the Tier-3 cluster. You can test your programs there, submit batch jobs, and do some interactive work. Production runs that inflict a heavy load on the system are not permitted, since they will impact the work of other users. Please run such jobs in the batch queues (one can also run interactively in an allocated batch slot).
Access to Login nodes is based on the institution
Access is not strictly restricted, to allow for some freedom, but you are requested to use the UI dedicated to your institution.
UI Login node | for institution | HW specs
t3ui01.psi.ch | ETHZ, PSI | 132 GB RAM, 72 CPU cores (HT), 5 TB /scratch
t3ui02.psi.ch | All | 132 GB RAM, 72 CPU cores (HT), 5 TB /scratch
t3ui03.psi.ch | UNIZ | 132 GB RAM, 72 CPU cores (HT), 5 TB /scratch
- Use ssh to log in to one of the UI servers t3ui0*. You can use the -Y or -X flag if you want to work with graphical X applications.
ssh -Y YourUsername@t3ui02.psi.ch
- If you are an ETHZ or UniZ user and do not have a regular PSI account, you will have to change your initial password after logging in for the first time. Modify the initial password using the passwd command.
- Check that you can access the Storage Element through the NFS protocol, by just creating and deleting a test file in your user area
touch /pnfs/psi.ch/cms/trivcat/store/user/${USER}/my-first-test
rm /pnfs/psi.ch/cms/trivcat/store/user/${USER}/my-first-test
- Check that you can access the NFS /work area
touch /work/${USER}/my-first-test
rm /work/${USER}/my-first-test
- To set up the basics of the CMS software environment, make sure that this works for you:
source ${VO_CMS_SW_DIR}/cmsset_default.sh
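Once that works, you can create a CMSSW working area in the usual way. A minimal sketch follows; the release name CMSSW_10_6_30 is only an example, pick any release actually available under ${VO_CMS_SW_DIR}:
source ${VO_CMS_SW_DIR}/cmsset_default.sh   # makes the cmsrel/cmsenv commands available
cmsrel CMSSW_10_6_30                        # create a local release area (example version, assumption)
cd CMSSW_10_6_30/src
cmsenv                                      # set up the runtime environment for this release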
The following tests only apply if you own a Grid certificate for authentication on the LHC Grid, and if you registered that certificate with us in your account application.
- In order to work with resources on the WLCG grid you need to have a grid X509 certificate and a matching private key. Copy these credentials to the standard locations ~/.globus/userkey.pem and ~/.globus/usercert.pem and make sure that their permissions are properly set. The user key must NEVER be readable by any other user!
chmod 600 userkey.pem
chmod 644 usercert.pem
For details about how to extract those .pem files from your CERN User Grid Certificate (usually a password-protected .p12 file), please follow https://twiki.cern.ch/twiki/bin/view/CMSPublic/PersonalCertificate.
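As a rough sketch of what that procedure boils down to (the input file name mycert.p12 is just a placeholder; the TWiki page above is the authoritative reference):
mkdir -p ~/.globus
openssl pkcs12 -in mycert.p12 -clcerts -nokeys -out ~/.globus/usercert.pem   # extract the public certificate
openssl pkcs12 -in mycert.p12 -nocerts -out ~/.globus/userkey.pem            # extract the (still encrypted) private key
chmod 644 ~/.globus/usercert.pem
chmod 600 ~/.globus/userkey.pem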
- Make sure that your credentials are registered with the CMS Virtual Organization (CERN provides the details about that); otherwise, the next step will fail.
- Create your short-term credentials in the form of a proxy certificate with CMS extensions (valid for 168 hours):
voms-proxy-init -voms cms --valid 168:00
If the command fails, you can run it again with the -debug flag to troubleshoot the problem.
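To check that the proxy was created correctly and to see its remaining lifetime and VOMS attributes:
voms-proxy-info --all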
- Test your access to the PSI Storage Element using our test-dCacheProtocols testing suite:
[feichtinger@t3ui01 ~]$ test-dCacheProtocols
TEST: GFTP-write ...... [OK]
TEST: GFTP-ls ...... [OK]
TEST: GFTP-read ...... [OK]
TEST: DCAP-read ...... [OK]
TEST: XROOTD-LAN-write ...... [OK]
TEST: XROOTD-LAN-stat ...... [OK]
TEST: XROOTD-LAN-read ...... [OK]
TEST: XROOTD-LAN-rm ...... [OK]
TEST: XROOTD-WAN-write ...... [OK]
TEST: XROOTD-WAN-read ...... [OK]
TEST: XROOTD-WAN-rm ...... [OK]
TEST: SRMv2-write ...... [OK]
TEST: SRMv2-ls ...... [OK]
TEST: SRMv2-read ...... [OK]
TEST: SRMv2-rm ...... [OK]
- If a test fails, an error message will be written to the screen and it will point you to a file containing the details of the error. Please send this file, together with all relevant information, to cms-tier3@lists.psi.ch.
- TIP: You can use the -v (verbose) flag to see the commands that the script executes. This is a good way to learn about the slightly esoteric syntax for interacting with grid storage. If you supply a -d flag as well, the tests will not be run, but you will be able to look at all the actions that the script would execute.
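For example, the following would only print the actions against the default test area without executing them:
test-dCacheProtocols -v -d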
- Test write access to your user area on the storage element. The user area is located underneath /pnfs/psi.ch/cms/trivcat/store/user and by convention uses your CMS HyperNews name as the directory name. However, due to historic procedures, it might also be that your Tier-3 login name is used for this directory, /pnfs/psi.ch/cms/trivcat/store/user/${your_cms_name}. E.g.
test-dCacheProtocols -l /pnfs/psi.ch/cms/trivcat/store/user/feichtinger
TEST: GFTP-write ...... [OK]
TEST: GFTP-ls ...... [OK]
TEST: GFTP-read ...... [OK]
TEST: DCAP-read ...... [OK]
TEST: XROOTD-LAN-write ...... [OK]
TEST: XROOTD-LAN-stat ...... [OK]
TEST: XROOTD-LAN-read ...... [OK]
TEST: XROOTD-LAN-rm ...... [OK]
TEST: XROOTD-WAN-write ...... [OK]
TEST: XROOTD-WAN-read ...... [OK]
TEST: XROOTD-WAN-rm ...... [OK]
TEST: SRMv2-write ...... [OK]
TEST: SRMv2-ls ...... [OK]
TEST: SRMv2-read ...... [OK]
TEST: SRMv2-rm ...... [OK]
You may want to read about the Tier3Storage next!
T3 policies
Please read and take note of our Policies.
Linux groups (partially OBSOLETE - needs to be revised)
Each T3 user belongs to both a primary group and a common secondary group cms; the latter is meant to classify common files like the ones downloaded by the PhEDEx file transfer service. The T3 primary groups are:
ETHZ | UniZ | PSI
ethz-ecal | uniz-higgs | psi-bphys
ethz-bphys | uniz-pixel | psi-pixel
ethz-ewk | uniz-bphys |
ethz-higgs | |
ethz-susy | |
For instance, this is the primary and the secondary group of a generic T3 account:
$ id auser
uid=571(auser) gid=532(ethz-higgs) groups=532(ethz-higgs),500(cms)
The T3 group areas are located under:
/pnfs/psi.ch/cms/trivcat/store/t3groups
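To see which group directories exist, you can simply list that path:
ls /pnfs/psi.ch/cms/trivcat/store/t3groups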
Optional Initial Setups (partially obsolete)
local Anaconda/Conda installation
One might do the following steps to install a local Miniconda:
cd /work/${USER}/
wget https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh
sh Miniconda3-latest-Linux-x86_64.sh -b -p ./miniconda3
rm Miniconda3-latest-Linux-x86_64.sh
- Every time you want to use this conda environment, add it to your PATH first:
export PATH=${PWD}/miniconda3/bin:${PATH}
or
export PATH=/work/${USER}/miniconda3/bin:${PATH}
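After that, you can create and activate environments as usual; a minimal sketch (the environment name and package list are only examples):
conda create -n myanalysis python=3.10 numpy   # hypothetical environment name and packages
source activate myanalysis                     # with newer conda versions: conda activate myanalysis (after conda init)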
Installing the CERN CA files into your Web Browser
Install the CERN CA files into your Web Browser; otherwise your Web Browser might constantly bother you about all the CERN https:// URLs. Web Browsers typically feature many well-known CA files by default, but not the CERN CA files.
Applying for the VOMS Group /cms/chcms membership
A dedicated 'Swiss' VOMS Group called /cms/chcms is available in order to get more rights over the CMS HW resources installed at T2_CH_CSCS, Lugano; namely:
- higher priority on the T2_CH_CSCS batch queues
- additional job slots on the T2_CH_CSCS batch queues
- additional /pnfs space inside the T2_CH_CSCS grid storage
- during 2017, a group area like the T3 group areas /pnfs/psi.ch/cms/trivcat/store/t3groups/
When a user belongs to the /cms/chcms group and runs voms-proxy-init --voms cms, the voms-proxy-info --all output will report the new /cms/chcms/Role=NULL/Capability=NULL attribute, like:
$ voms-proxy-info --all | grep /cms
attribute : /cms/Role=NULL/Capability=NULL
attribute : /cms/chcms/Role=NULL/Capability=NULL
To apply for the /cms/chcms membership, load your X509 certificate into your daily Web Browser (probably it is already there), then open https://voms2.cern.ch:8443/voms/cms/group/edit.action?groupId=5 and request the /cms/chcms membership. Be aware that the port :8443 might be blocked by your institute's firewall; if that is the case, contact your firewall team or simply try from another network (like your network at home).