How to access, set up, and test your account
Mailing lists and communication with admins and other users
- cms-tier3-users@lists.psi.ch : the list through which we broadcast information (e.g. about downtimes). It can also be used for discussions among users (e.g. getting help from other users). You must subscribe to this list using its web interface (list archives).
- cms-tier3@lists.psi.ch : use this list to reach the Tier-3 admins, typically if you have a problem and need help. What you write to this list is only seen by the administrators.
Both lists are read by the administrators and are archived.
T3 policies
Please read and take note of our Policies.
Linux groups (partially OBSOLETE - needs to be revised)
Each T3 user belongs to both a primary group and the common secondary group cms, which is meant to classify common files like the ones downloaded by the PhEDEx file transfer service. The T3 primary groups are:
ETHZ | UniZ | PSI
ethz-ecal | uniz-higgs | psi-bphys
ethz-bphys | uniz-pixel | psi-pixel
ethz-ewk | uniz-bphys |
ethz-higgs | |
ethz-susy | |
For instance, these are the primary and secondary groups of a generic T3 account:
$ id auser
uid=571(auser) gid=532(ethz-higgs) groups=532(ethz-higgs),500(cms)
The T3 group areas are located under:
/pnfs/psi.ch/cms/trivcat/store/t3groups
First Steps on T3 User Interfaces (UI)
Three identical User Interface servers (UIs) are available for program development and T3 batch system job submission:
Access to the login nodes is organized by institution. Access is not technically restricted, in order to allow for some freedom, but you are requested to use the UI dedicated to your institution.
UI login node | For institution | HW specs
t3ui01.psi.ch | ETHZ, PSI | 132 GB RAM, 72 CPU cores (HT), 5 TB /scratch
t3ui02.psi.ch | All | 132 GB RAM, 72 CPU cores (HT), 5 TB /scratch
t3ui03.psi.ch | UNIZ | 132 GB RAM, 72 CPU cores (HT), 5 TB /scratch
- Log into your t3ui0* server via ssh; use the -Y or -X flag if you need to work with X applications:
ssh -Y username@t3ui02.psi.ch
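If you log in frequently, you can add an entry to your local ~/.ssh/config so that the username and X forwarding are set automatically; the following is only a sketch, and the alias and username are placeholders you should adapt:
Host t3ui02
  HostName t3ui02.psi.ch
  User yourusername
  ForwardX11 yes
With such an entry, ssh t3ui02 is enough.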
- If you are a user external to PSI (ETHZ, UniZ, ...), change the initial password as soon as possible from your UI with the passwd command.
- Copy your grid credentials to ~/.globus/userkey.pem and ~/.globus/usercert.pem and make sure that their permissions are properly set:
chmod 400 userkey.pem
chmod 400 usercert.pem
For details about how to extract those .pem files from your CERN user grid certificate (usually a password-protected .p12 file), please follow https://twiki.cern.ch/twiki/bin/view/CMSPublic/PersonalCertificate.
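As a minimal sketch of that extraction (the .p12 file name below is a placeholder; the CERN page above remains the authoritative reference):
openssl pkcs12 -in mycert.p12 -clcerts -nokeys -out ~/.globus/usercert.pem
openssl pkcs12 -in mycert.p12 -nocerts -out ~/.globus/userkey.pem
The second command asks for a passphrase to protect the exported key; afterwards apply the chmod commands shown above.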
- Grid environment scripts are automatically loaded when you log in to UI/WN nodes. You may add personal changes/setup to your ~/.bash_profile file.
- You must be registered in the CMS Virtual Organization; see the CERN documentation for details about that.
- Create a proxy certificate for CMS (valid for 24 hours by default):
voms-proxy-init -voms cms
or, for a proxy valid for 168 hours:
voms-proxy-init --voms cms --valid 168:00
If the command voms-proxy-init -voms cms fails, rerun it with the -debug flag to troubleshoot the problem.
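To check the resulting proxy (remaining lifetime and VOMS attributes), you can for instance run:
voms-proxy-info --all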
- Do some basic setup of CMSSW:
export VO_CMS_SW_DIR=/cvmfs/cms.cern.ch/
source ${VO_CMS_SW_DIR}/cmsset_default.sh
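Once cmsset_default.sh has been sourced, a CMSSW working area can be created with the usual cmsrel/cmsenv aliases; the release name below is just an arbitrary example:
cmsrel CMSSW_10_6_30
cd CMSSW_10_6_30/src
cmsenv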
- Test your access to the PSI storage element using our test-dCacheProtocols testing suite:
[feichtinger@t3ui01 ~]$ ./test-dCacheProtocols
TEST: GFTP-write ...... [OK]
TEST: GFTP-ls ...... [OK]
TEST: GFTP-read ...... [OK]
TEST: DCAP-read ...... [OK]
TEST: XROOTD-LAN-write ...... [OK]
TEST: XROOTD-LAN-stat ...... [OK]
TEST: XROOTD-LAN-read ...... [OK]
TEST: XROOTD-LAN-rm ...... [OK]
TEST: XROOTD-WAN-write ...... [OK]
TEST: XROOTD-WAN-read ...... [OK]
TEST: XROOTD-WAN-rm ...... [OK]
TEST: SRMv2-write ...... [OK]
TEST: SRMv2-ls ...... [OK]
TEST: SRMv2-read ...... [OK]
TEST: SRMv2-rm ...... [OK]
- If a test fails, an error message will be written to the screen, and it will point you to a file containing the details of the error. Please send this file, together with all relevant information, to cms-tier3@lists.psi.ch.
- TIP: You can use the -v (verbose) flag to see the commands that the script executes. This is a good way to learn about the slightly esoteric syntax for interacting with grid storage. If you supply a -d flag as well, the tests will not be run, but you will be able to look at all the actions that the script would execute.
- Test write access to your user area on the storage element. The user area is located underneath /pnfs/psi.ch/cms/trivcat/store/user and by convention has your CMS HyperNews name as directory name. However, due to historic procedures, it might also be that your Tier-3 login name is used for this directory /pnfs/psi.ch/cms/trivcat/store/user/${your_cms_name}. E.g.:
test-dCacheProtocols -l /pnfs/psi.ch/cms/trivcat/store/user/feichtinger
TEST: GFTP-write ...... [OK]
TEST: GFTP-ls ...... [OK]
TEST: GFTP-read ...... [OK]
TEST: DCAP-read ...... [OK]
TEST: XROOTD-LAN-write ...... [OK]
TEST: XROOTD-LAN-stat ...... [OK]
TEST: XROOTD-LAN-read ...... [OK]
TEST: XROOTD-LAN-rm ...... [OK]
TEST: XROOTD-WAN-write ...... [OK]
TEST: XROOTD-WAN-read ...... [OK]
TEST: XROOTD-WAN-rm ...... [OK]
TEST: SRMv2-write ...... [OK]
TEST: SRMv2-ls ...... [OK]
TEST: SRMv2-read ...... [OK]
TEST: SRMv2-rm ...... [OK]
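Once these tests pass, you can copy a file into your user area with one of the protocols exercised above, for example xrootd; the command below is only a sketch, and the xrootd door hostname and directory are placeholders that you must replace with the actual T3 endpoint and your own user area:
xrdcp myfile.root root://<t3-xrootd-door>.psi.ch//pnfs/psi.ch/cms/trivcat/store/user/<your_cms_name>/myfile.root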
Backup policies
- /t3home : files are backed up daily and are available through snapshots.
- /work : files are not backed up, but they reside on high quality storage with some redundancy, and snapshots are available.
- Storage element (= /pnfs): files are not backed up, but they reside on high quality storage with some redundancy.
Recovering files from snapshots for /t3home and /work is outlined in this article.
Attention: there are NO backups of /scratch and /pnfs.
Optional Initial Setups
Local Anaconda/Conda installation
One might do the following steps to install a local Miniconda under /work:
cd /work/${USER}/
wget https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh
sh Miniconda3-latest-Linux-x86_64.sh -b -p ./miniconda3
rm Miniconda3-latest-Linux-x86_64.sh
- Every time you use this conda installation, add it to your PATH:
export PATH=${PWD}/miniconda3/bin:${PATH}
or
export PATH=/work/${USER}/miniconda3/bin:${PATH}
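From there you can create and activate dedicated environments as usual; the environment name and Python version below are just an example, and depending on your conda version you may first need to run conda init bash (or use source activate instead):
conda create -n myanalysis python=3.9
conda activate myanalysis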
Installing the CERN CA files into your Web Browser
Install the CERN CA files in your web browser, otherwise your browser might constantly bother you about all the CERN https:// URLs; typically web browsers ship with many well-known CA files by default, but not the CERN CA files.
Applying for the VOMS Group /cms/chcms membership
A dedicated 'Swiss' VOMS group called /cms/chcms is available in order to get more rights over the CMS HW resources installed at T2_CH_CSCS, Lugano, namely:
- higher priority on the T2_CH_CSCS batch queues
- additional job slots on the T2_CH_CSCS batch queues
- additional /pnfs space inside the T2_CH_CSCS grid storage
- during 2017, a group area like the T3 group areas /pnfs/psi.ch/cms/trivcat/store/t3groups/
When a user belongs to the /cms/chcms group and runs voms-proxy-init --voms cms, the command voms-proxy-info --all will report the new /cms/chcms/Role=NULL/Capability=NULL attribute, like:
$ voms-proxy-info --all | grep /cms
attribute : /cms/Role=NULL/Capability=NULL
attribute : /cms/chcms/Role=NULL/Capability=NULL
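Should you want to request the group FQAN explicitly, the standard voms-proxy-init --voms <vo>:<group> syntax applies, e.g.:
voms-proxy-init --voms cms:/cms/chcms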
To apply for the /cms/chcms membership, load your X509 certificate into your daily web browser (probably it is already there), then go to https://voms2.cern.ch:8443/voms/cms/group/edit.action?groupId=5 and request the /cms/chcms membership. Be aware that port :8443 might be blocked by your institute firewall; if that is the case, contact your firewall team or simply try from another network (like your network at home).