<!-- keep this as a security measure:
#uncomment if the subject should only be modifiable by the listed groups
#   * Set ALLOWTOPICCHANGE = Main.TWikiAdminGroup,Main.CMSAdminGroup
#   * Set ALLOWTOPICRENAME = Main.TWikiAdminGroup,Main.CMSAdminGroup
#uncomment this if you want the page only be viewable by the listed groups
#   * Set ALLOWTOPICVIEW = Main.TWikiAdminGroup,Main.CMSAdminGroup,Main.CMSAdminReaderGroup
-->
---+ !!How to access, set up, and test your account

%TOC%

---++ Compulsory Initial Setups

All the documentation is maintained in the T3 twiki pages: https://wiki.chipp.ch/twiki/bin/view/CmsTier3/WebHome . Please bookmark it and explore the T3's capabilities.

Information about the two T3 mailing lists:
   * Subscribe to the =cms-tier3-users@lists.psi.ch= mailing list using [[https://psilists.ethz.ch/sympa/info/cms-tier3-users][its web interface]] ([[https://psilists.ethz.ch/sympa/arc/cms-tier3-users][list archives]]); this list is used to communicate information on Tier-3 matters (downtimes, outages, news, upgrades, etc.) and for discussions among users and admins.
   * To contact the CMS Tier-3 administrators privately, write to =cms-tier3@lists.psi.ch= instead; no subscription is needed for this mailing list.
   * Both lists are read by the administrators and are archived. You can submit support requests to either of them, but emails sent to =cms-tier3-users@lists.psi.ch= are read by *everyone*, so they may get answered better and sooner, especially if you ask about specific CMS software ( CRAB3, CMSSW, Xrootd, ... ).

---+++ T3 policies

Read and respect the Tier3Policies.

---+++ Linux groups

Each T3 user belongs to both a primary group and the common secondary group %GREEN%cms%ENDCOLOR%; the latter is used for common files like the ones downloaded by the [[https://cmsweb.cern.ch/phedex/][PhEDEx]] file transfer service. The T3 primary groups are:

| *ETHZ* | *UniZ* | *PSI* |
| =%ORANGE%ethz-ecal%ENDCOLOR%= | =uniz-higgs= | =%TEAL%psi-bphys%ENDCOLOR%= |
| =%BROWN%ethz-bphys%ENDCOLOR%= | =%BLUE%uniz-pixel%ENDCOLOR%= | =%PURPLE%psi-pixel%ENDCOLOR%= |
| =%RED%ethz-ewk%ENDCOLOR%= | =%ORANGE%uniz-bphys%ENDCOLOR%= | |
| =%BLUE%ethz-higgs%ENDCOLOR%= | | |
| =%PURPLE%ethz-susy%ENDCOLOR%= | | |

The T3 primary groups allow:
   * easier =/pnfs /shome /scratch /tmp= space monitoring
   * easier batch system monitoring
   * protecting the group files on =/pnfs=, because only the group members are able to delete those files; very often that's *NOT* guaranteed by the other CMS T1/T2/T3 sites!
   * managing the group areas =/pnfs/psi.ch/cms/trivcat/store/t3groups/=; in such a dir each new file belongs to =root= and to a specific T3 group, no matter which T3 user uploads the files ( as long as he/she is a group member )

For instance, this is the %BLUE%primary%ENDCOLOR% and the %GREEN%secondary%ENDCOLOR% group of a generic T3 account:
<pre>
$ id auser
uid=571(auser) gid=532(%BLUE%ethz-higgs%ENDCOLOR%) groups=532(%BLUE%ethz-higgs%ENDCOLOR%),500(%GREEN%cms%ENDCOLOR%)
</pre>

The following output is a fragment of the private user dirs =/pnfs/psi.ch/cms/trivcat/store/user/= :
<pre>
$ ls -l /pnfs/psi.ch/cms/trivcat/store/user | grep -v cms
total 56
drwxr-xr-x  2 alschmid %ORANGE%uniz-bphys%ENDCOLOR% 512 Feb 21  2013 alschmid
drwxr-xr-x  5 amarini  %RED%ethz-ewk%ENDCOLOR%   512 Nov  7 15:37 amarini
drwxr-xr-x  2 arizzi   %BROWN%ethz-bphys%ENDCOLOR% 512 Sep 16 17:49 arizzi
drwxr-xr-x  5 bean     %TEAL%psi-bphys%ENDCOLOR%  512 Aug 24  2010 bean
drwxr-xr-x  5 bianchi  %BLUE%ethz-higgs%ENDCOLOR% 512 Sep  9 09:40 bianchi
drwxr-xr-x 98 buchmann %PURPLE%ethz-susy%ENDCOLOR%  512 Nov  5 20:36 buchmann
...
</pre>

The T3 group areas =/pnfs/psi.ch/cms/trivcat/store/t3groups/= :
<pre>
$ ls -l /pnfs/psi.ch/cms/trivcat/store/t3groups/
total 5
drwxr%GREEN%w%ENDCOLOR%xr-x 2 root ethz-bphys 512 Nov  8 15:18 %BROWN%ethz-bphys%ENDCOLOR%
drwxr%GREEN%w%ENDCOLOR%xr-x 2 root ethz-ecal  512 Nov  8 15:18 %ORANGE%ethz-ecal%ENDCOLOR%
drwxr%GREEN%w%ENDCOLOR%xr-x 2 root ethz-ewk   512 Nov  8 15:18 %RED%ethz-ewk%ENDCOLOR%
drwxr%GREEN%w%ENDCOLOR%xr-x 2 root ethz-higgs 512 Nov  8 15:18 %BLUE%ethz-higgs%ENDCOLOR%
drwxr%GREEN%w%ENDCOLOR%xr-x 2 root ethz-susy  512 Nov  8 15:18 %PURPLE%ethz-susy%ENDCOLOR%
drwxr%GREEN%w%ENDCOLOR%xr-x 2 root psi-bphys  512 Nov  8 15:18 %TEAL%psi-bphys%ENDCOLOR%
drwxr%GREEN%w%ENDCOLOR%xr-x 2 root psi-pixel  512 Nov  8 15:18 %PURPLE%psi-pixel%ENDCOLOR%
drwxr%GREEN%w%ENDCOLOR%xr-x 2 root uniz-bphys 512 Nov  8 15:18 %ORANGE%uniz-bphys%ENDCOLOR%
drwxr%GREEN%w%ENDCOLOR%xr-x 2 root uniz-higgs 512 Nov  8 15:18 uniz-higgs
drwxr%GREEN%w%ENDCOLOR%xr-x 2 root uniz-pixel 512 Nov  8 15:18 %BLUE%uniz-pixel%ENDCOLOR%
</pre>
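For illustration, this is how a group member could place a file into his/her group area once the grid environment and proxy are set up ( see the User Interfaces section below ). This is only a minimal, hypothetical sketch: the file name and the =ethz-higgs= membership are assumptions, and the LAN XRootD door =root://t3dcachedb03.psi.ch:1094/= is the one referenced later in this page:
<pre>
# copy a local file into the ethz-higgs group area through the LAN XRootD door
xrdcp myhistos.root root://t3dcachedb03.psi.ch:1094//pnfs/psi.ch/cms/trivcat/store/t3groups/ethz-higgs/myhistos.root
# the new file should be owned by root and by the ethz-higgs group
ls -l /pnfs/psi.ch/cms/trivcat/store/t3groups/ethz-higgs/
</pre>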
---+++ User Interfaces ( UI )

Three identical User Interface ( UI ) servers are available to their specific users, both to develop their programs and to send their computational jobs to the T3 batch system by the =qsub= command:

%INCLUDE{"Tier3Policies" section="UisPerGroup"}%

   1. Log in to your =t3ui0*= server by =ssh=; use the =-Y= or =-X= flag for working with X applications; you might also try to [[UserNxClientInstallation][connect by NX client]], which allows you to work efficiently with your graphical applications:<pre>
ssh -Y username@t3ui02.psi.ch
</pre>
   1. *If you are an external PSI user ( ETHZ, UniZ, ... ), change the initial password that was sent to you the first time you connect to your UI*; use the standard =passwd= tool.
   1. Copy your grid credentials to their standard places, i.e. to =~/.globus/userkey.pem= and =~/.globus/usercert.pem=, and make sure that their permissions are properly set, like:<pre>
-rw-r--r-- 1 feichtinger cms 2961 Mar 17  2008 usercert.pem
-r-------- 1 feichtinger cms 1917 Mar 17  2008 userkey.pem
</pre> For details about how to extract those =.pem= files from your CERN User Grid-Certificate ( usually a password protected =.p12= file ) please read https://gridca.cern.ch/gridca/Help/?kbid=024010 ; a sketch of the extraction is shown right below.
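A minimal extraction sketch, assuming your certificate was exported as =myCert.p12= ( a hypothetical file name; the CERN help page above remains the authoritative reference ):
<pre>
mkdir -p ~/.globus
# extract the public certificate (no key material)
openssl pkcs12 -in myCert.p12 -clcerts -nokeys -out ~/.globus/usercert.pem
# extract the private key; keep a passphrase on it when prompted
openssl pkcs12 -in myCert.p12 -nocerts -out ~/.globus/userkey.pem
# set the permissions expected by the grid tools
chmod 644 ~/.globus/usercert.pem
chmod 400 ~/.globus/userkey.pem
</pre>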
   1. Source the grid environment associated to your login shell:<pre>
source /swshare/psit3/etc/profile.d/cms_ui_env.sh   # for bash
source /swshare/psit3/etc/profile.d/cms_ui_env.csh  # for tcsh
</pre>
   1. ( Optional ) Modify your shell init files in order to automatically load the grid environment; for BASH that means placing:<pre>
[ `echo $HOSTNAME | grep t3ui` ] && [ -r /swshare/psit3/etc/profile.d/cms_ui_env.sh ] && source /swshare/psit3/etc/profile.d/cms_ui_env.sh && echo "UI features enabled"
</pre> into your =~/.bash_profile= file.
   1. Run =env | sort= and verify that =/swshare/psit3/etc/profile.d/cms_ui_env.{sh,csh}= has properly activated the setting<pre>
X509_USER_PROXY=/shome/$(id -un)/.x509up_u$(id -u)
</pre> That setting is *crucial* to access a CMS Grid SE from your T3 jobs.
   1. You must register with the [[https://lcg-voms.cern.ch:8443/vo/cms/vomrs?path=/RootNode][CMS "Virtual Organization"]] service, or the following command =voms-proxy-init -voms cms= won't work. [[https://twiki.cern.ch/twiki/bin/view/CMSPublic/SWGuideLcgAccess#How_to_register_in_the_CMS_VO][CERN details about that]], e.g. who is your representative.
   1. Create a proxy certificate for CMS by:<pre>
voms-proxy-init -voms cms
</pre> If the command =voms-proxy-init -voms cms= fails, run it again with the additional =-debug= flag; the error message is usually sufficient for the T3 Admins to troubleshoot the problem. To double-check the resulting proxy, see the sketch right below.
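A quick verification sketch using the standard =voms-proxy-info= queries ( the printed values will obviously differ per user ):
<pre>
# where the proxy file lives; it should match $X509_USER_PROXY
voms-proxy-info --path
# remaining lifetime in seconds
voms-proxy-info --timeleft
# the VO attributes; at least /cms/Role=NULL/Capability=NULL should appear
voms-proxy-info --fqan
</pre>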
   1. Test your access to the PSI Storage Element by the =test-dCacheProtocols= command; you should get an output like this ( ideally without any failed test ). Sometimes the XROOTD-WAN-* tests may get stuck due to the I/O traffic coming from the Internet, but as a local T3 user you are actually supposed to use the XROOTD-LAN I/O door, which is not reachable from the Internet; so you may skip the XROOTD-WAN-* tests either by simply pressing Ctrl-C or by passing the option %ORANGE%-i "XROOTD-WAN-write"%ENDCOLOR% ( see below ):<pre>
$ test-dCacheProtocols
Test directory: /tmp/dcachetest-20150529-1449-14476
TEST: GFTP-write        ...... [%GREEN%OK%ENDCOLOR%]  <-- vs gsiftp://t3se01.psi.ch:2811/
TEST: GFTP-ls           ...... [%GREEN%OK%ENDCOLOR%]
TEST: GFTP-read         ...... [%GREEN%OK%ENDCOLOR%]
TEST: DCAP-read         ...... [%GREEN%OK%ENDCOLOR%]  <-- vs dcap://t3se01.psi.ch:22125/
TEST: SRMv2-write       ...... [%GREEN%OK%ENDCOLOR%]  <-- vs srm://t3se01.psi.ch:8443/
TEST: SRMv2-ls          ...... [%GREEN%OK%ENDCOLOR%]
TEST: SRMv2-read        ...... [%GREEN%OK%ENDCOLOR%]
TEST: SRMv2-rm          ...... [%GREEN%OK%ENDCOLOR%]
TEST: XROOTD-LAN-write  ...... [%GREEN%OK%ENDCOLOR%]  <-- vs root://t3dcachedb03.psi.ch:1094/  <-- use this if you run LOCAL jobs at T3 and you need root:// access to the T3 files
TEST: XROOTD-LAN-ls     ...... [%GREEN%OK%ENDCOLOR%]
TEST: XROOTD-LAN-read   ...... [%GREEN%OK%ENDCOLOR%]
TEST: XROOTD-LAN-rm     ...... [%GREEN%OK%ENDCOLOR%]
TEST: XROOTD-WAN-write  ...... [%GREEN%OK%ENDCOLOR%]  <-- vs root://t3se01.psi.ch:1094/  <-- use this if you run REMOTE jobs and you need root:// access to the T3 files, e.g. you're working on lxplus
TEST: XROOTD-WAN-ls     ...... [%GREEN%OK%ENDCOLOR%]
TEST: XROOTD-WAN-read   ...... [%GREEN%OK%ENDCOLOR%]
TEST: XROOTD-WAN-rm     ...... [%GREEN%OK%ENDCOLOR%]
</pre>
   1. The =test-dCacheProtocols= tool can also be run against a *remote* storage element ( use the =-h= flag to get more info ), but it is important to skip the local-access tests ( -i "%ORANGE%DCAP-read XROOTD-LAN-write XROOTD-WAN-write%ENDCOLOR%" ); e.g. to check the CSCS storage element =storage01.lcg.cscs.ch= :<pre>
$ test-dCacheProtocols -s storage01.lcg.cscs.ch -x storage01.lcg.cscs.ch -p /pnfs/lcg.cscs.ch/cms/trivcat/store/user/martinel -i "%ORANGE%DCAP-read XROOTD-LAN-write XROOTD-WAN-write%ENDCOLOR%"
Test directory: /tmp/dcachetest-20150529-1545-16302
TEST: GFTP-write        ...... [%GREEN%OK%ENDCOLOR%]
TEST: GFTP-ls           ...... [%GREEN%OK%ENDCOLOR%]
TEST: GFTP-read         ...... [%GREEN%OK%ENDCOLOR%]
TEST: %ORANGE%DCAP-read%ENDCOLOR%         ...... [%ORANGE%IGNORE%ENDCOLOR%]
TEST: SRMv2-write       ...... [%GREEN%OK%ENDCOLOR%]
TEST: SRMv2-ls          ...... [%GREEN%OK%ENDCOLOR%]
TEST: SRMv2-read        ...... [%GREEN%OK%ENDCOLOR%]
TEST: SRMv2-rm          ...... [%GREEN%OK%ENDCOLOR%]
TEST: %ORANGE%XROOTD-LAN-write%ENDCOLOR%  ...... [%ORANGE%IGNORE%ENDCOLOR%]
TEST: XROOTD-LAN-ls     ...... [SKIPPED] (dependencies did not run: XROOTD-LAN-write)
TEST: XROOTD-LAN-read   ...... [SKIPPED] (dependencies did not run: XROOTD-LAN-write)
TEST: XROOTD-LAN-rm     ...... [SKIPPED] (dependencies did not run: XROOTD-LAN-write)
TEST: %ORANGE%XROOTD-WAN-write%ENDCOLOR%  ...... [%ORANGE%IGNORE%ENDCOLOR%]
TEST: XROOTD-WAN-ls     ...... [SKIPPED] (dependencies did not run: XROOTD-WAN-write)
TEST: XROOTD-WAN-read   ...... [SKIPPED] (dependencies did not run: XROOTD-WAN-write)
TEST: XROOTD-WAN-rm     ...... [SKIPPED] (dependencies did not run: XROOTD-WAN-write)
</pre>

---+++ Backup policies

Your =/shome= files are backed up:
   * each hour, kept for max 36 hours
   * each day, kept for max 10 days
Recovering a file is as simple as running a =cp= command from your UI; further details are in HowToRetrieveBackupFiles ( see also the hedged sketch below ). There are NO backups of the =/tmp /scratch /pnfs= files instead, so pay attention there!
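As a purely illustrative sketch ( the snapshot location and file names below are hypothetical; the real layout is documented in HowToRetrieveBackupFiles ), recovering yesterday's copy of a file could look like:
<pre>
# list the available snapshots of your home (hypothetical snapshot location)
ls /shome/.snapshot/
# copy the backed-up version over the damaged one
cp /shome/.snapshot/daily.1/$USER/myanalysis/config.py ~/myanalysis/config.py
</pre>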
---++ Optional Initial Setups

---+++ Installing the CERN CA files into your Web Browser

Install in your Web Browser the [[https://cafiles.cern.ch/cafiles/][CERN CA files]], otherwise your Web Browser might constantly bother you about all the CERN =https://= URLs; Web Browsers typically ship with many well-known [[https://en.wikipedia.org/wiki/Certificate_authority][CA files]] by default, but not with the CERN CA files.

---+++ Applying for the VOMS Group =/cms/chcms= membership

A dedicated 'Swiss' VOMS Group called =/cms/chcms= is available in order to get more rights over the CMS HW resources installed at T2_CH_CSCS, Lugano; namely:
   * higher priority on the T2_CH_CSCS batch queues
   * additional job slots on the T2_CH_CSCS batch queues
   * additional =/pnfs= space inside the T2_CH_CSCS grid storage
   * during 2017, a group area like the T3 group areas =/pnfs/psi.ch/cms/trivcat/store/t3groups/=
When a user belongs to the =/cms/chcms= group and runs =voms-proxy-init --voms cms=, then =voms-proxy-info --all= will report the new %BLUE%/cms/chcms/Role=NULL/Capability=NULL%ENDCOLOR% attribute, like:
<pre>
$ voms-proxy-info --all | grep /cms
attribute : /cms/Role=NULL/Capability=NULL
attribute : %BLUE%/cms/chcms/Role=NULL/Capability=NULL%ENDCOLOR%
</pre>
To apply for the =/cms/chcms= membership, load your X509 certificate into your daily Web Browser ( probably it is already there ), then open https://voms2.cern.ch:8443/voms/cms/group/edit.action?groupId=5 and request the =/cms/chcms= membership; be aware that the port =:8443= might be blocked by your Institute's firewall; if that's the case, contact your firewall team or simply try from another network ( like your network at home ).

---+++ Saving the UIs SSH pub host keys

Hackers are constantly waiting for a user mistake, even a single misspelled %RED%letter%ENDCOLOR% like in this case that occurred in 2015:
<pre>
$ ssh t3ui02.psi.%RED%s%ENDCOLOR%h
The authenticity of host 't3ui02.psi.%RED%s%ENDCOLOR%h (62.210.217.195)' can't be established.
RSA key fingerprint is c0:c5:af:36:4b:2d:1f:88:0d:f3:9c:08:cc:87:df:42.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 't3ui02.psi.%RED%s%ENDCOLOR%h,62.210.217.195' (RSA) to the list of known hosts.
at3user@t3ui02.psi.%RED%s%ENDCOLOR%h's %RED%password%ENDCOLOR%:
</pre>
%RED%The T3 Admins can't prevent a T3 user from confusing a =.ch= with a =.sh=, so pay attention to these cases!%ENDCOLOR%

To avoid mistyping the T3 hostnames you can define the following aliases in your shell:
<pre>
$ grep alias ~/.bash_profile | grep t3ui
alias ui01="ssh -X $USER@t3ui01.psi.ch"
alias ui02="ssh -X $USER@t3ui02.psi.ch"
alias ui03="ssh -X $USER@t3ui03.psi.ch"
</pre>
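An equivalent approach is a =~/.ssh/config= stanza, which also works for =scp= and =rsync=; a minimal sketch ( replace =auser=, a hypothetical username, with your own ):
<pre>
Host t3ui02
    HostName t3ui02.psi.ch
    User auser
    ForwardX11 yes
</pre>
With this in place, =ssh t3ui02= always expands to the full, correctly spelled hostname.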
Another attack is the [[http://www.vandyke.com/solutions/ssh_overview/ssh_overview_threats.html][SSH man-in-the-middle attack]]; to prevent it, proactively save in =$HOME/.ssh/known_hosts= each =t3ui0*= SSH RSA public key by running these commands on each of your daily laptops/PCs/servers ( also on =lxplus= ! ):
<pre>
cp -p $HOME/.ssh/known_hosts $HOME/.ssh/known_hosts.`date +"%d-%m-%Y"`
mkdir /tmp/t3ssh/
for X in 01 02 03 ; do TMPFILE=`mktemp /tmp/t3ssh/XXXXXX` && ssh-keyscan -t rsa t3ui$X.psi.ch,t3ui$X,`host t3ui$X.psi.ch | awk '{ print $4 }'` | cat - $HOME/.ssh/known_hosts | grep -v 'psi\.sh' > $TMPFILE && mv $TMPFILE $HOME/.ssh/known_hosts ; done
rm -rf /tmp/t3ssh
for X in 01 02 03 ; do echo -n "# entries for t3ui$X = " ; grep -c t3ui$X $HOME/.ssh/known_hosts ; grep -Hn --color t3ui$X $HOME/.ssh/known_hosts ; echo ; done
echo done
</pre>
The last =for= loop reports whether there are duplicated rows in =$HOME/.ssh/known_hosts= for a =t3ui0*= server; if there are, preserve the correct occurrence and delete the others, either with =sed -i= or with an editor like =vim= / =emacs= / =nano= / =nedit=. Once you have just one row per =t3ui0*= server, run this command and carefully compare your output with this output:
<pre>
$ ssh-keygen -l -f $HOME/.ssh/known_hosts | grep t3ui
2048 SHA256:0Z8Su5R4aZthbePGMM14mEKxYFOuKyrnUe9GjU0m6vM t3ui01.psi.ch,192.33.123.23 (RSA)
2048 SHA256:2qA9YDNeOEbGYjIdpRdBJpywQDne5gRbRvN/myL5P8o t3ui02.psi.ch,192.33.123.29 (RSA)
2048 SHA256:SoIL0H0ueyASNkyYID3a16AIHuAEP7AQ5iaQ6vrvzfk t3ui03.psi.ch,192.33.123.85 (RSA)
</pre>
Then modify your client =$HOME/.ssh/config= in order to force the =ssh= command to *always* check whether the server you're connecting to is already listed in the =$HOME/.ssh/known_hosts= file, and to ask for your 'OK' for all the servers that are missing:
<pre>
StrictHostKeyChecking %GREEN%ask%ENDCOLOR%
</pre>
Your =$HOME/.ssh/config= can be more complex than just that line; study the [[http://linux.die.net/man/5/ssh_config][ssh_config man page]] or contact the T3 Admins. Ideally you should put =StrictHostKeyChecking %GREEN%yes%ENDCOLOR%=, but in real life that's impractical.

Now your ssh client will be able to detect the [[http://www.vandyke.com/solutions/ssh_overview/ssh_overview_threats.html][SSH man-in-the-middle attacks]], and if one occurs it will report:
<pre>
WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that the RSA host key has just been changed.
</pre>
The =t3ui0*= SSH RSA public/private keys will *never* change, so the case =It is also possible that the RSA host key has just been changed= will actually *never* occur.

---+++ Creating an AFS CERN Ticket

To access the CERN =/afs= protected dirs ( e.g. your CERN home on AFS ) you'll need to get a ticket from CERN AFS:
<pre>
kinit ${Your_CERN_Username}@CERN.CH
aklog cern.ch
</pre>
The first command provides you a Kerberos ticket, while the second command uses the Kerberos ticket to obtain an authentication token from CERN's AFS service.
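To verify that both steps succeeded, two standard checks ( output omitted here ):
<pre>
# list the Kerberos tickets; a CERN.CH principal should appear
klist
# list the AFS tokens; a token for cern.ch should appear
tokens
</pre>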
---+++ The T3 Admins Skype Accounts

The Skype accounts are no longer the suggested way of contacting the T3 admins.

---+++ Web browsing your =/shome= files on demand

We don't provide a permanent =http{s}://= URL to browse your =/shome= logs/errors/programs, because interest in such a web portal has always been modest, but you can turn on a private website rooted on an arbitrary %BLUE%dir%ENDCOLOR% of yours by simply using SSH + Python, like in the following example ( replace =t3ui02= with your daily =t3ui= server and the %BLUE%dir%ENDCOLOR% with a dir meaningful for your case, for instance %BLUE%~%ENDCOLOR% ):
<pre>
ssh -L 8000:t3ui02.psi.ch:8000 t3user@t3ui02.psi.ch "killall python ; cd %BLUE%/mnt/t3nfs01/data01/shome/ytakahas/work/TauTau/SFrameAnalysis/Scripts/%ENDCOLOR% && python -m SimpleHTTPServer"
</pre>
Then open your Web browser at the page http://localhost:8000/ . That's it.

The preliminary =killall python= command is meant to kill a previous =python -m SimpleHTTPServer= run that might still be active, but if you have other =python= programs running %RED%they will also be killed%ENDCOLOR%; in that case remove the initial =killall python ;= command and kill a previous =python -m SimpleHTTPServer= instance by:
<pre>
t3ui02 $ kill -9 `pgrep -f "^python -m SimpleHTTPServer*"`
</pre>
If some other T3 user is already using the =t3ui02.psi.ch:8000= TCP port, then use another port like %GREEN%8001%ENDCOLOR%, %GREEN%8002%ENDCOLOR%, etc.:
<pre>
ssh -L 8000:t3ui02.psi.ch:%GREEN%8001%ENDCOLOR% t3user@t3ui02.psi.ch "killall python ; cd %BLUE%/mnt/t3nfs01/data01/shome/ytakahas/work/TauTau/SFrameAnalysis/Scripts/%ENDCOLOR% && python -m SimpleHTTPServer %GREEN%8001%ENDCOLOR%"
</pre>
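Note that =SimpleHTTPServer= is the Python 2 module name; on a host where only Python 3 is available, the equivalent module is =http.server= ( a sketch under that assumption, serving your home dir ):
<pre>
ssh -L 8000:t3ui02.psi.ch:8000 t3user@t3ui02.psi.ch "cd ~ && python3 -m http.server 8000"
</pre>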