<!-- keep this as a security measure:
#uncomment if the subject should only be modifiable by the listed groups
# * Set ALLOWTOPICCHANGE = Main.TWikiAdminGroup,Main.CMSAdminGroup
# * Set ALLOWTOPICRENAME = Main.TWikiAdminGroup,Main.CMSAdminGroup
#uncomment this if you want the page only be viewable by the listed groups
# * Set ALLOWTOPICVIEW = Main.TWikiAdminGroup,Main.CMSAdminGroup,Main.CMSAdminReaderGroup
-->
---+ !!How to access, set up, and test your account

%TOC%

---++ Preliminary steps

All the documentation is maintained in the T3 TWiki pages: https://wiki.chipp.ch/twiki/bin/view/CmsTier3/WebHome . Please bookmark it and explore the T3's capabilities.

Information about the two T3 mailing lists:
   * Subscribe to the =cms-tier3-users@lists.psi.ch= mailing list using [[https://psilists.ethz.ch/sympa/info/cms-tier3-users][its web interface]] ([[https://psilists.ethz.ch/sympa/arc/cms-tier3-users][list archives]]): this list is used to communicate information on Tier-3 matters (downtimes, outages, news, upgrades, etc.) and for discussions among users and admins.
   * To privately contact the CMS Tier-3 administrators write to =cms-tier3@lists.psi.ch= instead; no subscription is needed for this list.
   * Both lists are read by the administrators and are archived. You can submit support requests to either of them, but emails addressed to =cms-tier3-users@lists.psi.ch= are read by *everyone*, so they tend to get answered sooner and better, especially if you ask about specific CMS software ( CRAB3, CMSSW, Xrootd, ...
)

---++ T3 policies

Read and follow the Tier3Policies.

---++ Specific primary groups + a common secondary group

Each T3 account belongs to a primary group and to the general secondary group %GREEN%cms%ENDCOLOR%, which is used for common files like the ones uploaded to the T3 by [[https://cmsweb.cern.ch/phedex/][PhEDEx]], the CMS dataset transfer service installed at each T1/T2/T3. The primary groups are:

| *ETHZ* | *UniZ* | *PSI* |
| =%ORANGE%ethz-ecal%ENDCOLOR%= | =uniz-higgs= | =%TEAL%psi-bphys%ENDCOLOR%= |
| =%BROWN%ethz-bphys%ENDCOLOR%= | =%BLUE%uniz-pixel%ENDCOLOR%= | =%PURPLE%psi-pixel%ENDCOLOR%= |
| =%RED%ethz-ewk%ENDCOLOR%= | =%ORANGE%uniz-bphys%ENDCOLOR%= | |
| =%BLUE%ethz-higgs%ENDCOLOR%= | | |
| =%PURPLE%ethz-susy%ENDCOLOR%= | | |

The primary groups allow:
   * an intuitive =/pnfs /shome /scratch /tmp= space usage monitoring and batch system accounting
   * =/pnfs= file and dir locking, since only the file owner can delete his/her files; this protection is usually *NOT* guaranteed by any CMS T1/T2/T3 !
   * a safe management of the T3 "group dirs" =/pnfs/psi.ch/cms/trivcat/store/t3groups/=, where only the group members can upload/delete files

As an example, these are the %BLUE%primary%ENDCOLOR% and the %GREEN%secondary%ENDCOLOR% groups of a generic T3 account:
<pre>
$ id auser
uid=571(auser) gid=532(%BLUE%ethz-higgs%ENDCOLOR%) groups=532(%BLUE%ethz-higgs%ENDCOLOR%),500(%GREEN%cms%ENDCOLOR%)
</pre>

The following is an overview of the T3 account dirs =/pnfs/psi.ch/cms/trivcat/store/user/=, where the dir protections are clearly visible:
<pre>
$ ls -l /pnfs/psi.ch/cms/trivcat/store/user | grep -v cms
total 56
drwxr-xr-x  2 alschmid %ORANGE%uniz-bphys%ENDCOLOR% 512 Feb 21  2013 alschmid
drwxr-xr-x  5 amarini  %RED%ethz-ewk%ENDCOLOR%   512 Nov  7 15:37 amarini
drwxr-xr-x  2 arizzi   %BROWN%ethz-bphys%ENDCOLOR% 512 Sep 16 17:49 arizzi
drwxr-xr-x  5 bean     %TEAL%psi-bphys%ENDCOLOR%  512 Aug 24  2010 bean
drwxr-xr-x  5 bianchi  %BLUE%ethz-higgs%ENDCOLOR% 512 Sep  9 09:40 bianchi
drwxr-xr-x 98 buchmann %PURPLE%ethz-susy%ENDCOLOR%  512 Nov  5 20:36 buchmann
...
</pre>

The following are instead the T3 "group dirs" =/pnfs/psi.ch/cms/trivcat/store/t3groups/=; since they're "group dirs" they don't have a specific owner and belong to the =root= account:
<pre>
$ ls -l /pnfs/psi.ch/cms/trivcat/store/t3groups/
total 5
drwxr%GREEN%w%ENDCOLOR%xr-x 2 root ethz-bphys 512 Nov  8 15:18 %BROWN%ethz-bphys%ENDCOLOR%
drwxr%GREEN%w%ENDCOLOR%xr-x 2 root ethz-ecal  512 Nov  8 15:18 %ORANGE%ethz-ecal%ENDCOLOR%
drwxr%GREEN%w%ENDCOLOR%xr-x 2 root ethz-ewk   512 Nov  8 15:18 %RED%ethz-ewk%ENDCOLOR%
drwxr%GREEN%w%ENDCOLOR%xr-x 2 root ethz-higgs 512 Nov  8 15:18 %BLUE%ethz-higgs%ENDCOLOR%
drwxr%GREEN%w%ENDCOLOR%xr-x 2 root ethz-susy  512 Nov  8 15:18 %PURPLE%ethz-susy%ENDCOLOR%
drwxr%GREEN%w%ENDCOLOR%xr-x 2 root psi-bphys  512 Nov  8 15:18 %TEAL%psi-bphys%ENDCOLOR%
drwxr%GREEN%w%ENDCOLOR%xr-x 2 root psi-pixel  512 Nov  8 15:18 %PURPLE%psi-pixel%ENDCOLOR%
drwxr%GREEN%w%ENDCOLOR%xr-x 2 root uniz-bphys 512 Nov  8 15:18 %ORANGE%uniz-bphys%ENDCOLOR%
drwxr%GREEN%w%ENDCOLOR%xr-x 2 root uniz-higgs 512 Nov  8 15:18 uniz-higgs
drwxr%GREEN%w%ENDCOLOR%xr-x 2 root uniz-pixel 512 Nov  8 15:18 %BLUE%uniz-pixel%ENDCOLOR%
</pre>

---++ Your T3 account and the UIs

We provide the following SL6 user interfaces ( UIs ), meant both for developing/testing your programs and for submitting them as jobs to the batch system:

%INCLUDE{"Tier3Policies" section="UisPerGroup"}%

   1. Log in to a =t3ui1*= machine by =ssh=; use the =-Y= or =-X= flag for working with X applications; you might also try to [[UserNxClientInstallation][connect by NX client]], which allows you to work efficiently with your graphical applications<pre>
ssh -Y username@t3ui12.psi.ch
</pre>
   1. *If you are an external PSI user you'll have to change your initial password the first time you log in*; simply use the standard =passwd= tool.
   1. Copy your grid credentials to the standard places, i.e.
to =~/.globus/userkey.pem= and =~/.globus/usercert.pem=, and make sure that their file permissions are properly set:
<pre>
-rw-r--r-- 1 feichtinger cms 2961 Mar 17  2008 usercert.pem
-r-------- 1 feichtinger cms 1917 Mar 17  2008 userkey.pem
</pre>
For details about how to extract those =.pem= files from your CERN User Grid-Certificate please read [[https://gridca.cern.ch/gridca/Help/?kbid=024010]].
   1. Source the grid environment associated with your login shell:
<pre>
source /swshare/psit3/etc/profile.d/cms_ui_env.sh  # for bash
source /swshare/psit3/etc/profile.d/cms_ui_env.csh # for tcsh
</pre>
   1. ( Optional ) Modify your shell init files in order to automatically load the grid environment; for BASH that means placing:<pre>
[ `echo $HOSTNAME | grep t3ui` ] && [ -r /swshare/psit3/etc/profile.d/cms_ui_env.sh ] && source /swshare/psit3/etc/profile.d/cms_ui_env.sh && echo "UI features enabled"
</pre> into your =~/.bash_profile= file.
   1. Run =env|sort= and verify that =/swshare/psit3/etc/profile.d/cms_ui_env.{sh,csh}= has properly activated the setting <pre>X509_USER_PROXY=/shome/$(id -un)/.x509up_u$(id -u)</pre>; that setting is *crucial* to access a CMS Grid SE from your T3 jobs.
   1. You must register with the [[https://lcg-voms.cern.ch:8443/vo/cms/vomrs?path=/RootNode][CMS "Virtual Organization"]] service or the following command =voms-proxy-init -voms cms= won't work. [[https://twiki.cern.ch/twiki/bin/view/CMSPublic/SWGuideLcgAccess#How_to_register_in_the_CMS_VO][CERN details about that]], e.g. who is your representative.
   1. Create a proxy certificate for CMS by:
<pre>
voms-proxy-init -voms cms
</pre>
If =voms-proxy-init -voms cms= fails, rerun it with the additional =-debug= flag; the error message is usually sufficient for the T3 Admins to troubleshoot the problem.<br/>
   1.
Test your access to the PSI Storage Element with the =test-dCacheProtocols= command; you should get an output like this (possibly without failed tests). Sometimes the XROOTD-WAN-* tests may get stuck due to I/O traffic coming from the Internet, but as a local T3 user you're actually supposed to use the XROOTD-LAN-* I/O doors, which are protected from Internet users, so you can simply skip the XROOTD-WAN-* tests by either pressing Ctrl-C or by passing the option %ORANGE%-i "XROOTD-WAN-write"%ENDCOLOR% ( see below )<pre>
$ test-dCacheProtocols
Test directory: /tmp/dcachetest-20150529-1449-14476
TEST: GFTP-write ...... [%GREEN%OK%ENDCOLOR%]  <-- vs gsiftp://t3se01.psi.ch:2811/
TEST: GFTP-ls ...... [%GREEN%OK%ENDCOLOR%]
TEST: GFTP-read ...... [%GREEN%OK%ENDCOLOR%]
TEST: DCAP-read ...... [%GREEN%OK%ENDCOLOR%]  <-- vs dcap://t3se01.psi.ch:22125/
TEST: SRMv2-write ...... [%GREEN%OK%ENDCOLOR%]  <-- vs srm://t3se01.psi.ch:8443/
TEST: SRMv2-ls ...... [%GREEN%OK%ENDCOLOR%]
TEST: SRMv2-read ...... [%GREEN%OK%ENDCOLOR%]
TEST: SRMv2-rm ...... [%GREEN%OK%ENDCOLOR%]
TEST: XROOTD-LAN-write ...... [%GREEN%OK%ENDCOLOR%]  <-- vs root://t3dcachedb03.psi.ch:1094/  <-- Use this if you run LOCAL jobs at T3 and you need root:// access to the T3 files
TEST: XROOTD-LAN-ls ...... [%GREEN%OK%ENDCOLOR%]
TEST: XROOTD-LAN-read ...... [%GREEN%OK%ENDCOLOR%]
TEST: XROOTD-LAN-rm ...... [%GREEN%OK%ENDCOLOR%]
TEST: XROOTD-WAN-write ...... [%GREEN%OK%ENDCOLOR%]  <-- vs root://t3se01.psi.ch:1094/  <-- Use this if you run REMOTE jobs and you need root:// access to the T3 files ; e.g. you're working on lxplus
TEST: XROOTD-WAN-ls ...... [%GREEN%OK%ENDCOLOR%]
TEST: XROOTD-WAN-read ...... [%GREEN%OK%ENDCOLOR%]
TEST: XROOTD-WAN-rm ...... [%GREEN%OK%ENDCOLOR%]
</pre>
   1. The =test-dCacheProtocols= tool can also be used to check a *remote* storage element (use the =-h= flag to get more info): e.g.
to check the CSCS storage element =storage01.lcg.cscs.ch=:<pre>
$ test-dCacheProtocols -s storage01.lcg.cscs.ch -x storage01.lcg.cscs.ch -p /pnfs/lcg.cscs.ch/cms/trivcat/store/user/martinel -i "%ORANGE%DCAP-read XROOTD-LAN-write XROOTD-WAN-write%ENDCOLOR%"
Test directory: /tmp/dcachetest-20150529-1545-16302
TEST: GFTP-write ...... [%GREEN%OK%ENDCOLOR%]
TEST: GFTP-ls ...... [%GREEN%OK%ENDCOLOR%]
TEST: GFTP-read ...... [%GREEN%OK%ENDCOLOR%]
TEST: %ORANGE%DCAP-read%ENDCOLOR% ...... [%ORANGE%IGNORE%ENDCOLOR%]
TEST: SRMv2-write ...... [%GREEN%OK%ENDCOLOR%]
TEST: SRMv2-ls ...... [%GREEN%OK%ENDCOLOR%]
TEST: SRMv2-read ...... [%GREEN%OK%ENDCOLOR%]
TEST: SRMv2-rm ...... [%GREEN%OK%ENDCOLOR%]
TEST: %ORANGE%XROOTD-LAN-write%ENDCOLOR% ...... [%ORANGE%IGNORE%ENDCOLOR%]
TEST: XROOTD-LAN-ls ...... [SKIPPED] (dependencies did not run: XROOTD-LAN-write)
TEST: XROOTD-LAN-read ...... [SKIPPED] (dependencies did not run: XROOTD-LAN-write)
TEST: XROOTD-LAN-rm ...... [SKIPPED] (dependencies did not run: XROOTD-LAN-write)
TEST: %ORANGE%XROOTD-WAN-write%ENDCOLOR% ...... [%ORANGE%IGNORE%ENDCOLOR%]
TEST: XROOTD-WAN-ls ...... [SKIPPED] (dependencies did not run: XROOTD-WAN-write)
TEST: XROOTD-WAN-read ...... [SKIPPED] (dependencies did not run: XROOTD-WAN-write)
TEST: XROOTD-WAN-rm ...... [SKIPPED] (dependencies did not run: XROOTD-WAN-write)
</pre>

---++ Creating a CERN AFS Ticket

In order to access the CERN =/afs= protected dirs ( e.g.
your home ) you'll need to create a ticket from the CERN AFS service:
<pre>
kinit ${Your_CERN_Username}@CERN.CH
aklog cern.ch
</pre>
The first command provides you with a Kerberos ticket, while the second command uses that Kerberos ticket to obtain an authentication token from CERN's AFS service.

---++ Saving the t3ui1* SSH pub keys on your daily laptop/desktop/server

On the Internet, attackers are constantly waiting for user mistakes, even a single misspelled %RED%letter%ENDCOLOR%, as in this case that occurred in 2015:
<pre>
$ ssh t3ui02.psi.%RED%s%ENDCOLOR%h
The authenticity of host 't3ui02.psi.%RED%s%ENDCOLOR%h (62.210.217.195)' can't be established.
RSA key fingerprint is c0:c5:af:36:4b:2d:1f:88:0d:f3:9c:08:cc:87:df:42.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 't3ui02.psi.%RED%s%ENDCOLOR%h,62.210.217.195' (RSA) to the list of known hosts.
at3user@t3ui02.psi.%RED%s%ENDCOLOR%h's %RED%password%ENDCOLOR%:
</pre>
%RED%The T3 Admins can't prevent a T3 user from confusing a =.ch= with a =.sh=, so pay attention to these cases ! %ENDCOLOR%

To avoid mistyping the T3 hostnames you can define aliases in your shell init files, for instance for BASH: <br/>
%TWISTY%
<pre>
$ grep alias ~/.bash_profile | grep t3ui
alias ui12='ssh -X at3user@t3ui12.psi.ch'
alias ui15='ssh -X at3user@t3ui15.psi.ch'
alias ui16='ssh -X at3user@t3ui16.psi.ch'
alias ui17='ssh -X at3user@t3ui17.psi.ch'
alias ui18='ssh -X at3user@t3ui18.psi.ch'
alias ui19='ssh -X at3user@t3ui19.psi.ch'
</pre>
%ENDTWISTY%
<br/><br/>
A further threat is the [[http://www.vandyke.com/solutions/ssh_overview/ssh_overview_threats.html][SSH man-in-the-middle attack]]; in order to detect it you have to register each =t3ui1*= SSH RSA public key in =$HOME/.ssh/known_hosts= by running these steps on each laptop/desktop/server ( also =lxplus= !
) that you'll usually use to log in to the T3:
<pre>
cp -p $HOME/.ssh/known_hosts $HOME/.ssh/known_hosts.`date +"%d-%m-%Y"`
mkdir /tmp/t3ssh/
for X in 19 18 17 16 15 12 ; do TMPFILE=`mktemp /tmp/t3ssh/XXXXXX` && ssh-keyscan -t rsa t3ui$X.psi.ch,t3ui$X,`host t3ui$X.psi.ch| awk '{ print $4}'` | cat - $HOME/.ssh/known_hosts | grep -v 'psi\.sh' > $TMPFILE && mv $TMPFILE $HOME/.ssh/known_hosts ; done
rm -rf /tmp/t3ssh
for X in 12 15 16 17 18 19 ; do echo -n "# entries for t3ui$X = " ; grep -c t3ui$X $HOME/.ssh/known_hosts ; grep -Hn --color t3ui$X $HOME/.ssh/known_hosts ; echo ; done
echo done
</pre>
The last =for= loop reports whether there are duplicated rows in =$HOME/.ssh/known_hosts= for a =t3ui1*= server; if there are, you're supposed to preserve the correct occurrence and delete the others. To delete, you can either use a tool like =sed -i= or simply an editor like =vim= or =emacs=. Once you have just one row per =t3ui1*= server, run this command and carefully compare your output with this output: <br/>
%TWISTY%
<pre>
$ ssh-keygen -l -f $HOME/.ssh/known_hosts | grep t3ui
</pre>
| 2048 | d0:9c:a0:e9:8f:9c:3f:b2:f1:88:6c:15:32:07:fc:a0 | t3ui12.psi.ch,t3ui12,192.33.123.132 (RSA) |
| 2048 | 77:1b:27:5e:c8:74:64:86:f8:50:f6:58:e6:6f:41:65 | t3ui15.psi.ch,t3ui15,192.33.123.135 (RSA) |
| 2048 | 35:bb:d6:be:64:86:8d:db:1d:57:43:ef:05:39:72:c8 | t3ui16.psi.ch,t3ui16,192.33.123.136 (RSA) |
| 2048 | 27:d1:57:f0:ac:da:1d:db:54:11:5c:46:4d:93:63:59 | t3ui17.psi.ch,t3ui17,192.33.123.137 (RSA) |
| 2048 | b1:56:06:5b:d3:da:1a:79:60:e9:02:16:be:82:fe:f7 | t3ui18.psi.ch,t3ui18,192.33.123.138 (RSA) |
| 2048 | 73:fe:97:b2:e7:54:df:99:50:dc:19:3d:6f:cd:01:11 | t3ui19.psi.ch,t3ui19,192.33.123.139 (RSA) |
%ENDTWISTY%
<br/><br/>
Then modify your client's =$HOME/.ssh/config= to force the =ssh= command to *always* check whether the server you're connecting to is already listed in the =$HOME/.ssh/known_hosts= file and to ask for your 'ok' for any server that is absent:
<pre>
StrictHostKeyChecking %GREEN%ask%ENDCOLOR%
</pre>
Your =$HOME/.ssh/config= can be more complex than just that line; study the [[http://linux.die.net/man/5/ssh_config][ssh_config man page]] or contact the T3 Admins. Ideally you should put =StrictHostKeyChecking %GREEN%yes%ENDCOLOR%=, but in real life that's impractical.

Now your ssh client will be able to detect the [[http://www.vandyke.com/solutions/ssh_overview/ssh_overview_threats.html][SSH man-in-the-middle attacks]], and if one occurs it will report:
<pre>
WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that the RSA host key has just been changed.
</pre>
The =t3ui1*= SSH RSA public and private keys will *never* be changed, so the case =It is also possible that the RSA host key has just been changed= will *never* be true.

---++ Installing the CERN CA files into your Web Browser

Install and 'trust' every CERN CA file in the Web Browser where your X509 Digital Certificate is also loaded ( that, in turn, you probably got from CERN as well ): <br/>
https://cafiles.cern.ch/cafiles/

---++ Applying for the VOMS Group =/cms/chcms= membership

A 'Swiss' VOMS Group =/cms/chcms= is available to assign more CPU/Storage priority to the community of LHC physicists in Switzerland; *all the Swiss CMS users have to apply for the VOMS Group =/cms/chcms= membership in order to automatically get*:
   * higher priority on the T2_CH_CSCS batch queues
   * additional job slots on the T2_CH_CSCS batch queues
   * additional =/pnfs= space in the T2_CH_CSCS grid storage
   * maybe in the future, the same file locking mechanism offered by the PSI T3

Once the =/cms/chcms= membership has been granted, the =voms-proxy-init --voms cms= command will *transparently* request both the general =/cms/= role and the specific =/cms/chcms= role; the command output will be:
<pre>
$ voms-proxy-info --all | grep /cms
attribute :
/cms/Role=NULL/Capability=NULL
attribute : %BLUE%/cms/chcms/Role=NULL/Capability=NULL%ENDCOLOR%
</pre>

To apply for the =/cms/chcms= attribute, load your X509 into your Web Browser ( but probably it's already there ), click on https://voms2.cern.ch:8443/voms/cms/group/edit.action?groupId=5 and ask for the =/cms/chcms= membership. Be aware of the port =:8443=, because your Institute's network policies might block outgoing traffic towards such an unusual TCP port; if that's the case, either escalate the problem to your Institute's network team or simply request the =/cms/chcms= membership from another network ( e.g., very simply, from your DSL at home ) or from =lxplus=.

---++ The T3 Admins Skype Accounts ( Optional )

In order both to help the users with their daily T3/T2 errors and to support their 'what-if' T3/T2 plans, the principal T3 Administrator 'Fabio Martinelli' has created the Skype account =fabio.martinelli_2=; *all the users are kindly invited to create a Skype account and add him as a contact*

Nevertheless, his Skype account is meant as a 2nd level of support; first of all *always* send an email to =cms-tier3@lists.psi.ch= describing your error, possibly how to reproduce it and from which UI, and providing logs + all the necessary info.

---++ Backup policies

Snapshots of the =/shome= files are taken *every hour* (kept for at most 36 hours) and *every day* (kept for at most 10 days). Further details here: HowToRetrieveBackupFiles. NO backups are instead available for the =/scratch= or =/pnfs= files, so be careful.
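As a quick final self-check of the account setup described in this page, the credential-permission requirement from the "Copy your grid credentials" step can be verified with a small shell helper. This is only a sketch: =check_globus_perms= is a hypothetical helper of ours, not a T3 tool, and it assumes the standard =~/.globus= layout and the permissions shown above (=usercert.pem= 644, =userkey.pem= 400).

```shell
# Sketch: verify that the ~/.globus credential files carry the permissions
# shown in the setup steps (usercert.pem: 644, userkey.pem: 400).
# check_globus_perms is a hypothetical helper, NOT an official T3 tool.
check_globus_perms() {
    dir="${1:-$HOME/.globus}"
    cert_mode=$(stat -c '%a' "$dir/usercert.pem" 2>/dev/null) || { echo "missing usercert.pem"; return 1; }
    key_mode=$(stat -c '%a' "$dir/userkey.pem" 2>/dev/null)   || { echo "missing userkey.pem"; return 1; }
    [ "$cert_mode" = "644" ] || { echo "usercert.pem mode is $cert_mode, expected 644"; return 1; }
    [ "$key_mode" = "400" ]  || { echo "userkey.pem mode is $key_mode, expected 400 (fix: chmod 400)"; return 1; }
    echo "globus credential permissions look ok"
}

# Demo on a throwaway directory so the real ~/.globus is untouched:
demo=$(mktemp -d)
touch "$demo/usercert.pem" "$demo/userkey.pem"
chmod 644 "$demo/usercert.pem"
chmod 400 "$demo/userkey.pem"
check_globus_perms "$demo"
rm -rf "$demo"
```

Called without an argument the helper inspects =$HOME/.globus= directly; it only reads file modes, so it is safe to run repeatedly.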
Topic revision: r51 - 2016-08-15 - FabioMartinelli