<!-- keep this as a security measure:
#uncomment if the subject should only be modifiable by the listed groups
#   * Set ALLOWTOPICCHANGE = Main.TWikiAdminGroup,Main.CMSAdminGroup
#   * Set ALLOWTOPICRENAME = Main.TWikiAdminGroup,Main.CMSAdminGroup
#uncomment this if you want the page to be viewable only by the listed groups
#   * Set ALLOWTOPICVIEW = Main.TWikiAdminGroup,Main.CMSAdminGroup,Main.CMSAdminReaderGroup
-->
---+ !!How to access, set up, and test your account
%TOC%
---++ Compulsory Initial Setups
All the documentation is maintained in the T3 twiki pages: https://wiki.chipp.ch/twiki/bin/view/CmsTier3/WebHome .

Information about the two T3 mailing lists:
   * Subscribe to the =cms-tier3-users@lists.psi.ch= mailing list using [[https://psilists.ethz.ch/sympa/info/cms-tier3-users][its web interface]] ([[https://psilists.ethz.ch/sympa/arc/cms-tier3-users][list archives]]). This mailing list is used to communicate information on Tier-3 matters like downtimes, news and upgrades, and for discussions among users and admins.
   * To contact the CMS Tier-3 administrators write to =cms-tier3@lists.psi.ch= instead; no subscription is needed for this mailing list.
   * Both lists are read by the administrators and are archived. Mails sent to =cms-tier3-users@lists.psi.ch= are read by *everyone*, so they tend to get answered better and sooner, especially questions about specific CMS software ( CRAB3, CMSSW, Xrootd, ... ).
---+++ T3 policies
Read and respect the Tier3Policies.
---+++ Linux groups
Each T3 user belongs to a primary group and to the common secondary group %GREEN%cms%ENDCOLOR%; the latter owns shared files like the ones downloaded by the [[https://cmsweb.cern.ch/phedex/][PhEDEx]] file transfer service.
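On a UI you can check these two groups for your own account; a minimal sketch using only the standard =id= command (no T3-specific tools assumed):

```shell
# Print the primary group and the full group list of the current account;
# on the T3 the full list should also contain the common secondary group "cms".
primary=$(id -gn)     # primary group only, e.g. ethz-higgs
all=$(id -Gn)         # all groups, primary included
echo "primary group : $primary"
echo "all groups    : $all"
```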
T3 primary groups are:
| *ETHZ* | *UniZ* | *PSI* |
| =%ORANGE%ethz-ecal%ENDCOLOR%= | =uniz-higgs= | =%TEAL%psi-bphys%ENDCOLOR%= |
| =%BROWN%ethz-bphys%ENDCOLOR%= | =%BLUE%uniz-pixel%ENDCOLOR%= | =%PURPLE%psi-pixel%ENDCOLOR%= |
| =%RED%ethz-ewk%ENDCOLOR%= | =%ORANGE%uniz-bphys%ENDCOLOR%= | |
| =%BLUE%ethz-higgs%ENDCOLOR%= | | |
| =%PURPLE%ethz-susy%ENDCOLOR%= | | |
<!-- The T3 primary groups allow:
   * an easier =/pnfs /work /scratch /tmp= space monitoring
   * an easier batch system monitoring
   * to protect the =/pnfs= group files, because only the group members are able to delete those files; very often that's *NOT* guaranteed by the other CMS T1/T2/T3 sites!
   * to manage the group areas =/pnfs/psi.ch/cms/trivcat/store/t3groups/=; in such a dir each new file belongs to =root= and to a specific T3 group, no matter which group member uploaded it
-->
For instance, these are the %BLUE%primary%ENDCOLOR% and the %GREEN%secondary%ENDCOLOR% groups of a generic T3 account:
<pre>
$ id auser
uid=571(auser) gid=532(%BLUE%ethz-higgs%ENDCOLOR%) groups=532(%BLUE%ethz-higgs%ENDCOLOR%),500(%GREEN%cms%ENDCOLOR%)
</pre>
<!-- The following output is a fragment of the private user dirs =/pnfs/psi.ch/cms/trivcat/store/user/=:
<pre>
$ ls -l /pnfs/psi.ch/cms/trivcat/store/user | grep -v cms
total 56
drwxr-xr-x  2 alschmid %ORANGE%uniz-bphys%ENDCOLOR% 512 Feb 21  2013 alschmid
drwxr-xr-x  5 amarini  %RED%ethz-ewk%ENDCOLOR%   512 Nov  7 15:37 amarini
drwxr-xr-x  2 arizzi   %BROWN%ethz-bphys%ENDCOLOR% 512 Sep 16 17:49 arizzi
drwxr-xr-x  5 bean     %TEAL%psi-bphys%ENDCOLOR%  512 Aug 24  2010 bean
drwxr-xr-x  5 bianchi  %BLUE%ethz-higgs%ENDCOLOR% 512 Sep  9 09:40 bianchi
drwxr-xr-x 98 buchmann %PURPLE%ethz-susy%ENDCOLOR% 512 Nov  5 20:36 buchmann
...
</pre>
-->
The T3 group areas live under =/pnfs/psi.ch/cms/trivcat/store/t3groups=.
---+++ First Steps on T3 User Interfaces (UI)
Three identical User Interface servers ( UIs ) are available for program development and T3 batch system job submission:
%INCLUDE{"Tier3Policies" section="UisPerGroup"}%
   1. Log in to your =t3ui0*= server by =ssh=; use the =-Y= or =-X= flag for working with X applications: =ssh -Y !username@t3ui02.psi.ch=
   1. *If you are an external PSI user ( ETHZ, !UniZ, ... ), change the initial password ASAP from your UI* with the =passwd= command.
   1. Copy your grid credentials to =~/.globus/userkey.pem= and =~/.globus/usercert.pem= and make sure that their permissions are properly set: <pre>
chmod 400 userkey.pem
chmod 400 usercert.pem
</pre> For details about how to extract those =.pem= files from your CERN User Grid-Certificate ( usually a password-protected =.p12= file ) please follow [[https://twiki.cern.ch/twiki/bin/view/CMSPublic/PersonalCertificate]].
   1. Source the grid environment associated to your login shell: <pre>
source /swshare/psit3/etc/profile.d/cms_ui_env.sh    # for bash
source /swshare/psit3/etc/profile.d/cms_ui_env.csh   # for tcsh
</pre> To load the grid environment automatically, for instance for the BASH shell, you may add to your =~/.bash_profile= file: <pre>
[ `echo $HOSTNAME | grep t3ui` ] && [ -r /swshare/psit3/etc/profile.d/cms_ui_env.sh ] && source /swshare/psit3/etc/profile.d/cms_ui_env.sh && echo "UI features enabled"
</pre>
   1. Run =env | sort= and verify that =/swshare/psit3/etc/profile.d/cms_ui_env.{sh,csh}= has properly activated the setting <pre>
X509_USER_PROXY=/t3home/$(id -un)/.x509up_u$(id -u)
</pre> That setting is *crucial* to access a CMS Grid SE from your T3 jobs.
   1. You must be registered in the CMS "Virtual Organization" ( [[https://twiki.cern.ch/twiki/bin/view/CMSPublic/SWGuideLcgAccess#How_to_register_in_the_CMS_VO][CERN details about that]] ).
   1. Create a proxy certificate for CMS: <pre>
voms-proxy-init -voms cms
</pre> If the command =voms-proxy-init -voms cms= fails, run it again with the =-debug= flag to troubleshoot the problem.
   1. Test your basic access to the PSI storage element using our =test-dCacheProtocols= command: <pre>
$ test-dCacheProtocols
Test directory: /tmp/dcachetest-20190215-1649-89361
TEST: GFTP-write ...... [OK]
TEST: GFTP-ls ...... [OK]
TEST: GFTP-read ...... [OK]
TEST: DCAP-read ...... [OK]
TEST: SRMv2-write ...... [OK]
TEST: SRMv2-ls ...... [OK]
TEST: SRMv2-read ...... [OK]
TEST: SRMv2-rm ...... [OK]
TEST: XROOTD-LAN-write ...... [OK]
TEST: XROOTD-LAN-ls ...... [OK]
TEST: XROOTD-LAN-read ...... [OK]
TEST: XROOTD-LAN-rm ...... [OK]
TEST: XROOTD-WAN-write ...... [OK]
TEST: XROOTD-WAN-read ...... [OK]
TEST: XROOTD-WAN-rm ...... [OK]
</pre>
      * NOTE 1: sometimes the XROOTD-WAN-* tests might get stuck due to excessive I/O traffic over the WAN. Try again.
      * NOTE 2: you can use the =-v= ( verbose ) flag to see the commands that the script executes.
   1. Test write access to your user area on the storage element. The user area is located underneath =/pnfs/psi.ch/cms/trivcat/store/user= and has your login name as directory name: <pre>
$ test-dCacheProtocols -l /pnfs/psi.ch/cms/trivcat/store/user/$(id -nu)
Test directory: /tmp/dcachetest-20190215-1654-89843
TEST: GFTP-write ...... [OK]
TEST: GFTP-ls ...... [OK]
TEST: GFTP-read ...... [OK]
TEST: DCAP-read ...... [OK]
TEST: SRMv2-write ...... [OK]
TEST: SRMv2-ls ...... [OK]
TEST: SRMv2-read ...... [OK]
TEST: SRMv2-rm ...... [OK]
TEST: XROOTD-LAN-write ...... [OK]
TEST: XROOTD-LAN-ls ...... [OK]
TEST: XROOTD-LAN-read ...... [OK]
TEST: XROOTD-LAN-rm ...... [OK]
TEST: XROOTD-WAN-write ...... [OK]
TEST: XROOTD-WAN-read ...... [OK]
TEST: XROOTD-WAN-rm ...... [OK]
</pre>
   1. The =test-dCacheProtocols= tool can also be run against a *remote* storage element ( use the =-h= flag to get more info about it ). Since in that case the test is not executed locally at CSCS, we need to ignore all the tests that only work on the local LAN ( =-i "%ORANGE%DCAP-read XROOTD-LAN-write XROOTD-WAN-write%ENDCOLOR%"= ); e.g. to check the CSCS storage element =storage01.lcg.cscs.ch=: <pre>
$ test-dCacheProtocols -s storage01.lcg.cscs.ch -x storage01.lcg.cscs.ch -l /pnfs/lcg.cscs.ch/cms/trivcat/store/user/martinel -i "%ORANGE%DCAP-read XROOTD-LAN-write XROOTD-WAN-write%ENDCOLOR%"
Test directory: /tmp/dcachetest-20150529-1545-16302
TEST: GFTP-write ...... [%GREEN%OK%ENDCOLOR%]
TEST: GFTP-ls ...... [%GREEN%OK%ENDCOLOR%]
TEST: GFTP-read ...... [%GREEN%OK%ENDCOLOR%]
TEST: %ORANGE%DCAP-read%ENDCOLOR% ...... [%ORANGE%IGNORE%ENDCOLOR%]
TEST: SRMv2-write ...... [%GREEN%OK%ENDCOLOR%]
TEST: SRMv2-ls ...... [%GREEN%OK%ENDCOLOR%]
TEST: SRMv2-read ...... [%GREEN%OK%ENDCOLOR%]
TEST: SRMv2-rm ...... [%GREEN%OK%ENDCOLOR%]
TEST: %ORANGE%XROOTD-LAN-write%ENDCOLOR% ...... [%ORANGE%IGNORE%ENDCOLOR%]
TEST: XROOTD-LAN-ls ...... [SKIPPED] (dependencies did not run: XROOTD-LAN-write)
TEST: XROOTD-LAN-read ...... [SKIPPED] (dependencies did not run: XROOTD-LAN-write)
TEST: XROOTD-LAN-rm ...... [SKIPPED] (dependencies did not run: XROOTD-LAN-write)
TEST: %ORANGE%XROOTD-WAN-write%ENDCOLOR% ...... [%ORANGE%IGNORE%ENDCOLOR%]
TEST: XROOTD-WAN-ls ...... [SKIPPED] (dependencies did not run: XROOTD-WAN-write)
TEST: XROOTD-WAN-read ...... [SKIPPED] (dependencies did not run: XROOTD-WAN-write)
TEST: XROOTD-WAN-rm ...... [SKIPPED] (dependencies did not run: XROOTD-WAN-write)
</pre>
---+++ Backup policies
Your =/t3home= and =/work= files are backed up daily. Details on how to recover a file are at HowToRetrieveBackupFiles. There are *NO* backups of =/tmp=, =/scratch= and =/pnfs=, so pay attention there!
---++ Optional Initial Setups
---+++ Installing the CERN CA files into your Web Browser
Install the [[https://cafiles.cern.ch/cafiles/][CERN CA files]] in your Web Browser, otherwise the browser might constantly bother you about all the CERN =https://= URLs; Web Browsers typically ship with many well-known [[https://en.wikipedia.org/wiki/Certificate_authority][CA files]] by default, but not with the CERN ones.
---+++ Applying for the VOMS Group =/cms/chcms= membership
A dedicated 'Swiss' VOMS group called =/cms/chcms= is available; it grants additional rights on the CMS HW resources installed at T2_CH_CSCS, Lugano, namely:
   * higher priority on the T2_CH_CSCS batch queues
   * additional job slots on the T2_CH_CSCS batch queues
   * additional =/pnfs= space inside the T2_CH_CSCS grid storage
   * during 2017, a group area like the T3 group areas =/pnfs/psi.ch/cms/trivcat/store/t3groups/=
When a user belongs to the =/cms/chcms= group and runs =voms-proxy-init --voms cms=, the =voms-proxy-info --all= output will report the new %BLUE%/cms/chcms/Role=NULL/Capability=NULL%ENDCOLOR% attribute, like:
<pre>
$ voms-proxy-info --all | grep /cms
attribute : /cms/Role=NULL/Capability=NULL
attribute : %BLUE%/cms/chcms/Role=NULL/Capability=NULL%ENDCOLOR%
</pre>
To apply for the =/cms/chcms= membership, load your X509 certificate into your daily Web Browser ( probably it is already there ), then open https://voms2.cern.ch:8443/voms/cms/group/edit.action?groupId=5 and request the =/cms/chcms= membership. Be aware that the port =:8443= might be blocked by your institute firewall; if that's the case, contact your firewall team or simply try from another network ( like your home network ).
<!-- Saving the UIs SSH pub host keys
Hackers are constantly waiting for a user mistake, even a single misspelled %RED%letter%ENDCOLOR%, like in this case that occurred in 2015:
<pre>
$ ssh t3ui02.psi.%RED%s%ENDCOLOR%h
The authenticity of host 't3ui02.psi.%RED%s%ENDCOLOR%h (62.210.217.195)' can't be established.
RSA key fingerprint is c0:c5:af:36:4b:2d:1f:88:0d:f3:9c:08:cc:87:df:42.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 't3ui02.psi.%RED%s%ENDCOLOR%h,62.210.217.195' (RSA) to the list of known hosts.
at3user@t3ui02.psi.%RED%s%ENDCOLOR%h's %RED%password%ENDCOLOR%:
</pre>
%RED%The T3 Admins can't prevent a T3 user from confusing a =.ch= with a =.sh=, so pay attention to these cases!%ENDCOLOR%

To avoid mistyping the T3 hostnames, you can define the following aliases in your shell:
<pre>
$ grep alias ~/.bash_profile | grep t3ui
alias ui01="ssh -X $USER@t3ui01.psi.ch"
alias ui02="ssh -X $USER@t3ui02.psi.ch"
alias ui03="ssh -X $USER@t3ui03.psi.ch"
</pre>
Another attack is the [[http://www.vandyke.com/solutions/ssh_overview/ssh_overview_threats.html][SSH man in the middle attack]]; to prevent it, proactively save each =t3ui0*= SSH RSA public key in =$HOME/.ssh/known_hosts= by running these commands on each of your daily laptops/PCs/servers ( also on =lxplus=! ):
<pre>
cp -p $HOME/.ssh/known_hosts $HOME/.ssh/known_hosts.`date +"%d-%m-%Y"`
mkdir /tmp/t3ssh/
for X in 01 02 03 ; do TMPFILE=`mktemp /tmp/t3ssh/XXXXXX` && ssh-keyscan -t rsa t3ui$X.psi.ch,t3ui$X,`host t3ui$X.psi.ch | awk '{ print $4 }'` | cat - $HOME/.ssh/known_hosts | grep -v 'psi\.sh' > $TMPFILE && mv $TMPFILE $HOME/.ssh/known_hosts ; done
rm -rf /tmp/t3ssh
for X in 01 02 03 ; do echo -n "# entries for t3ui$X = " ; grep -c t3ui$X $HOME/.ssh/known_hosts ; grep -Hn --color t3ui$X $HOME/.ssh/known_hosts ; echo ; done
echo done
</pre>
The last =for= loop reports whether there are duplicated rows in =$HOME/.ssh/known_hosts= for a =t3ui0*= server; if there are, preserve the correct occurrence and delete the others, either with =sed -i= or with an editor like =vim= / =emacs= / =nano= / =nedit=. Once you have just one row per =t3ui0*= server, run this command and carefully compare your output with this output:
<pre>
$ ssh-keygen -l -f $HOME/.ssh/known_hosts | grep t3ui
2048 SHA256:0Z8Su5R4aZthbePGMM14mEKxYFOuKyrnUe9GjU0m6vM t3ui01.psi.ch,192.33.123.23 (RSA)
2048 SHA256:2qA9YDNeOEbGYjIdpRdBJpywQDne5gRbRvN/myL5P8o t3ui02.psi.ch,192.33.123.29 (RSA)
2048 SHA256:SoIL0H0ueyASNkyYID3a16AIHuAEP7AQ5iaQ6vrvzfk t3ui03.psi.ch,192.33.123.85 (RSA)
</pre>
Then modify your client =$HOME/.ssh/config= to force the =ssh= command to *always* check whether the server you're connecting to is already listed in =$HOME/.ssh/known_hosts=, and to ask for your 'OK' for the servers that are missing:
<pre>
StrictHostKeyChecking %GREEN%ask%ENDCOLOR%
</pre>
Your =$HOME/.ssh/config= can be more complex than just that line; study the [[http://linux.die.net/man/5/ssh_config][ssh_config man page]] or contact the T3 Admins. Ideally you should set =StrictHostKeyChecking %GREEN%yes%ENDCOLOR%=, but in real life that's impractical.
Now your ssh client will be able to detect [[http://www.vandyke.com/solutions/ssh_overview/ssh_overview_threats.html][SSH man in the middle attacks]], and if one occurs it will report:
<pre>
WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that the RSA host key has just been changed.
</pre>
The =t3ui0*= SSH RSA public/private keys will *never* change, so the case =It is also possible that the RSA host key has just been changed= will actually *never* occur.
-->
---+++ Creating an AFS CERN Ticket
To access the CERN =/afs= protected dirs ( e.g. your CERN home on AFS ) you'll need to create a ticket from CERN AFS:
<pre>
kinit ${Your_CERN_Username}@CERN.CH
aklog cern.ch
</pre>
The first command provides you with a Kerberos ticket, while the second uses that ticket to obtain an authentication token from CERN's AFS service.
<!-- Web browsing your =/work= files on demand
We don't provide an =http(s)://= URL to browse your =/work= logs/errors/programs, because interest in such a web portal has always been modest, but you can turn on a private website rooted at an arbitrary %BLUE%dir%ENDCOLOR% of yours by simply using SSH + Python, like in the following example ( replace =t3ui02= with your daily =t3ui= server and the %BLUE%dir%ENDCOLOR% with a dir meaningful for your case, for instance %BLUE%~%ENDCOLOR% ):
<pre>
ssh -L 8000:t3ui02.psi.ch:8000 t3user@t3ui02.psi.ch "killall python ; cd %BLUE%/mnt/t3nfs01/data01/shome/ytakahas/work/TauTau/SFrameAnalysis/Scripts/%ENDCOLOR% && python -m SimpleHTTPServer"
</pre>
Then open your Web browser at http://localhost:8000/ . That's it.
The preliminary =killall python ;= command is meant to kill a previous =python -m SimpleHTTPServer= run that might still be active, but if you have other =python= programs running %RED%they will also be killed%ENDCOLOR%; in that case delete the initial =killall python ;= command and kill a previous =python -m SimpleHTTPServer= by:
<pre>
t3ui02 $ kill -9 `pgrep -f "^python -m SimpleHTTPServer*"`
</pre>
If some other T3 user is already using the =t3ui02.psi.ch:8000= TCP port, use another port like %GREEN%8001%ENDCOLOR%, %GREEN%8002%ENDCOLOR%, etc.:
<pre>
ssh -L 8000:t3ui02.psi.ch:%GREEN%8001%ENDCOLOR% t3user@t3ui02.psi.ch "killall python ; cd %BLUE%/mnt/t3nfs01/data01/shome/ytakahas/work/TauTau/SFrameAnalysis/Scripts/%ENDCOLOR% && python -m SimpleHTTPServer %GREEN%8001%ENDCOLOR%"
</pre>
-->
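The two AFS commands above can be wrapped in a small helper; a sketch assuming the standard Kerberos/AFS clients ( =kinit=, =aklog= ) installed on the UIs. The function name and the =DRY_RUN= switch are our own additions for illustration; with =DRY_RUN=1= the helper only prints the commands it would run:

```shell
# Obtain a CERN Kerberos ticket and turn it into an AFS token.
cern_afs_login() {
    principal="$1@CERN.CH"            # e.g. jdoe -> jdoe@CERN.CH
    if [ "${DRY_RUN:-0}" = "1" ]; then
        # Only show the commands, do not contact CERN.
        echo "kinit $principal"
        echo "aklog cern.ch"
    else
        kinit "$principal" && aklog cern.ch
    fi
}

DRY_RUN=1 cern_afs_login jdoe   # show what would be run for user "jdoe"
```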
Topic revision: r65 - 2019-02-28 - NinaLoktionova