<!-- keep this as a security measure:
#uncomment if the subject should only be modifiable by the listed groups
# * Set ALLOWTOPICCHANGE = Main.TWikiAdminGroup,Main.CMSAdminGroup
# * Set ALLOWTOPICRENAME = Main.TWikiAdminGroup,Main.CMSAdminGroup
#uncomment this if you want the page to be viewable only by the listed groups
# * Set ALLOWTOPICVIEW = Main.TWikiAdminGroup,Main.CMSAdminGroup,Main.CMSAdminReaderGroup
-->
---+ !!How to access, set up, and test your account

%TOC%

---++ Mailing lists and communication with admins and other users

   * =cms-tier3-users@lists.psi.ch=: list through which we broadcast information (e.g. about downtimes). It can also be used for discussions among users (e.g. getting help from other users). *You must subscribe to this list* using [[https://psilists.ethz.ch/sympa/info/cms-tier3-users][its web interface]] ([[https://psilists.ethz.ch/sympa/arc/cms-tier3-users][list archives]]).
   * =cms-tier3@lists.psi.ch=: use this list to reach the Tier-3 admins, typically if you have a problem and need help. What you write to this list is only seen by the administrators.

Both lists are read by the administrators and are archived.

---++ First Steps on T3 User Interfaces (UI)

Three identical User Interface servers (UIs) are available for program development and T3 batch system job submission: %INCLUDE{"Tier3Policies" section="UisPerGroup"}%

   1. Use =ssh= to log in to a =t3ui0*= UI server. You can add the =-Y= or =-X= flag if you want to work with graphical X applications. <pre> ssh -Y username@t3ui02.psi.ch </pre>
   1. *If you are an ETHZ or UniZ user and do not have a regular PSI account, you will have to change your initial password after logging in for the first time.* Modify the initial password by using the =passwd= command.
   1. In order to work with resources on the WLCG grid you need a grid X.509 certificate and a matching private key. Copy these credentials to the standard locations =~/.globus/userkey.pem= and =~/.globus/usercert.pem= and make sure that their permissions are properly set. The user key must NOT be readable by any other user! <pre> chmod 600 userkey.pem
 chmod 644 usercert.pem </pre> For details about how to extract those =.pem= files from your CERN User Grid-Certificate (usually a password protected =.p12= file) please follow [[https://twiki.cern.ch/twiki/bin/view/CMSPublic/PersonalCertificate]].
   1. Make sure that your credentials are registered with the CMS Virtual Organization ([[https://twiki.cern.ch/twiki/bin/view/CMSPublic/SWGuideLcgAccess#How_to_register_in_the_CMS_VO][CERN details about that]]). Otherwise, the next step will fail.
   1. Create your short term credentials in the form of a proxy certificate with CMS extensions (valid for 168 hours): <pre> voms-proxy-init --voms cms --valid 168:00 </pre> If the command fails you can run it again with the =-debug= flag to troubleshoot the problem.
   1. Test your access to the PSI storage element using our =test-dCacheProtocols= testing suite: <pre> [feichtinger@t3ui01 ~]$ test-dCacheProtocols
TEST: GFTP-write ...... [OK]
TEST: GFTP-ls ...... [OK]
TEST: GFTP-read ...... [OK]
TEST: DCAP-read ...... [OK]
TEST: XROOTD-LAN-write ...... [OK]
TEST: XROOTD-LAN-stat ...... [OK]
TEST: XROOTD-LAN-read ...... [OK]
TEST: XROOTD-LAN-rm ...... [OK]
TEST: XROOTD-WAN-write ...... [OK]
TEST: XROOTD-WAN-read ...... [OK]
TEST: XROOTD-WAN-rm ...... [OK]
TEST: SRMv2-write ...... [OK]
TEST: SRMv2-ls ...... [OK]
TEST: SRMv2-read ...... [OK]
TEST: SRMv2-rm ...... [OK] </pre>
      * If a test fails, an error message will be written to the screen, and it will point you to a file containing the details of the error. Please send this file together with all relevant information to cms-tier3@lists.psi.ch.
      * *TIP*: You can use the =-v= (verbose) flag to see the commands that the script executes. This is a good way to learn about the slightly esoteric syntax for interacting with grid storage (see also the sketch after this list). If you supply a =-d= flag as well, the tests will not be run, but you will be able to look at all the actions that the script would execute.
   1. Test write access to your user area on the storage element. The user area is located underneath =/pnfs/psi.ch/cms/trivcat/store/user= and has by convention your *CMS hypernews name* as directory name; however, due to historic procedures, for some users the Tier-3 login name is used instead: =/pnfs/psi.ch/cms/trivcat/store/user/${your_cms_name}=. E.g. <pre> test-dCacheProtocols -l /pnfs/psi.ch/cms/trivcat/store/user/feichtinger
TEST: GFTP-write ...... [OK]
TEST: GFTP-ls ...... [OK]
TEST: GFTP-read ...... [OK]
TEST: DCAP-read ...... [OK]
TEST: XROOTD-LAN-write ...... [OK]
TEST: XROOTD-LAN-stat ...... [OK]
TEST: XROOTD-LAN-read ...... [OK]
TEST: XROOTD-LAN-rm ...... [OK]
TEST: XROOTD-WAN-write ...... [OK]
TEST: XROOTD-WAN-read ...... [OK]
TEST: XROOTD-WAN-rm ...... [OK]
TEST: SRMv2-write ...... [OK]
TEST: SRMv2-ls ...... [OK]
TEST: SRMv2-read ...... [OK]
TEST: SRMv2-rm ...... [OK] </pre>
   1. For setting up the CMS software environment: <pre> export VO_CMS_SW_DIR=/cvmfs/cms.cern.ch/
 source ${VO_CMS_SW_DIR}/cmsset_default.sh </pre>
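If you want to try a manual transfer to your user area by hand, the lines below are a minimal sketch (not an official recipe): they assume a valid VOMS proxy, =&lt;xrootd-door&gt;= is only a placeholder for the T3 xrootd endpoint (the =-v= output of =test-dCacheProtocols= shows the real endpoints used by the tests), and =${USER}= has to be replaced by your CMS hypernews name if that is how your directory is called.

<pre>
# create a small local test file
echo "transfer test" > /tmp/mytest.txt

# copy it into your user area on the storage element (placeholder endpoint!)
xrdcp /tmp/mytest.txt root://&lt;xrootd-door&gt;//pnfs/psi.ch/cms/trivcat/store/user/${USER}/mytest.txt

# list it and remove it again with the gfal tools
gfal-ls root://&lt;xrootd-door&gt;//pnfs/psi.ch/cms/trivcat/store/user/${USER}
gfal-rm root://&lt;xrootd-door&gt;//pnfs/psi.ch/cms/trivcat/store/user/${USER}/mytest.txt
</pre>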
---++ Understanding Tier-3 storage

The Tier-3 offers different kinds of storage, and it is important that you understand how to use them to their best advantage.

   * =/t3home/${USER}=: This is your home directory. It is relatively small (10 GB), but it is backed up daily. You should use it for your code, important documents, configuration files, etc. You should not use it for high I/O operations: since this file system is shared between all users on all nodes, it can easily get overloaded by I/O requests, which will typically result in delays (e.g. an =ls= will block for a few seconds before you get the output).
   * =/scratch/${USER}=: Each node, whether worker node or user interface, has a =/scratch= area. This is where you should perform tasks requiring intensive I/O operations. Your batch jobs should produce files in this area on the local node, and only at the end of the job move the whole file to the final target (see the sketch after this list).
   * =/work=: This is another shared file system which offers more space. It is implemented through a single storage server, so again, you should not use it for intensive I/O operations.
   * =/pnfs= (the Storage Element): The SE is the main large storage that you can use. It can be accessed in a number of different ways, each by a different protocol. The SE allows you to transfer large files between sites, but it also provides efficient file access for analysis jobs. The =test-dCacheProtocols= script above tests many of these protocols.
   * *NFS 4.1: NEW in June 2020.* You can now directly access the SE like a normal file system under the path =/pnfs/psi.ch/cms/trivcat/store/user/${USER}=. On the user interfaces the file system is mounted in read/write mode, so you can copy files into your area and create new ones. This is what you want to use if you want to read numpy datasets directly via native python from the SE. On the worker nodes the file system can be accessed in read-only mode.
      * *Do not run commands like =du= or =find= on this file system:* the area is very large and contains millions of files. Running such commands can take an hour and has an impact on the system.
      * NOTE: even though it feels like a normal file system that can be reached via a standard path, the underlying storage is not fully POSIX compliant. One of the most pronounced differences is that files are *immutable*, i.e. you cannot modify files once they have been created. You cannot open them in "append" mode. But you can delete a file and then replace it by one of the same name.
   * *gsiftp, SRM, xroot*: These protocols and their associated shell commands (=globus-url-copy=, =gfal*=, =xrdcp=) are useful for copying whole files between sites.
   * *xroot, dcap*: These protocols allow efficient random access to files. This is what you typically use from ROOT, but your application needs to provide support for these protocols. If this is not the case, e.g. when you want to analyze numpy files with native python, then you should use NFS 4.1 (which conceptually is the easiest to use, since it mostly feels and behaves like a normal file system).
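To illustrate the =/scratch= recommendation above, here is a minimal sketch of a Slurm batch script that writes its output on the local =/scratch= of the worker node and only copies the final file to the SE at the end. The actual workload, the output file name, and the =&lt;xrootd-door&gt;= endpoint are placeholders, and any partition/account options you normally use for your jobs still need to be added.

<pre>
#!/bin/bash
#SBATCH --job-name=scratch-example
#SBATCH --time=01:00:00

# work in a job-private directory on the local /scratch of the worker node
WORKDIR=/scratch/${USER}/${SLURM_JOB_ID}
mkdir -p "${WORKDIR}"
cd "${WORKDIR}"

# ... run your actual workload here, writing e.g. output.root locally ...

# only at the very end copy the result to the storage element
# (the worker nodes mount /pnfs read-only, so use xrdcp/gfal for the copy)
xrdcp output.root root://&lt;xrootd-door&gt;//pnfs/psi.ch/cms/trivcat/store/user/${USER}/output.root

# clean up the local scratch area
cd /
rm -rf "${WORKDIR}"
</pre>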
---+++ Backup policies

   * =/t3home=: Files are backed up daily and are available through snapshots.
   * =/work=: Files are not backed up, but they reside on high quality storage with some redundancy, and snapshots are available.
   * Storage element (=/pnfs=): Files are not backed up, but they reside on high quality storage with some redundancy.

Recovering files from snapshots for =/t3home= and =/work= is outlined [[HowToRetrieveBackupFiles][in this article]].

Attention: there are NO backups of =/scratch= and =/pnfs=.

---++ T3 policies

Please read and take note of our [[Tier3Policies][Policies]].

---++ Linux groups (partially OBSOLETE - needs to be revised)

Each T3 user belongs to both a primary group and a common secondary group %GREEN%cms%ENDCOLOR%; the latter is meant to classify common files like the ones downloaded by the [[https://cmsweb.cern.ch/phedex/][PhEDEx]] file transfer service. T3 primary groups are:

| *ETHZ* | *UniZ* | *PSI* |
| =ethz-ecal= | =uniz-higgs= | =psi-bphys= |
| =ethz-bphys= | =uniz-pixel= | =psi-pixel= |
| =ethz-ewk= | =uniz-bphys= | |
| =ethz-higgs= | | |
| =ethz-susy= | | |

For instance, this is the %BLUE%primary%ENDCOLOR% and the %GREEN%secondary%ENDCOLOR% group of a generic T3 account:
<pre>
$ id auser
uid=571(auser) gid=532(%BLUE%ethz-higgs%ENDCOLOR%) groups=532(%BLUE%ethz-higgs%ENDCOLOR%),500(%GREEN%cms%ENDCOLOR%)
</pre>
<!-- The following output is a fragment of the private user dirs =/pnfs/psi.ch/cms/trivcat/store/user/= :
<pre>
$ ls -l /pnfs/psi.ch/cms/trivcat/store/user | grep -v cms
total 56
drwxr-xr-x  2 alschmid %ORANGE%uniz-bphys%ENDCOLOR%  512 Feb 21  2013 alschmid
drwxr-xr-x  5 amarini  %RED%ethz-ewk%ENDCOLOR%       512 Nov  7 15:37 amarini
drwxr-xr-x  2 arizzi   %BROWN%ethz-bphys%ENDCOLOR%   512 Sep 16 17:49 arizzi
drwxr-xr-x  5 bean     %TEAL%psi-bphys%ENDCOLOR%     512 Aug 24  2010 bean
drwxr-xr-x  5 bianchi  %BLUE%ethz-higgs%ENDCOLOR%    512 Sep  9 09:40 bianchi
drwxr-xr-x 98 buchmann %PURPLE%ethz-susy%ENDCOLOR%   512 Nov  5 20:36 buchmann
...
</pre>
-->

The T3 group areas: =/pnfs/psi.ch/cms/trivcat/store/t3groups=
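If you want to check these group assignments on the storage element itself, a quick (and cheap) way is to list just your own user directory and the group areas via the NFS 4.1 mount; this is only a sketch, and the groups you see will of course depend on your account:

<pre>
ls -ld /pnfs/psi.ch/cms/trivcat/store/user/${USER}
ls -l /pnfs/psi.ch/cms/trivcat/store/t3groups
</pre>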
---++ Optional Initial Setups

---+++ Local Anaconda/Conda installation

You can install a local Miniconda environment with the following steps:

   * Only once: <pre> cd /work/${USER}/
 wget https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh
 sh Miniconda3-latest-Linux-x86_64.sh -b -p ./miniconda3
 rm Miniconda3-latest-Linux-x86_64.sh </pre>
   * Every time you want to use this conda environment: =export PATH=/work/${USER}/miniconda3/bin:${PATH}= (or =export PATH=${PWD}/miniconda3/bin:${PATH}= if you are already in =/work/${USER}=)

---+++ Installing the CERN CA files into your Web Browser

Install the [[https://cafiles.cern.ch/cafiles/][CERN CA files]] in your web browser; otherwise your browser will constantly warn you about the CERN =https://= URLs. Web browsers ship with many well-known [[https://en.wikipedia.org/wiki/Certificate_authority][CA files]] by default, but not with the CERN CA files.

---+++ Applying for the VOMS Group =/cms/chcms= membership

A dedicated 'Swiss' VOMS group called =/cms/chcms= is available; it grants additional rights on the CMS hardware resources installed at T2_CH_CSCS (Lugano), namely:

   * higher priority on the T2_CH_CSCS batch queues
   * additional job slots on the T2_CH_CSCS batch queues
   * additional =/pnfs= space inside the T2_CH_CSCS grid storage
   * during 2017, a group area like the T3 group areas =/pnfs/psi.ch/cms/trivcat/store/t3groups/=

When a user belongs to the =/cms/chcms= group and runs =voms-proxy-init --voms cms=, then =voms-proxy-info --all= will report the new %BLUE%/cms/chcms/Role=NULL/Capability=NULL%ENDCOLOR% attribute, like:
<pre>
$ voms-proxy-info --all | grep /cms
attribute : /cms/Role=NULL/Capability=NULL
attribute : %BLUE%/cms/chcms/Role=NULL/Capability=NULL%ENDCOLOR%
</pre>

To apply for =/cms/chcms= membership, load your X.509 certificate into your daily web browser (it is probably already there), then open https://voms2.cern.ch:8443/voms/cms/group/edit.action?groupId=5 and request the =/cms/chcms= membership. Be aware that port =:8443= might be blocked by your institute's firewall; if that is the case, contact your firewall team or simply try from another network (like your network at home).