<!-- keep this as a security measure: #uncomment if the subject should only be modifiable by the listed groups * Set ALLOWTOPICCHANGE = Main.TWikiAdminGroup,Main.CMSAdminGroup * Set ALLOWTOPICRENAME = Main.TWikiAdminGroup,Main.CMSAdminGroup #uncomment this if you want the page only be viewable by the listed groups # * Set ALLOWTOPICVIEW = Main.TWikiAdminGroup,Main.CMSAdminGroup,Main.CMSAdminReaderGroup -->
---+!! How to work with the Storage Element

%TOC%

---++ SE clients

Storage data (based on dCache) is located under the directory /pnfs/psi.ch/cms/trivcat/store/user/%BLUE%username%ENDCOLOR%.
Data are accessible with the standard *gfal2* (Grid File Access Library), *xrdcp* and *dcap* (obsolete) utilities.

On login and compute nodes /pnfs is mounted read-only. Under /pnfs you can use the common Linux commands =cd=, =ls=, =find=, =du=, =stat=, i.e. metadata-based commands that display file lists, sizes, last access times, etc., but *not* file-content commands (it is not possible to =cat= or =grep= a file).

---++ How to copy files between dCache and a local machine

---+++ XROOTD examples

   * *The T3 Xrootd LAN*

[[http://xrootd.org/doc/man/xrdfs.1.html][xrdfs]] executed on a UI against the LAN Xrootd service:
<pre>
$ xrdfs t3dcachedb03.psi.ch ls -l -u //pnfs/psi.ch/cms/trivcat/store/user/$USER/
...
-rw- 2015-03-15 22:03:41 5356235878 root://192.33.123.26:1094///pnfs/psi.ch/cms/trivcat/store/user/martinelli_f/xroot
-rw- 2015-03-15 22:06:04 131870 root://192.33.123.26:1094///pnfs/psi.ch/cms/trivcat/store/user/martinelli_f/xrootd.
-rw- 2015-03-15 22:06:45 1580023632 root://192.33.123.26:1094///pnfs/psi.ch/cms/trivcat/store/user/martinelli_f/%BLUE%ZllH.DiJetPt.Mar1.DY1JetsToLL_M-50_TuneZ2Star_8TeV-madgraph_procV2_mergeV1V2.root%ENDCOLOR%
...
</pre>

[[http://xrootd.org/doc/man/xrdcp.1.html][xrdcp]] executed on a UI against the LAN Xrootd service:
<pre>
$ xrdcp -d 1 root://t3dcachedb.psi.ch:1094///pnfs/psi.ch/cms/trivcat/store/user/$USER/%BLUE%ZllH.DiJetPt.Mar1.DY1JetsToLL_M-50_TuneZ2Star_8TeV-madgraph_procV2_mergeV1V2.root%ENDCOLOR% /dev/null -f
[1.472GB/1.472GB][100%][==================================================][94.18MB/s]
</pre>

   * *The T3 Xrootd WAN*

The read-write Xrootd service exposed to the Internet ("WAN") is reachable at =root://t3se01.psi.ch:1094//=. This service is intentionally NOT connected to the [[https://twiki.cern.ch/twiki/bin/view/CMSPublic/WorkBookXrootdService][CMS AAA]] =cms-xrd-global.cern.ch= service because of the 2016 CMS policy that permanently excludes all T3 sites and dynamically excludes misbehaving T1/T2 sites. Nevertheless, =root://t3se01.psi.ch:1094//= is reachable from the Internet because the same [[https://twiki.cern.ch/twiki/bin/view/CMSPublic/WorkBookXrootdService][CMS AAA]] policy connects it to =cms-xrd-transit.cern.ch=, as the following example shows:
<pre>
$ xrdfs cms-xrd-transit.cern.ch locate /store/mc/RunIIFall15MiniAODv2/ZprimeToWW_narrow_M-3500_13TeV-madgraph/MINIAODSIM/PU25nsData2015v1_76X_mcRun2_asymptotic_v12-v1/00000/86A261F4-3BB8-E511-88EE-C81F66B73F37.root
[::192.33.123.24]:1095 Server Read

$ host 192.33.123.24
24.123.33.192.in-addr.arpa domain name pointer t3se01.psi.ch.
$ xrdcp --force root://cms-xrd-transit.cern.ch//store/mc/RunIIFall15MiniAODv2/ZprimeToWW_narrow_M-3500_13TeV-madgraph/MINIAODSIM/PU25nsData2015v1_76X_mcRun2_asymptotic_v12-v1/00000/86A261F4-3BB8-E511-88EE-C81F66B73F37.root /dev/null
[32MB/32MB][100%][==================================================][32MB/s]
</pre>

---+++ ROOT examples

   * *Reading a file in ROOT via xrootd* (see https://confluence.slac.stanford.edu/display/ds/Using+Xrootd+from+root)
<pre>
$ root -l
root [1] TFile *_file0 = TFile::Open("root://%BLUE%t3dcachedb03.psi.ch%ENDCOLOR%:1094//pnfs/psi.ch/cms/trivcat/store/user/leo/whatever.root")
</pre>

   * *Reading a file in ROOT via dcap*
<pre>
$ root -l
root [1] TFile *_file0 = TFile::Open("dcap://%BLUE%t3se01.psi.ch%ENDCOLOR%:22125//pnfs/psi.ch/cms/trivcat/store/user/leo/whatever.root")
</pre>

   * *Merging two ROOT files online with hadd and gsidcap*

To merge two ROOT files located at T3 *online* (i.e. without downloading them first), you can use the ROOT tool =hadd=:
%TWISTY%
<pre>
$ source /swshare/ROOT/root_v5.34.18_slc6_amd64_py26_pythia6/bin/thisroot.sh
$ which hadd
/swshare/ROOT/root_v5.34.18_slc6_amd64_py26_pythia6/bin/hadd
$ hadd %ORANGE%-f0%ENDCOLOR% %GREEN%gsidcap://t3se01:22128/pnfs/psi.ch/cms/trivcat/store/user/at3user/merged.root%ENDCOLOR% %BLUE%gsidcap://t3se01:22128/pnfs/psi.ch/cms/trivcat/store/user/at3user/babies/QCD-Pt300to475/QCD_Pt300to470_PU_S14_POSTLS170/treeProducerSusyFullHad_tree_12.root%ENDCOLOR% %BLUE%gsidcap://t3se01:22128/pnfs/psi.ch/cms/trivcat/store/user/at3user/babies/QCD-Pt300to475/QCD_Pt300to470_PU_S14_POSTLS170/treeProducerSusyFullHad_tree_13.root%ENDCOLOR%
hadd %ORANGE%Target%ENDCOLOR% file: %GREEN%gsidcap://t3se01:22128/pnfs/psi.ch/cms/trivcat/store/user/at3user/merged.root%ENDCOLOR%
hadd %ORANGE%Source%ENDCOLOR% file 1: %BLUE%gsidcap://t3se01:22128/pnfs/psi.ch/cms/trivcat/store/user/at3user/babies/QCD-Pt300to475/QCD_Pt300to470_PU_S14_POSTLS170/treeProducerSusyFullHad_tree_12.root%ENDCOLOR%
hadd %ORANGE%Source%ENDCOLOR% file 2:
%BLUE%gsidcap://t3se01:22128/pnfs/psi.ch/cms/trivcat/store/user/at3user/babies/QCD-Pt300to475/QCD_Pt300to470_PU_S14_POSTLS170/treeProducerSusyFullHad_tree_13.root%ENDCOLOR%
hadd Target path: gsidcap://t3se01:22128/pnfs/psi.ch/cms/trivcat/store/user/at3user/merged.root:/
$ ll /pnfs/psi.ch/cms/trivcat/store/user/at3user/merged.root
-rw-r--r-- 1 at3user ethz-susy 87M Oct 3 16:41 /pnfs/psi.ch/cms/trivcat/store/user/at3user/merged.root
</pre>
%ENDTWISTY%
Because =gsidcap= is usually offered only as a LAN protocol at Tier 1/2/3 sites, you are supposed to run =hadd= from a =t3ui*= server, not from =lxplus= or some other external UI, and both the ROOT files you merge *online* and the final result must be stored at T3.

---+++ GFAL2 examples

gfal2 is the CERN-standard toolset for interacting with all Grid SEs. The following gfal2 tools are available: *gfal-cat, gfal-copy, gfal-ls, gfal-mkdir, gfal-rm, gfal-save, gfal-sum, gfal-xattr*, each with a corresponding manual page (e.g. *$ man gfal-rm*).
We recommend using =%BLUE%root%ENDCOLOR%://t3dcachedb03.psi.ch= to upload or download a file:
<pre>
$ gfal-copy --force %BLUE%root%ENDCOLOR%://t3dcachedb03.psi.ch/pnfs/psi.ch/cms/t3-nagios/1MB-test-file_pool_t3fs14_cms_11 file:////$PWD/
</pre>
Some other protocol options:
<pre>
$ gfal-copy --force %BLUE%gsiftp%ENDCOLOR%://t3se01.psi.ch/pnfs/psi.ch/cms/t3-nagios/1MB-test-file_pool_t3fs14_cms_11 file:////$PWD/
<!--$ gfal-copy --force %BLUE%srm%ENDCOLOR%://t3se01.psi.ch/pnfs/psi.ch/cms/t3-nagios/1MB-test-file_pool_t3fs14_cms_11 file:////$PWD/ -->
$ gfal-copy --force %BLUE%gsidcap%ENDCOLOR%://t3se01.psi.ch/pnfs/psi.ch/cms/t3-nagios/1MB-test-file_pool_t3fs14_cms_11 file:////$PWD/
</pre>
Further examples:
<pre>
$ %BLUE%gfal-mkdir%ENDCOLOR% root://t3dcachedb03.psi.ch/pnfs/psi.ch/cms/trivcat/store/user/user_id/testdir
</pre>
<pre>
$ %BLUE%gfal-copy%ENDCOLOR% root://t3dcachedb03.psi.ch//pnfs/psi.ch/cms/t3-nagios/1MB-test-file_pool_t3fs14_cms_11 file:/dev/null -f
Copying root://t3dcachedb03.psi.ch//pnfs/psi.ch/cms/t3-nagios/1MB-test-file_pool_t3fs14_cms_11 [DONE] after 0s
$ %BLUE%gfal-ls -l%ENDCOLOR% root://t3dcachedb03.psi.ch//pnfs/psi.ch/cms/trivcat/store/user
dr-xr-xr-x 0 0 0 512 Feb 21 2013 alschmid
...
</pre>
Remove a file from dCache:
<pre>
$ %BLUE%gfal-rm%ENDCOLOR% root://t3dcachedb03.psi.ch//pnfs/psi.ch/cms/trivcat/store/user/auser/%GREEN%myfile%ENDCOLOR%
</pre>
Erase a whole remote (non-empty) dir with all its content:
<pre>
$ %BLUE%gfal-rm -r %ENDCOLOR% root://t3dcachedb03.psi.ch//pnfs/psi.ch/cms/trivcat/store/user/auser/%GREEN%dir-name%ENDCOLOR%
</pre>
Action of the =%BLUE%gfal-save%ENDCOLOR%= and =%BLUE%gfal-cat%ENDCOLOR%= commands:
<pre>
$ cat %GREEN%myfile%ENDCOLOR%
%ORANGE%Hello T3%ENDCOLOR%
$ cat %GREEN%myfile%ENDCOLOR% | %BLUE%gfal-save%ENDCOLOR% root://t3dcachedb03.psi.ch//pnfs/psi.ch/cms/trivcat/store/user/auser/%GREEN%myfile%ENDCOLOR%
$ %BLUE%gfal-cat%ENDCOLOR% root://t3dcachedb03.psi.ch//pnfs/psi.ch/cms/trivcat/store/user/auser/%GREEN%myfile%ENDCOLOR%
%ORANGE%Hello T3%ENDCOLOR%
</pre>

---+++ DCAP examples

To copy a file from the SE to a local disk, for instance to /scratch, use the =dcap= (*the only option to transfer data without a grid certificate*) and =gsidcap= protocols.
These protocols are blocked towards the outside, so you cannot use them to download files from a machine outside PSI such as =lxplus=.
<pre>
dccp dcap://%BLUE%t3se01.psi.ch%ENDCOLOR%:22125//pnfs/psi.ch/cms/testing/test100 /scratch/myfile
</pre>
<pre>
dccp gsidcap://%BLUE%t3se01.psi.ch%ENDCOLOR%:22128/pnfs/psi.ch/cms/testing/test100 /scratch/myfile
</pre>
<!-- uberftp examples
uberftp is a !GridFTP interactive client.
   * *Interactively accessing a !GridFTP server:*
<pre>
$ uberftp %BLUE%t3se01.psi.ch%ENDCOLOR%
220 GSI FTP door ready
200 User :globus-mapping: logged in
UberFTP (2.8)> %BLUE%cd%ENDCOLOR% /pnfs/psi.ch/cms/trivcat/store/user/martinelli_f
UberFTP (2.8)> %BLUE%ls%ENDCOLOR%
drwx------ 1 martinelli_f martinelli_f 512 Nov 25 2014 sgejob-5939967
...
</pre>
   * *Listing a remote directory or files:*
<pre>
$ uberftp %BLUE%t3se01.psi.ch%ENDCOLOR% '%BLUE%ls%ENDCOLOR% /pnfs/psi.ch/cms/trivcat/store'
$ uberftp -ls gsiftp://t3se01.psi.ch/pnfs/psi.ch/cms/trivcat/store/user/username
</pre>
   * *Making a new directory on dCache:*
<pre>
$ uberftp -mkdir gsiftp://t3se01.psi.ch/pnfs/psi.ch/cms/trivcat/store/user/username/newdirectory
</pre>
   * *Copying a file from dCache to a local node:*
<pre>
$ uberftp %BLUE%t3se01.psi.ch%ENDCOLOR% '%BLUE%get%ENDCOLOR% /pnfs/psi.ch/cms/testing/test100'
$ uberftp gsiftp://t3se01.psi.ch/pnfs/psi.ch/cms//trivcat/store/user/username/dir-name/somefile.root file:///t3home/username/local-dir/.
</pre>
   * *Copying a file from a local node to dCache:*
<pre>
$ uberftp file://$PWD/uftp.root gsiftp://t3se01.psi.ch/pnfs/psi.ch/cms/trivcat/store/user/username/dir
</pre>
   * *Copying a remote dCache dir to a local node:* Be aware that this is a serial copy, not a parallel one:
<pre>
$ uberftp %BLUE%t3se01.psi.ch%ENDCOLOR% '%BLUE%get -r %ENDCOLOR% /pnfs/psi.ch/cms/testing .'
</pre>
   * *Removing a file from dCache:*
<pre>
$ uberftp -rm gsiftp://t3se01.psi.ch/pnfs/psi.ch/cms//trivcat/store/user/username/filelocation
</pre>
   * *Erasing a whole remote (non-empty) dir with all its content:*
<pre>
$ uberftp t3se01.psi.ch '%RED%rm -r%ENDCOLOR% /pnfs/psi.ch/cms/trivcat/store/user/martinelli_f/%ORANGE%VHBBHeppyV12%ENDCOLOR%'
$ uberftp -rm -r gsiftp://t3se01.psi.ch/pnfs/psi.ch/cms/trivcat/store/user/username/dir-with-content
</pre>
or in debug mode:
<pre>
$ uberftp -debug 2 t3se01.psi.ch '%RED%rm -r%ENDCOLOR% /pnfs/psi.ch/cms/trivcat/store/user/martinelli_f/%ORANGE%VHBBHeppyV12%ENDCOLOR%'
</pre>
-->

---++ Setting data access permissions

Every user directory on the T3 SE, /pnfs/psi.ch/cms/trivcat/store/user/%BLUE%username%ENDCOLOR%, is created with write permission for the owner only; by default even your group members cannot alter your SE directory, as shown in the following example:
<pre>
$ srm-get-permissions srm://t3se01.psi.ch/pnfs/psi.ch/cms/trivcat/store/user/%BLUE%username%ENDCOLOR%
# file  : srm://t3se01.psi.ch/pnfs/psi.ch/cms/trivcat/store/user/%BLUE%username%ENDCOLOR%
# owner : 2980
owner:2980:R%GREEN%W%ENDCOLOR%X    <---- 2980 is the UID
user:2980:R%GREEN%W%ENDCOLOR%X
group:500:RX    <---- no group write ; 500 is the GID
other:RX
</pre>
If you need to create a =/pnfs= dir where the group members can also write and delete files, do the following:
<pre>
$ srmmkdir srm://t3se01.psi.ch/pnfs/psi.ch/cms/trivcat/store/user/username/%BLUE%TESTDIR%ENDCOLOR%
$ srm-get-permissions srm://t3se01.psi.ch/pnfs/psi.ch/cms/trivcat/store/user/username/%BLUE%TESTDIR%ENDCOLOR%
$ %RED%srm-set-permissions%ENDCOLOR% -type=ADD -group=R%GREEN%W%ENDCOLOR%X srm://t3se01.psi.ch/pnfs/psi.ch/cms/trivcat/store/user/username/%BLUE%TESTDIR%ENDCOLOR%
$ srm-get-permissions srm://t3se01.psi.ch/pnfs/psi.ch/cms/trivcat/store/user/username/%BLUE%TESTDIR%ENDCOLOR%
# file  : srm://t3se01.psi.ch/pnfs/psi.ch/cms/trivcat/store/user/username/%BLUE%TESTDIR%ENDCOLOR%
# owner : 2980
owner:2980:R%GREEN%W%ENDCOLOR%X
user:2980:R%GREEN%W%ENDCOLOR%X
group:500:R%GREEN%W%ENDCOLOR%X    <---- now your group members can write files and dirs
other:RX
</pre>

---++ Getting data from remote SEs to the T3 SE

---+++ Official datasets

For official datasets/blocks that are *registered* in CMS DBS you *must* use the [[HowToOrderData][PhEDEx system]].

---+++ Private datasets

The recommended way to transfer private datasets (non-CMSSW ntuples) between sites is the File Transfer System (FTS) ([[http://fts3-docs.web.cern.ch/fts3-docs/docs/cli/cli.html][documentation]]). In a nutshell, you need to prepare a file that contains lines of the form =protocol://source protocol://destination=. An example file is the following:
<pre>
gsiftp://storage01.lcg.cscs.ch//pnfs/lcg.cscs.ch/cms/trivcat/store/user/jpata/tth/Sep29_v1/ttHTobb_M125_13TeV_powheg_pythia8/Sep29_v1/161003_055207/0000/tree_973.root gsiftp://t3se01.psi.ch//pnfs/psi.ch/cms/trivcat/store/user/jpata/tth/Sep29_v1/ttHTobb_M125_13TeV_powheg_pythia8/Sep29_v1/161003_055207/0000/tree_973.root
gsiftp://storage01.lcg.cscs.ch//pnfs/lcg.cscs.ch/cms/trivcat/store/user/jpata/tth/Sep29_v1/ttHTobb_M125_13TeV_powheg_pythia8/Sep29_v1/161003_055207/0000/tree_981.root gsiftp://t3se01.psi.ch//pnfs/psi.ch/cms/trivcat/store/user/jpata/tth/Sep29_v1/ttHTobb_M125_13TeV_powheg_pythia8/Sep29_v1/161003_055207/0000/tree_981.root
</pre>
Then you can submit the transfer with:
<pre>
$ fts-transfer-submit -s https://fts3-pilot.cern.ch:8446 -f files.txt
a360e11e-ab3b-11e6-8fe7-02163e00a39b
</pre>
You will get back an ID string (as in the output above), which you can use to monitor your transfer at https://fts3-pilot.cern.ch:8449/fts3/ftsmon/ . The transfers proceed in parallel, and you can also specify resubmission options for failed jobs.
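Writing the transfer-list file by hand is tedious for many files. As a minimal sketch (the site prefixes, user directory, and =tree_*.root= names below are hypothetical placeholders, not real datasets), a small shell loop can generate it:

```shell
# Sketch: generate the FTS transfer list from a set of file names.
# SRC_PREFIX, DST_PREFIX and the tree_*.root names are made-up placeholders;
# substitute your own site prefixes and dataset paths.
SRC_PREFIX="gsiftp://storage01.lcg.cscs.ch//pnfs/lcg.cscs.ch/cms/trivcat/store/user/someuser/somedir"
DST_PREFIX="gsiftp://t3se01.psi.ch//pnfs/psi.ch/cms/trivcat/store/user/someuser/somedir"
> files.txt                      # start with an empty transfer list
for f in tree_1.root tree_2.root tree_3.root; do
    # one "source destination" pair per line, as fts-transfer-submit -f expects
    echo "${SRC_PREFIX}/${f} ${DST_PREFIX}/${f}" >> files.txt
done
cat files.txt
```

Each line pairs one source URL with its destination; =fts-transfer-submit -f files.txt= then submits the whole list as a single job.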
The site prefixes can typically be found in existing transfer logs on the grid or by inspecting the files in:
<pre>
/cvmfs/cms.cern.ch/SITECONF/T2_CH_CSCS/JobConfig/site-local-config.xml
/cvmfs/cms.cern.ch/SITECONF/T2_CH_CSCS/PhEDEx/storage.xml
</pre>
Some useful examples are below:
<pre>
gsiftp://storage01.lcg.cscs.ch//pnfs/lcg.cscs.ch/cms/trivcat
gsiftp://t3se01.psi.ch//pnfs/psi.ch/cms/trivcat
gsiftp://eoscmsftp.cern.ch//eos/cms/
</pre>

---++ Job stageout from other remote sites

You can try to stage out your CRAB3 job outputs directly at T3_CH_PSI, but if these transfers turn out to be too slow and/or unreliable, then stage out first at T2_CH_CSCS and afterwards copy your files to T3_CH_PSI with Leonardo Sala's [[https://twiki.cern.ch/twiki/bin/view/Main/LSDataReplica][data_replica.py]] or any other Grid tool able to transfer files in parallel between two sites.
<!--
---++ =/pnfs= dirs cleanup

---+++ T3_CH_PSI

Each T3 user *must* remove his/her old dirs from =/pnfs=; in order to quickly select and delete the unnecessary dirs, log in on a UI (this is crucial for the correct $USER resolution) and prepare the file =/scratch/$USER/recursive.rm.pnfs=:
<pre>
# Extract your /pnfs dirs and save them in /scratch/$USER/recursive.rm.pnfs
$ curl http://t3mon.psi.ch/PSIT3-custom/v_pnfs_top_dirs.txt 2> /dev/null | egrep $USER | awk {' print "uberftp t3se01.psi.ch \047rm -r "$15"\047"}' > /scratch/$USER/recursive.rm.pnfs

# Erase in /scratch/$USER/recursive.rm.pnfs all the /pnfs dirs that you want to PRESERVE !! All the remaining /pnfs dirs will be recursively DELETED !! There are NO backups for /pnfs files !!
$ vim /scratch/$USER/recursive.rm.pnfs

# Run the recursive deletions ; a CMS proxy is needed
$ source /scratch/$USER/recursive.rm.pnfs
</pre>
You might start with a small fragment of /scratch/$USER/recursive.rm.pnfs and check how it behaves before running a big deletion campaign.
---+++ T2_CH_CSCS

Regularly monitor your disk usage:
<pre>
curl http://ganglia.lcg.cscs.ch/ganglia/files_cms.html | grep $USER
</pre>
or use the [[https://wiki.chipp.ch/twiki/pub/CmsTier3/HowToAccessSe/T2_Storage.py.txt][T2_Storage.py]] notebook/Python script. In order to clean up, use uberftp as explained above.
-->

---++ [[BasicUnderstandingOfTheDCacheForAdvancedUser][Understanding dCache for advanced users]]
Topic revision: r142 - 2019-09-05 - NinaLoktionova