
How to work with the Storage Element

SE clients

Storage data (based on dCache) are located under the directory /pnfs/psi.ch/cms/trivcat/store/user/username.
Data are accessible with the standard gfal2 (Grid File Access Library), xrdcp, and dcap (obsolete) utilities.

On login and compute nodes /pnfs is mounted read-only. Under /pnfs one can use the common Linux commands cd, ls, find, du, stat, i.e. metadata-based commands that display the file list, size, last access time, etc., but not file-content commands (it is not possible to cat or grep a file).
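
A few illustrative metadata queries that work on this read-only mount (assuming your user directory already exists under /pnfs):

# list files, show total size, and print metadata of your user directory
$ ls -l /pnfs/psi.ch/cms/trivcat/store/user/$USER/
$ du -sh /pnfs/psi.ch/cms/trivcat/store/user/$USER/
$ stat /pnfs/psi.ch/cms/trivcat/store/user/$USER/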

Here are examples of how to copy a file from dCache to a local machine and vice versa:

XROOTD LAN (local area network, for local access from UI and worker nodes)

xrdfs executed on a UI against the Xrootd LAN service:

$ xrdfs t3dcachedb03.psi.ch ls -l -u //pnfs/psi.ch/cms/trivcat/store/user/$USER/
...
-rw- 2015-03-15 22:03:41  5356235878 root://192.33.123.26:1094///pnfs/psi.ch/cms/trivcat/store/user/martinelli_f/xroot
-rw- 2015-03-15 22:06:04      131870 root://192.33.123.26:1094///pnfs/psi.ch/cms/trivcat/store/user/martinelli_f/xrootd.
-rw- 2015-03-15 22:06:45  1580023632 root://192.33.123.26:1094///pnfs/psi.ch/cms/trivcat/store/user/martinelli_f/ZllH.DiJetPt.Mar1.DY1JetsToLL_M-50_TuneZ2Star_8TeV-madgraph_procV2_mergeV1V2.root
...

xrdcp executed on a UI against the Xrootd LAN service:

$ xrdcp -d 1 root://t3dcachedb.psi.ch:1094///pnfs/psi.ch/cms/trivcat/store/user/$USER/ZllH.DiJetPt.Mar1.DY1JetsToLL_M-50_TuneZ2Star_8TeV-madgraph_procV2_mergeV1V2.root /dev/null -f 
[1.472GB/1.472GB][100%][==================================================][94.18MB/s]  
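
The LAN door also works in the write direction. A minimal sketch of uploading a local file into your user area (myfile.root is a placeholder name; this assumes you have write permission on your user directory):

# upload a local file to your dCache user area via the LAN door
$ xrdcp -f /scratch/myfile.root root://t3dcachedb03.psi.ch:1094//pnfs/psi.ch/cms/trivcat/store/user/$USER/myfile.root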

XROOTD WAN (wide area network, access from outside of Tier-3, stage-in / stage-out)

The read-write Xrootd service is reachable at root://t3se01.psi.ch:1094//

Do NOT use this service for local analysis jobs. The number of parallel transfers through this door is limited, because it should only be used for efficient WAN copies, i.e. transfers of large files at high bandwidth; too many such transfers could harm the availability of the Tier-3's small number of storage servers. If you use this door for your analysis jobs, many of them will get queued.

$ xrdfs cms-xrd-transit.cern.ch locate /store/mc/RunIIFall15MiniAODv2/ZprimeToWW_narrow_M-3500_13TeV-madgraph/MINIAODSIM/PU25nsData2015v1_76X_mcRun2_asymptotic_v12-v1/00000/86A261F4-3BB8-E511-88EE-C81F66B73F37.root
[::192.33.123.24]:1095 Server Read

$ host 192.33.123.24
24.123.33.192.in-addr.arpa domain name pointer t3se01.psi.ch.

$ xrdcp --force  root://cms-xrd-transit.cern.ch//store/mc/RunIIFall15MiniAODv2/ZprimeToWW_narrow_M-3500_13TeV-madgraph/MINIAODSIM/PU25nsData2015v1_76X_mcRun2_asymptotic_v12-v1/00000/86A261F4-3BB8-E511-88EE-C81F66B73F37.root /dev/null   
[32MB/32MB][100%][==================================================][32MB/s]  
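
For the stage-out direction mentioned in the heading above, a sketch of a WAN copy into the T3 from a remote machine (myfile.root is a placeholder; a valid grid proxy is assumed):

# stage-out: push a file into your T3 user area through the WAN door
$ xrdcp -f myfile.root root://t3se01.psi.ch:1094//pnfs/psi.ch/cms/trivcat/store/user/$USER/myfile.root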

ROOT examples

  • Reading a file in ROOT via xrootd
https://confluence.slac.stanford.edu/display/ds/Using+Xrootd+from+root
$ root -l
root [0] TFile *_file0 = TFile::Open("root://t3dcachedb03.psi.ch:1094//pnfs/psi.ch/cms/trivcat/store/user/leo/whatever.root")

GFAL2 examples

The gfal2 tools offer a wide range of utilities: gfal-cat, gfal-copy, gfal-ls, gfal-chmod, gfal-mkdir, gfal-rm, gfal-save, gfal-sum, gfal-xattr, each with a corresponding manual page, e.g. $ man gfal-rm .

Example usage

$ gfal-copy --force root://t3dcachedb03.psi.ch//pnfs/psi.ch/cms/t3-nagios/1MB-test-file_pool_t3fs14_cms_11 file://$PWD/

$ gfal-mkdir root://t3dcachedb03.psi.ch//pnfs/psi.ch/cms/trivcat/store/user/user_id/testdir
$ gfal-copy root://t3dcachedb03.psi.ch//pnfs/psi.ch/cms/t3-nagios/1MB-test-file_pool_t3fs14_cms_11 file:/dev/null -f
Copying root://t3dcachedb03.psi.ch//pnfs/psi.ch/cms/t3-nagios/1MB-test-file_pool_t3fs14_cms_11   [DONE]  after 0s 

$ gfal-ls -l root://t3dcachedb03.psi.ch//pnfs/psi.ch/cms/trivcat/store/user 
dr-xr-xr-x   0 0     0           512 Feb 21  2013 alschmid	
...
Remove a file from dCache:
$ gfal-rm root://t3dcachedb03.psi.ch//pnfs/psi.ch/cms/trivcat/store/user/auser/myfile

Erasing a whole remote (non-empty) directory recursively:

$ gfal-rm -r  root://t3dcachedb03.psi.ch//pnfs/psi.ch/cms/trivcat/store/user/auser/dir-name 

Action of the gfal-save and gfal-cat commands:

$ cat myfile
Hello T3
$ cat myfile | gfal-save root://t3dcachedb03.psi.ch//pnfs/psi.ch/cms/trivcat/store/user/auser/myfile
$ gfal-cat root://t3dcachedb03.psi.ch//pnfs/psi.ch/cms/trivcat/store/user/auser/myfile
Hello T3
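
The remaining utilities from the list above work analogously. For instance, a checksum query and an extended-attribute listing (illustrative, reusing the file from the example just above):

# ADLER32 checksum of the remote file
$ gfal-sum root://t3dcachedb03.psi.ch//pnfs/psi.ch/cms/trivcat/store/user/auser/myfile ADLER32
# list the file's extended attributes
$ gfal-xattr root://t3dcachedb03.psi.ch//pnfs/psi.ch/cms/trivcat/store/user/auser/myfile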

DCAP examples - OBSOLETE

Don't use this any more. The dcap protocol is obsolete and no longer supported at many dCache sites.

To copy a file from the SE to a local disk, for instance to /scratch, use the dcap (the only option for transferring data without a grid certificate) or gsidcap protocols.
These ports are blocked towards the outside, so you cannot use them from a machine outside of PSI, such as lxplus, to download files.

$ dccp dcap://t3se01.psi.ch:22125//pnfs/psi.ch/cms/testing/test100 /scratch/myfile

$ dccp gsidcap://t3se01.psi.ch:22128//pnfs/psi.ch/cms/testing/test100 /scratch/myfile

Getting data from remote SEs to the T3 SE

Official datasets

For official datasets/blocks that are registered in the CMS DBS you must use the Rucio system.
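
A minimal sketch of requesting a replica at the T3 with the Rucio client (assuming a configured CMS Rucio account and environment; the dataset DID below is illustrative, derived from the file used in the WAN example above):

# ask Rucio to place one copy of the dataset at T3_CH_PSI (DID is an assumption)
$ rucio add-rule cms:/ZprimeToWW_narrow_M-3500_13TeV-madgraph/RunIIFall15MiniAODv2-PU25nsData2015v1_76X_mcRun2_asymptotic_v12-v1/MINIAODSIM 1 T3_CH_PSI
# check the status of your transfer rules
$ rucio list-rules --account $RUCIO_ACCOUNT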

Job Stageout from other remote sites

You can try to stage out your CRAB3 job outputs directly to T3_CH_PSI, but if these transfers get too slow and/or unreliable, then stage out first to T2_CH_CSCS and afterwards copy your files to T3_CH_PSI; see the sketch below.
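
One possible way to do the second step is a gfal-copy between the two sites (a sketch only; the T2_CH_CSCS endpoint and path are assumptions and should be checked against the current site configuration, and myfile.root is a placeholder):

# copy a staged-out file from T2_CH_CSCS to your T3_CH_PSI user area (endpoints assumed)
$ gfal-copy root://storage01.lcg.cscs.ch//pnfs/lcg.cscs.ch/cms/trivcat/store/user/$USER/myfile.root root://t3se01.psi.ch:1094//pnfs/psi.ch/cms/trivcat/store/user/$USER/myfile.root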
