
How to work with the SE

SE clients

Storage data (based on dCache) can be found under the directory /pnfs and accessed using the dcap, xrootd and gridftp (gsiftp) protocols.

Every user has a directory on the T3 SE at /pnfs/psi.ch/cms/trivcat/store/user/username, with write permission granted only to the owner; by default even your group members cannot alter your SE directory, as shown in the following example:

$ srm-get-permissions srm://t3se01.psi.ch/pnfs/psi.ch/cms/trivcat/store/user/username
# file  : srm://t3se01.psi.ch/pnfs/psi.ch/cms/trivcat/store/user/username
# owner : 2980
owner:2980:RWX    <---- 2980 is the UID
group:500:RX   <---- no group write ; 500 is the GID

If you need to create a /pnfs directory where your group members can also write and delete files, proceed as follows:

$ srmmkdir srm://t3se01.psi.ch/pnfs/psi.ch/cms/trivcat/store/user/username/TESTDIR
$ srm-get-permissions srm://t3se01.psi.ch/pnfs/psi.ch/cms/trivcat/store/user/username/TESTDIR

$ srm-set-permissions -type=ADD -group=RWX srm://t3se01.psi.ch/pnfs/psi.ch/cms/trivcat/store/user/username/TESTDIR

$ srm-get-permissions srm://t3se01.psi.ch/pnfs/psi.ch/cms/trivcat/store/user/username/TESTDIR
# file  : srm://t3se01.psi.ch/pnfs/psi.ch/cms/trivcat/store/user/username/TESTDIR
# owner : 2980
group:500:RWX   <---- now your group members can write files and dirs  

On the t3ui* and t3wn* servers /pnfs is mounted read-only. You can use the common Linux commands cd, ls, find, du, stat (i.e. metadata-based commands that display the file list, file size, last access time, etc.), but not file-content commands (it is not possible to cat or grep a file). A couple of find /pnfs/psi.ch/cms/ examples:

  • find /pnfs/psi.ch/cms/ -atime +50 -iname '*root' -uid `id -u $USER`
  • find /pnfs/psi.ch/cms/ -atime +50 -type d -uid `id -u $USER`
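Note that the glob pattern passed to -iname should be quoted so that find, not the shell, expands it. A minimal local sketch of the same pattern, using a temporary directory as a stand-in for /pnfs (the file names are made up for the demo):

```shell
# Stand-in demo for the /pnfs find examples above: build a scratch tree,
# then select *.root files with a quoted glob.
tmp=$(mktemp -d)
mkdir -p "$tmp/store/user/demo"
touch "$tmp/store/user/demo/tree_1.root" "$tmp/store/user/demo/notes.txt"

# Quoting '*.root' matters: an unquoted *root could be expanded by the
# shell in the current directory before find ever sees it.
found=$(find "$tmp" -type f -iname '*.root')
echo "$found"

rm -rf "$tmp"
```

Only the .root file is printed; the .txt file does not match the pattern.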

To copy a file from the SE to a local disk, for instance to /scratch, use the dcap (valid without a grid certificate) or gsidcap protocols. These protocols are blocked towards the outside, so you cannot use them from a machine outside of PSI, like lxplus, to download files:

dccp dcap://t3se01.psi.ch:22125//pnfs/psi.ch/cms/testing/test100  /scratch/myfile

dccp gsidcap://t3se01.psi.ch:22128/pnfs/psi.ch/cms/testing/test100  /scratch/myfile

XROOTD usage examples

  • The T3 Xrootd LAN

xrdfs executed on a UI, against the Xrootd LAN service:

$ xrdfs t3dcachedb03.psi.ch ls -l -u //pnfs/psi.ch/cms/trivcat/store/user/$USER/
-rw- 2015-03-15 22:03:41  5356235878 root://
-rw- 2015-03-15 22:06:04      131870 root://
-rw- 2015-03-15 22:06:45  1580023632 root://

xrdcp executed on a UI, against the Xrootd LAN service:

$ xrdcp -d 1 root://t3dcachedb.psi.ch:1094///pnfs/psi.ch/cms/trivcat/store/user/$USER/ZllH.DiJetPt.Mar1.DY1JetsToLL_M-50_TuneZ2Star_8TeV-madgraph_procV2_mergeV1V2.root /dev/null -f 

  • The T3 Xrootd WAN

The read-write Xrootd service exposed to the Internet ("WAN") is reachable at root://t3se01.psi.ch:1094//

This service is intentionally NOT connected to the CMS AAA service cms-xrd-global.cern.ch because of the 2016 CMS policy that permanently excludes all T3 sites and dynamically excludes misbehaving T1/T2 sites. Nevertheless, root://t3se01.psi.ch:1094// is reachable from the Internet because, again by CMS AAA policy, it is connected to cms-xrd-transit.cern.ch, as shown by the following example:

$ xrdfs cms-xrd-transit.cern.ch locate /store/mc/RunIIFall15MiniAODv2/ZprimeToWW_narrow_M-3500_13TeV-madgraph/MINIAODSIM/PU25nsData2015v1_76X_mcRun2_asymptotic_v12-v1/00000/86A261F4-3BB8-E511-88EE-C81F66B73F37.root
[::]:1095 Server Read

$ host <address reported above>
... domain name pointer t3se01.psi.ch.

$ xrdcp --force  root://cms-xrd-transit.cern.ch//store/mc/RunIIFall15MiniAODv2/ZprimeToWW_narrow_M-3500_13TeV-madgraph/MINIAODSIM/PU25nsData2015v1_76X_mcRun2_asymptotic_v12-v1/00000/86A261F4-3BB8-E511-88EE-C81F66B73F37.root /dev/null   

ROOT examples

  • Reading a file in ROOT by xrootd
$ root -l
root [0] TFile *_file0 = TFile::Open("root://t3dcachedb03.psi.ch:1094//pnfs/psi.ch/cms/trivcat/store/user/leo/whatever.root") 
  • Reading a file in ROOT by dcap
$ root -l
root [0] TFile *_file0 = TFile::Open("dcap://t3se01.psi.ch:22125//pnfs/psi.ch/cms/trivcat/store/user/leo/whatever.root") 
  • Merging online two ROOT files by hadd and gsidcap
To merge online two ROOT files located at T3 you can use the ROOT tool hadd:
$ source

$ which hadd

$ hadd -f0
hadd Target file:
hadd Source file 1:
hadd Source file 2:
hadd Target path:

$ ll /pnfs/psi.ch/cms/trivcat/store/user/at3user/merged.root
-rw-r--r-- 1 at3user ethz-susy 87M Oct  3 16:41

Because the gsidcap protocol is usually offered only as a LAN protocol at a Tier-1/2/3 site, you are supposed to run hadd from a t3ui* server, not from lxplus or some other external UI; accordingly, the two ROOT files to be merged must be stored at T3, with the final result again stored at T3.
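The shape of such a hadd invocation can be sketched as follows; the user name and file names below are hypothetical placeholders, and the command is only assembled here (actually running it requires a t3ui* server and a valid proxy):

```shell
# Hypothetical sketch: merge two T3-resident ROOT files over gsidcap,
# writing the merged output back to the T3 SE. Paths are placeholders.
prefix="gsidcap://t3se01.psi.ch:22128/pnfs/psi.ch/cms/trivcat/store/user/auser"
cmd="hadd -f ${prefix}/merged.root ${prefix}/part1.root ${prefix}/part2.root"
echo "$cmd"
```

The -f flag forces overwriting of the target file if it already exists.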

gfal-* tools

If your jobs are still working fine with the previous lcg-* tools, you can keep using those for as long as they work; the T3 admins won't debug your lcg-* tool errors, though.


  • If you get the error gfal2.GError: Unable to open the /usr/lib64/gfal2-plugins//libgfal_plugin_http.so plugin specified in the plugin directory, failure : /usr/lib64/libdavix_copy.so.0: undefined symbol: _ZNK5Davix13RequestParams11getCopyModeEv
  • then use "env -i X509_USER_PROXY=~/.x509up_u`id -u` gfal-command XXX" or LD_LIBRARY_PATH=/usr/lib64:$LD_LIBRARY_PATH gfal-ls gsiftp://t3se01.psi.ch/pnfs/

In 2014 CERN released the gfal-* CLIs and APIs as its new standard toolset to interact with all the Grid SEs and their several Grid protocols. Since the gfal-* tools are designed to be multi-protocol, you can upload/download a file in several ways:

$ gfal-copy --force root://t3dcachedb03.psi.ch/pnfs/psi.ch/cms/t3-nagios/1MB-test-file_pool_t3fs14_cms_11 file:////$PWD/    
$ gfal-copy --force gsiftp://t3se01.psi.ch/pnfs/psi.ch/cms/t3-nagios/1MB-test-file_pool_t3fs14_cms_11 file:////$PWD/
$ gfal-copy --force srm://t3se01.psi.ch/pnfs/psi.ch/cms/t3-nagios/1MB-test-file_pool_t3fs14_cms_11 file:////$PWD/
$ gfal-copy --force gsidcap://t3se01.psi.ch/pnfs/psi.ch/cms/t3-nagios/1MB-test-file_pool_t3fs14_cms_11 file:////$PWD/ 
If you're in doubt about which protocol to use, use root://t3dcachedb03.psi.ch and ignore gsiftp, srm and gsidcap.
The following gfal-* tools are available both on the UI and on the WN servers: gfal-cat, gfal-copy, gfal-ls, gfal-mkdir, gfal-rm, gfal-save, gfal-sum, gfal-xattr.

A man page is available for each of them, e.g.:

$ man gfal-rm 

The following session shows the gfal-save and gfal-cat commands in action:

$ cat myfile
Hello T3
$ cat myfile | gfal-save root://t3dcachedb03.psi.ch//pnfs/psi.ch/cms/trivcat/store/user/auser/myfile
$ gfal-cat root://t3dcachedb03.psi.ch//pnfs/psi.ch/cms/trivcat/store/user/auser/myfile
Hello T3

Further examples :

$ gfal-copy root://t3dcachedb03.psi.ch//pnfs/psi.ch/cms/t3-nagios/1MB-test-file_pool_t3fs14_cms_11 file:/dev/null -f
Copying root://t3dcachedb03.psi.ch//pnfs/psi.ch/cms/t3-nagios/1MB-test-file_pool_t3fs14_cms_11   [DONE]  after 0s 

$ gfal-ls root://t3dcachedb03.psi.ch//pnfs/psi.ch/cms/trivcat/store/user -l
dr-xr-xr-x   0 0     0           512 Feb 21  2013 alschmid	
dr-xr-xr-x   0 0     0           512 Mar  8 14:31 amarini	
dr-xr-xr-x   0 0     0           512 May 12  2015 andis	

$ gfal-rm root://t3dcachedb03.psi.ch//pnfs/psi.ch/cms/trivcat/store/user/auser/myfile

uberftp examples

uberftp is a GridFTP interactive client.

  • Interactively accessing a GridFTP server
$ uberftp t3se01.psi.ch
220 GSI FTP door ready
200 User :globus-mapping: logged in
UberFTP (2.8)> cd /pnfs/psi.ch/cms/trivcat/store/user/martinelli_f
UberFTP (2.8)> ls
drwx------  1 martinelli_f martinelli_f          512 Nov 25  2014 sgejob-5939967
drwx------  1 martinelli_f martinelli_f          512 Nov 25  2014 sgejob-5939965
  • Listing remote directory or files
uberftp t3se01.psi.ch 'ls /pnfs/psi.ch/cms/trivcat/store'  
220 GSI FTP door ready
200 PASS command successful
drwx------  1  cmsuser  cmsuser  512 Apr 15 13:18  mc
drwx------  1  cmsuser  cmsuser  512 Aug 11  2009  relval
drwx------  1  cmsuser  cmsuser  512 Oct  2  2009  PhEDEx_LoadTest07
drwx------  1  cmsuser  cmsuser  512 Jun 17 12:19  data
drwx------  1  cmsuser  cmsuser  512 Jun  2 15:54  user
drwx------  1  cmsuser  cmsuser  512 May 10  2009  unmerged
  • Copying locally a remote file
uberftp t3se01.psi.ch 'get /pnfs/psi.ch/cms/testing/test100' 
  • Copying locally a remote dir
Be aware that this is a serial copy, not a parallel one:
uberftp t3se01.psi.ch 'get -r  /pnfs/psi.ch/cms/testing .' 
  • Erasing a remote dir
uberftp t3se01.psi.ch 'rm -r /pnfs/psi.ch/cms/trivcat/store/user/martinelli_f/VHBBHeppyV12'
or in debug mode :
uberftp  -debug 2 t3se01.psi.ch 'rm -r /pnfs/psi.ch/cms/trivcat/store/user/martinelli_f/VHBBHeppyV12'

Getting data from remote SEs to the T3 SE

Official datasets

For official data sets/blocks that are registered in CMS DBS you must use the PhEDEx system.

Private datasets

The recommended way to transfer private datasets (non-CMSSW ntuples) between sites is the File Transfer Service (FTS). In a nutshell, you need to prepare a file that contains lines like protocol://source protocol://destination.

An example file is the following:

gsiftp://storage01.lcg.cscs.ch//pnfs/lcg.cscs.ch/cms/trivcat/store/user/jpata/tth/Sep29_v1/ttHTobb_M125_13TeV_powheg_pythia8/Sep29_v1/161003_055207/0000/tree_973.root gsiftp://t3se01.psi.ch//pnfs/psi.ch/cms/trivcat/store/user/jpata/tth/Sep29_v1/ttHTobb_M125_13TeV_powheg_pythia8/Sep29_v1/161003_055207/0000/tree_973.root
gsiftp://storage01.lcg.cscs.ch//pnfs/lcg.cscs.ch/cms/trivcat/store/user/jpata/tth/Sep29_v1/ttHTobb_M125_13TeV_powheg_pythia8/Sep29_v1/161003_055207/0000/tree_981.root gsiftp://t3se01.psi.ch//pnfs/psi.ch/cms/trivcat/store/user/jpata/tth/Sep29_v1/ttHTobb_M125_13TeV_powheg_pythia8/Sep29_v1/161003_055207/0000/tree_981.root
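For more than a handful of files, such a transfer list is easier to generate than to type. A minimal sketch, assuming the files keep the same relative path at source and destination (the LFNs below are hypothetical):

```shell
# Sketch: build an FTS transfer list by pairing each source URL at CSCS
# with the same relative path at the PSI T3. The LFNs are placeholders.
src_prefix="gsiftp://storage01.lcg.cscs.ch//pnfs/lcg.cscs.ch/cms/trivcat"
dst_prefix="gsiftp://t3se01.psi.ch//pnfs/psi.ch/cms/trivcat"

printf '%s\n' \
  "/store/user/auser/demo/tree_1.root" \
  "/store/user/auser/demo/tree_2.root" |
while read -r lfn; do
  # one "source destination" pair per line, as FTS expects
  echo "${src_prefix}${lfn} ${dst_prefix}${lfn}"
done > files.txt

cat files.txt
```

The resulting files.txt can be fed directly to fts-transfer-submit as shown below.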

Then, you can submit the transfer with

$ fts-transfer-submit -s https://fts3-pilot.cern.ch:8446 -f files.txt

You will get back an ID string, which you can use to monitor your transfer at https://fts3-pilot.cern.ch:8449/fts3/ftsmon/#

The transfers will proceed in parallel; you can also specify resubmission options for failed jobs.

The site prefixes can typically be found out from existing transfer logs on the grid or by inspecting the files in


Some useful examples can be found in the attached T2_Storage.py.txt (see the Topic attachments below).


Job Stageout from other remote sites

You can try to stage out your CRAB3 job outputs directly to T3_CH_PSI, but if these transfers turn out to be too slow and/or unreliable, then stage out first at T2_CH_CSCS and afterwards copy your files to T3_CH_PSI with Leonardo Sala's data_replica.py or any other Grid tool able to transfer files in parallel between two sites.

Understanding dCache for advanced users

Topic attachments

  • T2_Storage.py.txt (1.6 K, 2017-01-30, JoosepPata)
Topic revision: r137 - 2018-11-22 - NinaLoktionova