How to work with the SE

SE clients

Storage data (based on dCache) can be found under the directory /pnfs and accessed using the dcap, xrootd and gridftp (gsiftp) protocols.

Read-Only NFSv3 /pnfs

On the t3ui* and t3wn* servers the /pnfs namespace is mounted read-only, so the common Linux commands cd, ls, find, du, stat and so on can be used on it directly. A couple of find /pnfs/psi.ch/cms/ examples ( the glob is quoted so the shell doesn't expand it ) :
  • find /pnfs/psi.ch/cms/ -atime +50 -iname '*root' -uid `id -u $USER`
  • find /pnfs/psi.ch/cms/ -atime +50 -type d -uid `id -u $USER`
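
Other metadata commands work the same way on the read-only mount; a minimal sketch ( the path under your user dir is illustrative ) :

$ du -sh /pnfs/psi.ch/cms/trivcat/store/user/$USER/      # disk usage; metadata only, so this is cheap
$ stat /pnfs/psi.ch/cms/trivcat/store/user/$USER/somefile.root    # works; cat/grep on the same file would not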

Note that only metadata-based commands will work on /pnfs ( e.g. displaying the file list, file size, last access time, etc. ) but no file-content commands ( it is not possible to cat or grep a file ). There is however a special command, dccp, to copy a file to a local file system like /scratch. Use it like this :

dccp /pnfs/psi.ch/cms/trivcat/store/user/path/to/somefile.root /scratch/$USER/

XROOTD

The T3 SE is accessible by several Grid tools. This is flexible but it's also a source of confusion, so if you don't have a reason to use more than one tool or protocol then always use root://t3dcachedb.psi.ch:1094// and the tools xrdfs and xrdcp, as in the sketch below. You might follow the XROOTD documentation in the larger CMS AAA context.
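
A minimal xrdfs/xrdcp sketch ( the file name under your user dir is illustrative ) :

$ xrdfs root://t3dcachedb.psi.ch:1094 ls /pnfs/psi.ch/cms/trivcat/store/user/$USER/
$ xrdcp root://t3dcachedb.psi.ch:1094//pnfs/psi.ch/cms/trivcat/store/user/$USER/somefile.root /scratch/$USER/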

The T3 Xrootd WAN service - I/O queue xrootd

  • It's a Read-Write Xrootd service exposed on Internet ( "WAN" ) reachable by root://t3se01.psi.ch:1094//
  • MAX 4 active connections are allowed in each of its dedicated dCache I/O queues xrootd
  • Only the /pnfs/psi.ch/cms/trivcat/ namespace subset is available, but in practical terms this is not a limitation.
  • Be aware that this service is intentionally NOT connected to the CMS AAA cms-xrd-global.cern.ch service: the 2016 CMS policy permanently excludes all the T3 sites and dynamically excludes the misbehaving T1/T2 sites. root://t3se01.psi.ch:1094// is nevertheless reachable from the Internet because, again by CMS AAA policy, it is connected to cms-xrd-transit.cern.ch, as shown by the following example :
$ xrdfs cms-xrd-transit.cern.ch locate /store/mc/RunIIFall15MiniAODv2/ZprimeToWW_narrow_M-3500_13TeV-madgraph/MINIAODSIM/PU25nsData2015v1_76X_mcRun2_asymptotic_v12-v1/00000/86A261F4-3BB8-E511-88EE-C81F66B73F37.root
[::192.33.123.24]:1095 Server Read

$ host 192.33.123.24
24.123.33.192.in-addr.arpa domain name pointer t3se01.psi.ch.

$ xrdcp --force  root://cms-xrd-transit.cern.ch//store/mc/RunIIFall15MiniAODv2/ZprimeToWW_narrow_M-3500_13TeV-madgraph/MINIAODSIM/PU25nsData2015v1_76X_mcRun2_asymptotic_v12-v1/00000/86A261F4-3BB8-E511-88EE-C81F66B73F37.root /dev/null   
[32MB/32MB][100%][==================================================][32MB/s]  
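
Since this WAN service is reachable from the Internet, you can also fetch a T3 file from an external machine ( e.g. lxplus ) by addressing t3se01 directly; a minimal sketch, with an illustrative file name ( replace $USER with your T3 account name if it differs ) :

$ xrdcp root://t3se01.psi.ch:1094//pnfs/psi.ch/cms/trivcat/store/user/$USER/somefile.root .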

The T3 Xrootd LAN service - I/O queue regular ( DEFAULT service to be used )

  • It's a Read-Write Xrootd service not exposed to the Internet ( "LAN" ), reachable by root://t3dcachedb.psi.ch:1094//
  • MAX 100 active connections are allowed in each of the dCache I/O queues regular, in constant competition with the dcap and gsidcap Active/Max/Queued connections.
  • The full T3 /pnfs namespace is available.

xrdfs executed on a UI in the Xrootd LAN service case :

$ xrdfs t3dcachedb03.psi.ch ls -l -u //pnfs/psi.ch/cms/trivcat/store/user/$USER/
...
-rw- 2015-03-15 22:03:41  5356235878 root://192.33.123.26:1094///pnfs/psi.ch/cms/trivcat/store/user/martinelli_f/xroot
-rw- 2015-03-15 22:06:04      131870 root://192.33.123.26:1094///pnfs/psi.ch/cms/trivcat/store/user/martinelli_f/xrootd.
-rw- 2015-03-15 22:06:45  1580023632 root://192.33.123.26:1094///pnfs/psi.ch/cms/trivcat/store/user/martinelli_f/ZllH.DiJetPt.Mar1.DY1JetsToLL_M-50_TuneZ2Star_8TeV-madgraph_procV2_mergeV1V2.root
...

xrdcp executed on a UI in the Xrootd LAN service case :

$ xrdcp -d 1 root://t3dcachedb.psi.ch:1094///pnfs/psi.ch/cms/trivcat/store/user/$USER/ZllH.DiJetPt.Mar1.DY1JetsToLL_M-50_TuneZ2Star_8TeV-madgraph_procV2_mergeV1V2.root /dev/null -f 
[1.472GB/1.472GB][100%][==================================================][94.18MB/s]  
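
Since the LAN service is Read-Write, the same endpoint also accepts uploads; a minimal sketch, with an illustrative source and destination name :

$ xrdcp /scratch/$USER/somefile.root root://t3dcachedb.psi.ch:1094///pnfs/psi.ch/cms/trivcat/store/user/$USER/somefile.root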

dcap and gsidcap - I/O queue regular ( legacy tools )

dcap and gsidcap are fast methods to copy a file from the SE to a local disk on the T3. The protocol is blocked towards the outside, so you cannot use it from a machine outside of PSI ( like lxplus ) to download files.
dccp dcap://t3se01.psi.ch:22125//pnfs/psi.ch/cms/testing/test100  /tmp/myfile
You can't alter /pnfs by dcap; to modify /pnfs use gsidcap instead :

dccp gsidcap://t3se01.psi.ch:22128/pnfs/psi.ch/cms/testing/test100  /tmp/myfile
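
gsidcap also allows writing; a minimal upload sketch ( the destination file name is illustrative ) :

$ dccp /scratch/$USER/myfile.root gsidcap://t3se01.psi.ch:22128/pnfs/psi.ch/cms/trivcat/store/user/$USER/myfile.root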

ROOT

Using the standalone ROOT installations ( legacy )

See HowToWorkInCmsEnv#Using_StandAlone_ROOT_by_swshare

Reading a file in ROOT by xrootd - I/O queue regular

https://confluence.slac.stanford.edu/display/ds/Using+Xrootd+from+root
$ root -l
root [0] TFile *_file0 = TFile::Open("root://t3dcachedb03.psi.ch:1094//pnfs/psi.ch/cms/trivcat/store/user/leo/whatever.root")

Reading a file in ROOT by dcap - I/O queue regular ( legacy )

$ root -l
root [0] TFile *_file0 = TFile::Open("dcap://t3se01.psi.ch:22125//pnfs/psi.ch/cms/trivcat/store/user/leo/whatever.root")

Merging online two ROOT files by hadd and gsidcap - I/O queue regular

To merge online two ROOT files located at T3 you can use the ROOT tool hadd:
$ source /swshare/ROOT/root_v5.34.18_slc6_amd64_py26_pythia6/bin/thisroot.sh

$ which hadd
/swshare/ROOT/root_v5.34.18_slc6_amd64_py26_pythia6/bin/hadd

$ hadd -f0 \
  gsidcap://t3se01:22128/pnfs/psi.ch/cms/trivcat/store/user/at3user/merged.root \
  gsidcap://t3se01:22128/pnfs/psi.ch/cms/trivcat/store/user/at3user/babies/QCD-Pt300to475/QCD_Pt300to470_PU_S14_POSTLS170/treeProducerSusyFullHad_tree_12.root \
  gsidcap://t3se01:22128/pnfs/psi.ch/cms/trivcat/store/user/at3user/babies/QCD-Pt300to475/QCD_Pt300to470_PU_S14_POSTLS170/treeProducerSusyFullHad_tree_13.root
hadd Target file:
gsidcap://t3se01:22128/pnfs/psi.ch/cms/trivcat/store/user/at3user/merged.root
hadd Source file 1:
gsidcap://t3se01:22128/pnfs/psi.ch/cms/trivcat/store/user/at3user/babies/QCD-Pt300to475/QCD_Pt300to470_PU_S14_POSTLS170/treeProducerSusyFullHad_tree_12.root
hadd Source file 2:
gsidcap://t3se01:22128/pnfs/psi.ch/cms/trivcat/store/user/at3user/babies/QCD-Pt300to475/QCD_Pt300to470_PU_S14_POSTLS170/treeProducerSusyFullHad_tree_13.root
hadd Target path:
gsidcap://t3se01:22128/pnfs/psi.ch/cms/trivcat/store/user/at3user/merged.root:/

$ ll /pnfs/psi.ch/cms/trivcat/store/user/at3user/merged.root
-rw-r--r-- 1 at3user ethz-susy 87M Oct  3 16:41
/pnfs/psi.ch/cms/trivcat/store/user/at3user/merged.root

Because the gsidcap protocol is usually offered just as a LAN protocol at a Tier 1/2/3, you're supposed to run hadd from a t3ui* server, not from lxplus or some external UI; accordingly, the two ROOT files to merge must be stored at T3, and the merged result will again be stored at T3.

gfal-* tools

If your jobs are still working fine with the previous lcg-* tools then you can keep using those until they stop working; the T3 Admins won't debug your lcg-* tools errors though.

Troubleshooting:

  • gfal2.GError: Unable to open the /usr/lib64/gfal2-plugins//libgfal_plugin_http.so plugin specified in the plugin directory, failure : /usr/lib64/libdavix_copy.so.0: undefined symbol: _ZNK5Davix13RequestParams11getCopyModeEv
  • Use "env -i X509_USER_PROXY=~/.x509up_u`id -u` gfal-command XXX" or LD_LIBRARY_PATH=/usr/lib64:$LD_LIBRARY_PATH gfal-ls gsiftp://t3se01.psi.ch/pnfs/

In 2014 CERN released the gfal-* CLIs and APIs as its new standard toolset to interact with all the Grid SEs and their several Grid protocols; there is a talk about that. Since the gfal-* tools are designed to be multi-protocol, you can upload/download a file in several ways :

$ gfal-copy --force root://t3dcachedb03.psi.ch/pnfs/psi.ch/cms/t3-nagios/1MB-test-file_pool_t3fs14_cms_11 file:////$PWD/    
$ gfal-copy --force gsiftp://t3se01.psi.ch/pnfs/psi.ch/cms/t3-nagios/1MB-test-file_pool_t3fs14_cms_11 file:////$PWD/
$ gfal-copy --force srm://t3se01.psi.ch/pnfs/psi.ch/cms/t3-nagios/1MB-test-file_pool_t3fs14_cms_11 file:////$PWD/
$ gfal-copy --force gsidcap://t3se01.psi.ch/pnfs/psi.ch/cms/t3-nagios/1MB-test-file_pool_t3fs14_cms_11 file:////$PWD/ 
If you're in doubt about what to use, then use root://t3dcachedb03.psi.ch and ignore gsiftp, srm and gsidcap.
The gfal-* tools are available both on the UI and on the WN servers ( you don't have to specify /usr/bin/ ):
$ /usr/bin/gfal-cat
$ /usr/bin/gfal-copy
$ /usr/bin/gfal-ls
$ /usr/bin/gfal-mkdir
$ /usr/bin/gfal-rm
$ /usr/bin/gfal-save
$ /usr/bin/gfal-sum
$ /usr/bin/gfal-xattr

A man page is available for each of them; e.g. : $ man gfal-rm
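
A couple of quick sketches ( the paths are illustrative ) :

$ gfal-mkdir root://t3dcachedb03.psi.ch//pnfs/psi.ch/cms/trivcat/store/user/$USER/newdir
$ gfal-sum root://t3dcachedb03.psi.ch//pnfs/psi.ch/cms/trivcat/store/user/$USER/myfile ADLER32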

The following session shows in action the gfal-save and gfal-cat commands :

$ cat myfile
Hello T3
$ cat myfile | gfal-save root://t3dcachedb03.psi.ch//pnfs/psi.ch/cms/trivcat/store/user/auser/myfile
$ gfal-cat root://t3dcachedb03.psi.ch//pnfs/psi.ch/cms/trivcat/store/user/auser/myfile
Hello T3

Further examples :

$ gfal-copy root://t3dcachedb03.psi.ch//pnfs/psi.ch/cms/t3-nagios/1MB-test-file_pool_t3fs14_cms_11 file:/dev/null -f
Copying root://t3dcachedb03.psi.ch//pnfs/psi.ch/cms/t3-nagios/1MB-test-file_pool_t3fs14_cms_11   [DONE]  after 0s 

$ gfal-ls root://t3dcachedb03.psi.ch//pnfs/psi.ch/cms/trivcat/store/user -l
dr-xr-xr-x   0 0     0           512 Feb 21  2013 alschmid	
dr-xr-xr-x   0 0     0           512 Mar  8 14:31 amarini	
dr-xr-xr-x   0 0     0           512 May 12  2015 andis	
...

$ gfal-rm root://t3dcachedb03.psi.ch//pnfs/psi.ch/cms/trivcat/store/user/auser/myfile

gfalFS

The tool gfalFS allows you to mount a GridFTP server, or an SRM server, as a local dir; it's useful to browse and remove dirs on that GridFTP server, or to remotely access a log file without downloading it.

Where to work on your t3ui :

$ pwd
/scratch/martinelli_f <---- use your account name

Making a local dir to mount the GridFTP directory :

$ mkdir t3

Mounting the GridFTP directory :

$ gfalFS t3 gsiftp://t3se01.psi.ch/pnfs/psi.ch/cms/trivcat/store/user/martinelli_f   <---- use your account name instead

Testing if gfalFS is working :

$ ls -l t3

Removing a file <--- pay the greatest attention when you're deleting !!! The PSI T3 will protect your files, but CSCS and all the other CMS Grid centres are very permissive about file deletions !

$ rm -f t3/sgejob-5939967/mybigfile
$ echo $?
0

Uploading a local file into the GridFTP directory by gfalFS :

$ cp /etc/hosts t3/

Transparent remote I/O; don't do this for files >1GB, but it's useful for log files :

$ cat t3/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

Deleting several GridFTP directories and files : <--- pay the greatest attention when you delete !!!

$ rm -rf t3/sgejob-59399*
$ echo $?
0

Unmounting the GridFTP directory :

$ gfalFS_umount -z t3
$ echo $?
0

The same procedure as above, but executed against the CSCS T2 :

$ pwd
/scratch/martinelli_f   <---- use your account name

$ mkdir t2

$ gfalFS t2 gsiftp://storage01.lcg.cscs.ch//pnfs/lcg.cscs.ch/cms/trivcat/store/user/$USER   

$ ls -l t2

...

uberftp - I/O queue wan

uberftp is a GridFTP interactive client :

Interactively accessing a GridFTP server

$ uberftp t3se01.psi.ch
220 GSI FTP door ready
200 User :globus-mapping: logged in
UberFTP (2.8)> cd /pnfs/psi.ch/cms/trivcat/store/user/martinelli_f
UberFTP (2.8)> ls
drwx------  1 martinelli_f martinelli_f          512 Nov 25  2014 sgejob-5939967
drwx------  1 martinelli_f martinelli_f          512 Nov 25  2014 sgejob-5939965
...

Listing remote directory or files

uberftp t3se01.psi.ch 'ls /pnfs/psi.ch/cms/trivcat/store'  
220 GSI FTP door ready
200 PASS command successful
drwx------  1  cmsuser  cmsuser  512 Apr 15 13:18  mc
drwx------  1  cmsuser  cmsuser  512 Aug 11  2009  relval
drwx------  1  cmsuser  cmsuser  512 Oct  2  2009  PhEDEx_LoadTest07
drwx------  1  cmsuser  cmsuser  512 Jun 17 12:19  data
drwx------  1  cmsuser  cmsuser  512 Jun  2 15:54  user
drwx------  1  cmsuser  cmsuser  512 May 10  2009  unmerged

Copying locally a remote file

uberftp t3se01.psi.ch 'get /pnfs/psi.ch/cms/testing/test100' 

Copying locally a remote dir

Be aware that this is a serial copy, not a parallel one :
uberftp t3se01.psi.ch 'get -r  /pnfs/psi.ch/cms/testing .' 

Erasing a remote dir

uberftp t3se01.psi.ch 'rm -r /pnfs/psi.ch/cms/trivcat/store/user/martinelli_f/VHBBHeppyV12'
or in debug mode :
uberftp  -debug 2 t3se01.psi.ch 'rm -r /pnfs/psi.ch/cms/trivcat/store/user/martinelli_f/VHBBHeppyV12'

globus-url-copy - I/O queue wan

Copying a dir between two GridFTP server - serial method

The globus-url-copy tool can copy a file, several files, or recursively ( but serially ) a whole dir from one GridFTP server to another; the file transfer will occur directly between the two GridFTP servers, so you'll have to know the absolute paths both on the sender and on the receiver side. In the next example we're going to copy the dir :
  • gsiftp://stormgf2.pi.infn.it/gpfs/ddn/srm/cms/store/user/arizzi/VHBBHeppyV12/
  • into :
  • gsiftp://t3se01.psi.ch/pnfs/psi.ch/cms/trivcat/store/user/martinelli_f/VHBBHeppyV12/
The path prefix /gpfs/ddn/srm/cms/ was discovered by a uberftp gsiftp://stormgf2.pi.infn.it session; if you're in doubt, contact the T3 administrators and we'll help you identify this kind of prefix. At T3 and at T2 the absolute paths always start with /pnfs/psi.ch/cms and /pnfs/lcg.cscs.ch/cms respectively.
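
A sketch of such a discovery session ( output elided ) :

$ uberftp gsiftp://stormgf2.pi.infn.it
UberFTP (2.8)> ls /gpfs/ddn/srm/cms/store/user
...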

the dir copy example :

$ globus-url-copy -continue-on-error -rst -nodcau -fast -vb -v -cd -r gsiftp://stormgf2.pi.infn.it/gpfs/ddn/srm/cms/store/user/arizzi/VHBBHeppyV12/ gsiftp://t3se01.psi.ch/pnfs/psi.ch/cms/trivcat/store/user/martinelli_f/VHBBHeppyV12/

Source: gsiftp://stormgf2.pi.infn.it/gpfs/ddn/srm/cms/store/user/arizzi/VHBBHeppyV12/
Dest:   gsiftp://t3se01.psi.ch/pnfs/psi.ch/cms/trivcat/store/user/martinelli_f/VHBBHeppyV12/
  DYJetsToLL_M-50_HT-100to200_TuneCUETP8M1_13TeV-madgraphMLM-pythia8/

Source: gsiftp://stormgf2.pi.infn.it/gpfs/ddn/srm/cms/store/user/arizzi/VHBBHeppyV12/
Dest:   gsiftp://t3se01.psi.ch/pnfs/psi.ch/cms/trivcat/store/user/martinelli_f/VHBBHeppyV12/
  DYJetsToLL_M-50_HT-200to400_TuneCUETP8M1_13TeV-madgraphMLM-pythia8/
...

Copying a dir between two GridFTP servers by GNU parallel

The tools globus-url-copy, uberftp and GNU parallel can be used together to copy a dir between two GridFTP servers in parallel; in this example a C.Galloni /pnfs dir is copied into a MDefranc /pnfs dir. No files will be routed through the server running the globus-url-copy commands itself ( e.g. your UI, or a WN ). Furthermore, since in a Grid environment each GridFTP server often acts as a transparent proxy for more than one GridFTP server, the copies will occur between a 2x2 matrix of GridFTP servers; a bottleneck in the parallelism is more likely to come from the limited bandwidth available between the two data centres than from the total number of GridFTP servers involved. It's not compulsory, but we recommend running all the globus-url-copy commands in a screen -L session, to avoid the copies getting interrupted just because of a connection cut to the server where you've started them; anyway, it's safe to repeat the same globus-url-copy commands over and over again.
Copying a T3 /pnfs dir into another T3 /pnfs dir ( a use case requested by users just once )
First of all we'll generate the globus-url-copy commands to be passed as input to GNU parallel and save them into the file tobecopied; afterwards we'll start them in parallel. We can arbitrarily choose how many parallel globus-url-copy commands to run by the GNU parallel parameter -j N; each globus-url-copy command will consume a CPU core on the server on which you're running it, so don't set a -j parameter greater than the number of CPU cores available there :
$ uberftp -ls -r gsiftp://t3se01.psi.ch/pnfs/psi.ch/cms/trivcat/store/user/cgalloni//RunII/Ntuple_080316/ | grep .root$  | awk {' print "globus-url-copy -v -cd gsiftp://t3se01.psi.ch/"$8" gsiftp://t3se01.psi.ch/"$8}' | sed 's/cgalloni/mdefranc/2' > tobecopied
$ # 10 parallel globus-url-copy 
$ cat tobecopied | parallel -j 10       
Copying a T2 /pnfs dir into a T3 /pnfs dir ( recurring use case )
Because this time the source site is different from the destination site, we can increase the GNU parallel parameter from -j 10 to, for instance, -j 30; for a copy from a T1/T2 to a T2 you might set -j 50. Regrettably it's impossible for an ordinary user to compute the correct -j. Again, you might want to start the copies in a screen -L session, but it's not compulsory.
$ uberftp -ls -r gsiftp://storage01.lcg.cscs.ch//pnfs/lcg.cscs.ch/cms/trivcat/store/user/cgalloni/Ntuple_290216/WJetsToQQ_HT-600ToInf_TuneCUETP8M1_13TeV-madgraphMLM-pythia8/ | grep .root$  | awk {' print "globus-url-copy -v -cd gsiftp://storage01.lcg.cscs.ch//"$8" gsiftp://t3se01.psi.ch/"$8}' | sed 's/cgalloni/mdefranc/2' | sed 's/lcg.cscs.ch/psi.ch/3' > tobecopied
$ # 30 parallel globus-url-copy 
$ cat tobecopied | parallel -j 30

Getting data from remote SEs to the T3 SE

Official datasets

For official data sets/blocks that are registered in CMS DBS you must use the PhEDEx system.

Private datasets

The recommended way to transfer private datasets (non-CMSSW ntuples) between sites is the File Transfer System (FTS); see its documentation. In a nutshell, you need to prepare a file that contains lines like protocol://source protocol://destination.

An example file is the following:

gsiftp://storage01.lcg.cscs.ch//pnfs/lcg.cscs.ch/cms/trivcat/store/user/jpata/tth/Sep29_v1/ttHTobb_M125_13TeV_powheg_pythia8/Sep29_v1/161003_055207/0000/tree_973.root gsiftp://t3se01.psi.ch//pnfs/psi.ch/cms/trivcat/store/user/jpata/tth/Sep29_v1/ttHTobb_M125_13TeV_powheg_pythia8/Sep29_v1/161003_055207/0000/tree_973.root
gsiftp://storage01.lcg.cscs.ch//pnfs/lcg.cscs.ch/cms/trivcat/store/user/jpata/tth/Sep29_v1/ttHTobb_M125_13TeV_powheg_pythia8/Sep29_v1/161003_055207/0000/tree_981.root gsiftp://t3se01.psi.ch//pnfs/psi.ch/cms/trivcat/store/user/jpata/tth/Sep29_v1/ttHTobb_M125_13TeV_powheg_pythia8/Sep29_v1/161003_055207/0000/tree_981.root

Then, you can submit the transfer with

$ fts-transfer-submit -s https://fts3-pilot.cern.ch:8446 -f files.txt
a360e11e-ab3b-11e6-8fe7-02163e00a39b

You will get back an ID string, which you can use to monitor your transfer on the site https://fts3-pilot.cern.ch:8449/fts3/ftsmon/#

The transfer will proceed in parallel; you can also specify resubmission options for failed jobs.
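
You can also poll the job state from the command line with fts-transfer-status, reusing the ID returned at submission; a sketch :

$ fts-transfer-status -s https://fts3-pilot.cern.ch:8446 a360e11e-ab3b-11e6-8fe7-02163e00a39b
# prints the current job state ( e.g. ACTIVE or FINISHED )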

The site prefixes can typically be found out from existing transfer logs on the grid or by inspecting the files in

/cvmfs/cms.cern.ch/SITECONF/T2_CH_CSCS/JobConfig/site-local-config.xml
/cvmfs/cms.cern.ch/SITECONF/T2_CH_CSCS/PhEDEx/storage.xml

Some useful examples are below:

gsiftp://storage01.lcg.cscs.ch//pnfs/lcg.cscs.ch/cms/trivcat
gsiftp://t3se01.psi.ch//pnfs/psi.ch/cms/trivcat
gsiftp://eoscmsftp.cern.ch//eos/cms/

Job Stageout from other remote sites

You can try to stage out your CRAB3 job outputs directly at T3_CH_PSI, but if these transfers get too slow and/or unreliable then stage out first at T2_CH_CSCS and afterwards copy your files to T3_CH_PSI with Leonardo Sala's data_replica.py or any other Grid tool able to transfer files in parallel between two sites.

/pnfs dirs and files permissions

At the time you requested a T3 account you provided your X509 DN, namely a string like /DC=ch/DC=cern/OU=Organic Units/OU=Users/CN=accountname/CN=706134/CN=Name Surname, which you can always retrieve by running on a UI :
$ voms-proxy-info | grep identity
identity  : /DC=com/DC=quovadisglobal/DC=grid/DC=switch/DC=users/C=CH/O=Paul-Scherrer-Institut (PSI)/CN=Fabio Martinelli

We've also created your SE dir /pnfs/psi.ch/cms/trivcat/store/user/accountname and granted the write permission just to you; this prevents other users from deleting your files. By default even your group members won't be able to alter your own SE dir, as shown in the following example :

$ srm-get-permissions srm://t3se01.psi.ch/pnfs/psi.ch/cms/trivcat/store/user/username
# file  : srm://t3se01.psi.ch/pnfs/psi.ch/cms/trivcat/store/user/username
# owner : 2980
owner:2980:RWX    <---- 2980 is the UID
user:2980:RWX
group:500:RX   <---- no group write ; 500 is the GID
other:RX        

The group write permission is switched on for the group dirs instead :

$ srm-get-permissions srm://t3se01.psi.ch/pnfs/psi.ch/cms/trivcat/store/user/b-physics
# file  : srm://t3se01.psi.ch/pnfs/psi.ch/cms/trivcat/store/user/b-physics
# owner : 501
owner:501:RWX
user:501:RWX
group:500:RWX  <---- each member of the group can upload and delete files; they can also create new subdirs
other:RX         

If you need to create a /pnfs dir where the group members can also write and delete files, you can proceed in this way :

$ srmmkdir srm://t3se01.psi.ch/pnfs/psi.ch/cms/trivcat/store/user/username/TESTDIR
$ srm-get-permissions srm://t3se01.psi.ch/pnfs/psi.ch/cms/trivcat/store/user/username/TESTDIR
# file  : srm://t3se01.psi.ch/pnfs/psi.ch/cms/trivcat/store/user/username/TESTDIR
# owner : 2980
owner:2980:RWX
user:2980:RWX
group:500:RX   <----no group write, yet
other:RX

$ srm-set-permissions -type=ADD -group=RWX srm://t3se01.psi.ch/pnfs/psi.ch/cms/trivcat/store/user/username/TESTDIR

$ srm-get-permissions srm://t3se01.psi.ch/pnfs/psi.ch/cms/trivcat/store/user/username/TESTDIR
# file  : srm://t3se01.psi.ch/pnfs/psi.ch/cms/trivcat/store/user/username/TESTDIR
# owner : 2980
owner:2980:RWX
user:2980:RWX
group:500:RWX   <---- now your group members can write files and dirs  
other:RX

For obvious security reasons no user can create or remove dirs and files inside the main user dir /pnfs/psi.ch/cms/trivcat/store/user ; e.g. this srmmkdir command correctly fails :

$ srmmkdir srm://t3se01//pnfs/psi.ch/cms/trivcat/store/user/TESTDIR
Return code: SRM_AUTHORIZATION_FAILURE
Explanation: srm://t3se01//pnfs/psi.ch/cms/trivcat/store/user/TESTDIR : Permission denied

/pnfs dirs cleanup

T3_CH_PSI

Each T3 user must remove his/her old dirs from /pnfs. In order to quickly select and delete the unnecessary dirs, log in on a UI ( it's crucial for the correct $USER resolution ) and prepare the file /scratch/$USER/recursive.rm.pnfs :
# Extract your /pnfs dirs and save them in /scratch/$USER/recursive.rm.pnfs
$ curl http://t3mon.psi.ch/PSIT3-custom/v_pnfs_top_dirs.txt 2> /dev/null | egrep $USER | awk {' print "uberftp t3se01.psi.ch \047rm -r "$15"\047"}' > /scratch/$USER/recursive.rm.pnfs

# Erase in /scratch/$USER/recursive.rm.pnfs all the /pnfs dirs that you want to PRESERVE !! All the remaining /pnfs dirs will be recursively DELETED !! There are NO backups for /pnfs files !!
$ vim /scratch/$USER/recursive.rm.pnfs

# Run the recursive deletions ; a CMS proxy is needed 
$ source /scratch/$USER/recursive.rm.pnfs
You might start with a small fragment of /scratch/$USER/recursive.rm.pnfs and check how it behaves before running a big deletion campaign.
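
For instance, a cautious sketch that runs only the first few deletions :

$ head -3 /scratch/$USER/recursive.rm.pnfs > /scratch/$USER/recursive.rm.pnfs.test
$ source /scratch/$USER/recursive.rm.pnfs.test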

T2_CH_CSCS

Regularly monitor your disk usage

curl http://ganglia.lcg.cscs.ch/ganglia/files_cms.html | grep $USER
or using the T2_Storage.py notebook/python script.

In order to clean up, use uberftp as explained above.
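
E.g. a sketch for removing an obsolete CSCS dir ( the path is illustrative; replace $USER with your CSCS account name if it differs ) :

$ uberftp storage01.lcg.cscs.ch "rm -r /pnfs/lcg.cscs.ch/cms/trivcat/store/user/$USER/OldNtuples"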

Understanding dCache for advanced users

Topic attachments
  • T2_Storage.py.txt ( 1.6 K, 2017-01-30, JoosepPata )