Mass production of ntuples on data and (min bias) MC
Datasets
Determine the dataset (for a given run) that you want to run over:
- Note:
- dbs is available after cmsenv
- More details on datasets, skims, and versions can be found in Collisions2010Analysis
dbs search --query='find dataset where run=132440'
List of files and events for each dataset
- Note:
- You MUST do this on t3ui01; dbs on t3ui02 does NOT know about still-open blocks
dbs search --query='find file,file.numevents where dataset=/MinimumBias/Commissioning10-Apr1Skim_GOODCOLL-v1/RAW-RECO and run<=132513 and site=t3se01.psi.ch and file.numevents>9' > file1
dbs search --query='find file,file.numevents where dataset=/MinimumBias/Commissioning10-GOODCOLL-v8/RAW-RECO and run>132513 and site=t3se01.psi.ch and file.numevents>9' > file2
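The lists written to file1 and file2 are two-column (logical file name, event count). Before splitting into jobs it can be useful to total the events per dataset; a small awk helper like the following should work, assuming the dbs plain-text output is whitespace-separated (the function name is ours, not part of any tool):

```shell
# count_events FILE: sum the second column (file.numevents) of a dbs file list.
# Assumes each line has the form "<LFN> <numevents>" (whitespace-separated).
count_events() { awk '{sum += $2} END {printf "%d\n", sum}' "$1"; }
```

For example, `count_events file1` prints the total number of events selected for the first dataset.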
Create py files for n events, split by run
- Note:
- mkPyFiles is in Bs2MuMu/perl
mkPyFiles -f file1 -t dana-XXXX.py -r -e 200000 -s v06
mkPyFiles -f file2 -t dana-XXXX.py -r -e 200000 -s v06
Create a tar file
This tar file contains all the source code needed for grid and/or batch submission
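No explicit command is given for this step; a minimal sketch follows, assuming the source sits under a CMSSW work area. The directory name CMSSW_3_5_6/src is illustrative, and the tarball name follows the grid-YYMMDD.tar.gz pattern used in the run step below:

```shell
# Pack the source code needed on the worker nodes into a dated tarball.
# CMSSW_3_5_6/src is an assumed layout; adapt to your own checkout.
SRC=CMSSW_3_5_6/src
mkdir -p "$SRC"                                  # demo only: ensure the path exists
tar -czf "grid-$(date +%y%m%d).tar.gz" "$SRC"
```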
Run the jobs
- Note:
- run is in Bs2MuMu/perl
- The relative paths must, of course, be adapted to where you are running the production
- Make sure that the directory referred to by STORAGE1 exists.
- You can use (at your own risk) the Perl scripts monSge (and monGrid) to monitor the progress of the jobs
run -t ../../../../../../grid-100426.tar.gz -m batch -c ../../../test-local/ana.csh \
-r 'STORAGE1 srm://t3se01.psi.ch:8443/srm/managerv2\?SFN=/pnfs/psi.ch/cms/trivcat/store/user/ursl/root/dana/v06-data' \
dana-v06-1*.py
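Once jobs finish, a quick sanity check is to count the ntuples that arrived in the STORAGE1 area. The helper below is a sketch (the function name is ours); it assumes the /pnfs area is readable as a local path from the UI node, which should be verified for your site:

```shell
# count_root DIR: number of .root files in DIR, e.g. the STORAGE1 directory
# /pnfs/psi.ch/cms/trivcat/store/user/ursl/root/dana/v06-data used above.
count_root() { ls "$1" 2>/dev/null | grep -c '\.root$'; }
```

Compare the count against the number of dana-v06-*.py jobs submitted.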
--
UrsLangenegger - 2010-04-26