Introduction

The production is not based on CRAB, but on a small set of scripts (one submission script plus three helpers; see Files below). It can be run either on the grid or on the T3 batch system. The output is stored (by default) at CSCS and should be replicated manually to PSI.

The production is divided into separate channels, e.g. Bs -> mu mu or Bs -> K mu nu. Each channel is produced in several runs. Different runs MUST use different random seed initializations; otherwise identical files are produced.
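
Presumably the run number embedded in each py file is what drives the seed initialization (the 'replicate' script described below rewrites exactly this number). A quick sanity check, with filenames taken from the gen-step example in the Procedure section, is that two run files differ only in that number:

  diff BsToMuMu_7TeV_GEN_SIM_DIGI_L1_DIGI2RAW_HLT_START-10000.py \
       BsToMuMu_7TeV_GEN_SIM_DIGI_L1_DIGI2RAW_HLT_START-10001.py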

The production is split into (1) a gen+sim+digi step and (2) a reco step. The reco step is fast; the time-consuming part is (1).

Files

An example production setup was called Winter10:

  • The 'template' py files for the generation step are stored in Winter10/gen and have the run number as part of the filename.

  • The 'template' py files for the reco step are stored in Winter10/reco and have the run number as part of the filename.

  • A Perl script 'run' is used to submit the jobs.

  • Two Perl scripts, 'monGrid' and 'monSge', are used to monitor the grid and batch jobs, respectively.

  • A Perl script 'replicate' (e.g. in ~ursl/perl/replicate) is used to replicate the py files, replacing the (run) number in the filename and simultaneously inside the file; see the sketch below.
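
For illustration, a minimal tcsh sketch of what 'replicate' does; this is not the actual script, and the bounds and filename simply mirror the example in the Procedure section:

  # copy the template to run numbers 10001..10100, replacing the
  # run number both in the filename and inside the file
  set tmpl = BsToMuMu_7TeV_GEN_SIM_DIGI_L1_DIGI2RAW_HLT_START-10000.py
  @ i = 10001
  while ($i <= 10100)
    sed "s/10000/$i/g" $tmpl > `echo $tmpl | sed "s/10000/$i/"`
    @ i++
  end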

Procedure

1) Create a working directory for the production (which does not go to CVS)

  cd Winter10    
  mkdir production 
  cd production 

2) Run the gen step (assumes tcsh, probably)

  mkdir BsToMuMu 
  cd BsToMuMu 
  cp ../../gen/BsToMuMu_7TeV_GEN_SIM_DIGI_L1_DIGI2RAW_HLT_START-10000.py .  
  replicate -f 10000 -l 10100 -p 10000 -t BsToMuMu_7TeV_GEN_SIM_DIGI_L1_DIGI2RAW_HLT_START-10000.py  
  setenv MODE BsToMuMu && ../../../../run -c ../../gen.csh -r "STORAGE1 srm://storage01.lcg.cscs.ch:8443/srm/managerv2\?SFN=/pnfs/lcg.cscs.ch/cms/trivcat/store/user/ursl/production/Winter10/${MODE}" -t ~/grid/Winter07/grid-1.tar.gz -m batch ${MODE}_7TeV_GEN_SIM_DIGI_L1_DIGI2RAW_HLT_START-100??.py 
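
Presumably (an assumption based on this usage) -f and -l give the first and last run numbers and -p the number to be replaced, so the replicate call above leaves 101 py files in place. A quick check:

  ls ${MODE}_7TeV_GEN_SIM_DIGI_L1_DIGI2RAW_HLT_START-10???.py | wc -l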

Note that the argument after -m can be 'grid', 'batch', 'local', or 'debug' (in which case nothing happens; useful as a dry run).
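
Such a dry run can catch configuration mistakes before anything is submitted; the command is identical to the submission above, only the mode differs:

  setenv MODE BsToMuMu && ../../../../run -c ../../gen.csh -r "STORAGE1 srm://storage01.lcg.cscs.ch:8443/srm/managerv2\?SFN=/pnfs/lcg.cscs.ch/cms/trivcat/store/user/ursl/production/Winter10/${MODE}" -t ~/grid/Winter07/grid-1.tar.gz -m debug ${MODE}_7TeV_GEN_SIM_DIGI_L1_DIGI2RAW_HLT_START-100??.py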

3) Wait a while and then check the progress

  monGrid -o 
If something went wrong, and you want to remove the job's broken output from the SE, use
  monGrid -o -r  
If only a specific job should be looked at, use
  monGrid -o -u https://wms218.cern.ch:9000/XeAJqYxM_Yysmx9N5Wve9g 
If you ran in batch, use
  monSge 
instead. All the monitoring scripts assume that the job information is in a file called jobs.list.

4) Keep careful bookkeeping of how far each channel has progressed and how successful it was.
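
One simple bookkeeping aid is to count the output files per channel on the SE. A sketch using the gLite-era lcg-ls (assuming the LCG data-management tools are available; the path mirrors the STORAGE1 setting above):

  lcg-ls -b -D srmv2 "srm://storage01.lcg.cscs.ch:8443/srm/managerv2?SFN=/pnfs/lcg.cscs.ch/cms/trivcat/store/user/ursl/production/Winter10/BsToMuMu" | wc -l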

5) Run the reco step:

  setenv MODE BsToMuMu && run -c ../../reco.csh -r "STORAGE1 srm://storage01.lcg.cscs.ch:8443/srm/managerv2\?SFN=/pnfs/lcg.cscs.ch/cms/trivcat/store/user/ursl/production/Winter10/${MODE}%INFILE ${MODE}_7TeV_GEN_SIM_DIGI_L1_DIGI2RAW_HLT_START-XXXX.root" -t ~/grid/Winter07/grid-2.tar.gz -m grid ${MODE}_7TeV_RAW2DIGI_RECO_START-100??.py 
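
As noted in the Introduction, the output should afterwards be replicated manually to PSI. A sketch for a single file using lcg-cp (the PSI endpoint and destination path are assumptions and must be adapted):

  lcg-cp -b -D srmv2 \
    "srm://storage01.lcg.cscs.ch:8443/srm/managerv2?SFN=/pnfs/lcg.cscs.ch/cms/trivcat/store/user/ursl/production/Winter10/BsToMuMu/BsToMuMu_7TeV_GEN_SIM_DIGI_L1_DIGI2RAW_HLT_START-10000.root" \
    "srm://t3se01.psi.ch:8443/srm/managerv2?SFN=/pnfs/psi.ch/cms/trivcat/store/user/ursl/production/Winter10/BsToMuMu/BsToMuMu_7TeV_GEN_SIM_DIGI_L1_DIGI2RAW_HLT_START-10000.root"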

-- UrsLangenegger - 2010-03-18
