<!-- keep this as a security measure:
#uncomment if the subject should only be modifiable by the listed groups
# * Set ALLOWTOPICCHANGE = Main.TWikiAdminGroup,Main.CMSAdminGroup
# * Set ALLOWTOPICRENAME = Main.TWikiAdminGroup,Main.CMSAdminGroup
#uncomment this if you want the page only be viewable by the listed groups
# * Set ALLOWTOPICVIEW = Main.TWikiAdminGroup,Main.CMSAdminGroup,Main.CMSAdminReaderGroup
-->
---+!! How to run local Tier-3 jobs with CRAB2

%TOC%

*If you are running on a dataset, it must be stored locally at the PSI Tier-3.*

---+++ CRAB3 is the new CRAB

Be aware that CRAB2 is no longer supported by CMS; T3 users should switch to CRAB3 instead: https://twiki.cern.ch/twiki/bin/view/CMSPublic/WorkBookCRAB3Tutorial. *Right now it is not possible to use the T3 CPUs with CRAB3 because we do not run a CE.* :( A solution may become available during 2016 via https://twiki.cern.ch/twiki/bin/view/CMSPublic/CompOpsLocalSubmission

CRAB2 at T3 will be left installed *as it is* for as long as it does not interfere with operations.

---+++ Changes to crab.cfg

In your crab.cfg file, change the following line in the [CRAB] section:
<pre>
scheduler = sge
</pre>

---+++ Copy the output to the SE

Use the following directives to copy your output files to the SE. Add these lines to the [USER] section of your crab.cfg:
<pre>
copy_data = 1
storage_element = t3se01.psi.ch
storage_path = /srm/managerv2?SFN=/pnfs/psi.ch/cms/trivcat/store/user/[YOURNAME]
user_remote_dir = [/SUBDIRECTORY]
#copyCommand = srmcp
</pre>

You must manage your own proxy that srmcp uses, so before submitting jobs type
<pre>
voms-proxy-init -voms cms
</pre>
Your proxy will automatically be located at ~/.x509up_<uid>.

---+++ SGE options

You can specify which queue CRAB should submit to and which resource values CRAB should request when submitting jobs. To do this, add a new section [SGE] to your crab.cfg file.
<pre>
[SGE]
# please always specify the queue name
queue = all.q
# you may add more resource requirements to the batch system
# this requirement asks for 5 GB of allowed memory usage
resource = -l h_vmem=5G
</pre>

If the queue name is not specified, the scheduler uses one of the eligible queues on the least loaded node. This may result in your job being scheduled on slow.q (older nodes), so it is always better to specify the queue explicitly.

---+++ Run CRAB

Set up the CRAB environment:
<pre>
source /swshare/CRAB/CRAB_X_Y_Z/crab.sh
</pre>

Create and submit jobs with the following commands:
<pre>
crab -create
crab -submit
</pre>

It may take a while for the jobs to run; you can monitor their progress with:
<pre>
crab -status
</pre>

Finally, when the jobs are done, you can retrieve the output with:
<pre>
crab -getoutput
</pre>

For longer explanations of the commands above, check the [[https://twiki.cern.ch/twiki/bin/view/CMS/SWGuideCrab][CRAB user pages]].

---+++ Example of crab.cfg

<pre %FILESTYLE%>
[CRAB]
#
# This section contains the default values for general parameters.
# They can also be set as command-line options, i.e.
#
# key1 = value1
# [SECTION]
# key2 = value2
#
# in this section corresponds to
#
# crab.py -key1=value1 -key2=SECTION.value2
#
# in the command line.
#
jobtype = cmssw

### Specify the scheduler to be used.
%BLUE% scheduler = sge %ENDCOLOR%

[CMSSW]
### The data you want to access (to be found on DBS)
### /primarydataset/datatier/processeddataset
#datasetpath=/ttbar_inclusive_TopRex/CMSSW_1_3_1-Spring07-1122/GEN-SIM-DIGI-RECO
#datasetpath=none
### To run CRAB for private event production set datasetpath=None
datasetpath=/Bs2MuMuGamma/CMSSW_1_6_7-CSA07-1203847101/RECO

### The name of the ParameterSet to be used
#pset=pythia.cfg
pset=demoanalyzer-classic.cfg

### Splitting parameters:
### Total number of events to be accessed: -1 means all
### NOTE: "-1" is not usable if there is no input
total_number_of_events=4
### Number of events to be processed per job
events_per_job = 2
### Number of jobs to be created per task
#number_of_jobs = 5

### The output files produced by your application (comma-separated list)
output_file = Testfile

[USER]
### OUTPUT file management ###
### To have the job executable output returned to the UI set return_data = 1
return_data = 0

### To copy the CMS executable output to an SE set copy_data = 1
%BLUE% copy_data = 1 %ENDCOLOR%

### if copy_data = 1 ###
### Specify the name of the SE where to copy the CMS executable output.
#storage_element = srm.cern.ch
### Specify the SE directory (or the mountpoint) that has to be writable from all
#storage_path = /castor/cern.ch/user/u/user
### example for LNL SRM
%BLUE% storage_element = t3se01.psi.ch
storage_path = /srm/managerv2?SFN=/pnfs/psi.ch/cms/trivcat/store/user/zlchen
user_remote_dir = /test
#copyCommand = srmcp %ENDCOLOR%

### To publish produced output in a local instance of DBS set publish_data = 1
publish_data=0
### Specify the dataset name. The full path will be <primarydataset>/<publish_data_name>/USER
#publish_data_name = yourDataName
### Specify the URL of the DBS instance where CRAB has to publish the output files
#dbs_url_for_publication = http://cmssrv17.fnal.gov:8989/DBS108LOC1/servlet/DBSServlet

### To switch from status print on screen to DB serialization to a file, specify here the destination files.
### CRAB will create it in CRAB_Working_Dir/share
#xml_report=

[SGE]
# please always specify the queue name
queue = all.q

[GRID]
#
## RB/WMS management:
rb = CERN
#group=superman

## Black and White Lists management:
## By Storage
se_black_list = T0,T1
#se_white_list = T2_FR_GRIF_LLR

## By ComputingElement
#ce_black_list =
#ce_white_list =

# specific GRID requirements (usually not needed)
#requirements = (other.GlueCEUniqueID == "polgrid1.in2p3.fr:2119/jobmanager-pbs-cms")
</pre>

-- Main.ZhilingChen - 22 Oct 2008
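The example crab.cfg above can be sanity-checked before submission. The sketch below is a minimal illustration using Python's standard =configparser=; the =check_crab_cfg= helper and its rules are hypothetical (not part of CRAB) and only encode the settings this page recommends: the =sge= scheduler, complete SE stage-out directives, and an explicitly named queue.

```python
# Hypothetical sanity check for the crab.cfg settings described on this page.
# "check_crab_cfg" is an illustrative helper, not a CRAB tool.
from configparser import ConfigParser

EXAMPLE_CFG = """
[CRAB]
jobtype = cmssw
scheduler = sge

[USER]
copy_data = 1
storage_element = t3se01.psi.ch
storage_path = /srm/managerv2?SFN=/pnfs/psi.ch/cms/trivcat/store/user/zlchen
user_remote_dir = /test

[SGE]
queue = all.q
resource = -l h_vmem=5G
"""

def check_crab_cfg(text):
    """Return a list of problems found in a crab.cfg-style text."""
    cfg = ConfigParser()
    cfg.read_string(text)
    problems = []
    # The T3 batch system is SGE, so CRAB must use the sge scheduler.
    if cfg.get("CRAB", "scheduler", fallback=None) != "sge":
        problems.append("scheduler must be 'sge' for local T3 submission")
    # Stage-out to the SE requires copy_data = 1 plus the SE coordinates.
    if cfg.get("USER", "copy_data", fallback="0") == "1":
        for key in ("storage_element", "storage_path"):
            if not cfg.get("USER", key, fallback=""):
                problems.append("copy_data = 1 but %s is missing" % key)
    # Always name the queue explicitly to avoid landing on slow.q.
    if not cfg.get("SGE", "queue", fallback=""):
        problems.append("[SGE] queue is not set; the scheduler may pick slow.q")
    return problems

print(check_crab_cfg(EXAMPLE_CFG))  # an empty list means no problems found
```

Running the same check on a config that still uses a grid scheduler (e.g. =scheduler = glite=) reports the first rule violation, which is exactly the change described in the "Changes to crab.cfg" section.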
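As a small illustration of the proxy step above, the snippet below builds the per-user proxy file name =~/.x509up_&lt;uid&gt;= that this page says =voms-proxy-init= will use at the T3. The =t3_proxy_path= helper is hypothetical; note that the usual grid default elsewhere is =/tmp/x509up_u&lt;uid&gt;=, so treat the home-directory location as site-specific.

```python
# Illustration only: derive the per-user proxy file name mentioned above.
# The location ~/.x509up_<uid> is taken from this page's setup; the common
# grid default is /tmp/x509up_u<uid>, so treat this path as site-specific.
import os

def t3_proxy_path():
    """Build the proxy path this page says voms-proxy-init will use."""
    return os.path.join(os.path.expanduser("~"), ".x509up_%d" % os.getuid())

print(t3_proxy_path())
```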
Topic revision: r13 - 2016-05-31 - FabioMartinelli
Copyright © 2008-2024 by the contributing authors. All material on this collaboration platform is the property of the contributing authors.