Backlinks to CMSAdminReaderGroup in all Webs (Search Main Web only)

Results from Main web retrieved at 08:43 (GMT)

Statistics for Main Web Month: Topic views: Topic saves: File uploads: Most popular topic views: Top contributors for topic save and...
Number of topics: 1

Results from CmsTier3 web retrieved at 08:43 (GMT)

Areca Support WebGUI usage steps to allow web access: # /root/Linuxhttp_V2.5.1_180529/x86_64/archttp64 ArchHTTP for setup: http://hostname:81/ or directly the...
FW updates on DALCO JBOD servers SN F 18.05.101 104 and F 18.11.176 This is the latest firmware package: https://downloadcenter.intel.com/de/download/28696/Intel...
OBSOLETE INFORMATION: basic understanding of dCache for advanced users (archived pages) The Storage Element (SE) t3se01.psi.ch runs dCache, a Grid Storage...
red { background-color:#DD2233; } blue { background-color:#4433FF; } green { background-color:#22AA11; } black { color:#000000; font-weight:bold; } section { color...
Analysis of the `Ntuples` This page describes the setup and some tools for analyzing the `ntuples`. Quick and dirty plots directly from the ntuple setenv LD_LIBRARY...
Under construction How to run on data with crab Setting up the environment Check out some recent CMSSW version Setup the CRAB environment. Using c shell...
Documentation on CMSSW Tested Versions Set up a developer's release area to work, just to get you started on t3ui02.psi.ch: cmsrel cd /src cmsenv cvs co -r...
Table with Candidates in the NTuple Plugin Bplus to J/Psi Kplus Candidate ID Daughters Fit package Description 100521 Kalman Bplus...
Table with chain files MC chain files Name Chain file configuration files Chain Events Processed Events Lumi B0 mumu...
Introduction The production is not based on crab, but on 1-3 scripts. It can be run either on the grid or on the T3 batch system. The output is (by default) stored...
Mass production of ntuples on data and (min bias) MC Release The working releases will change over time. The following have been tested to work. CMSSW 3...
How to look at files in Manno srmls srm://storage01.lcg.cscs.ch:8443/srm/managerv2\?SFN=/pnfs/lcg.cscs.ch/cms/trivcat/store/user/ursl/production/Winter10/...
CPU Example #!/bin/bash # #SBATCH -p wn #SBATCH --account=t3 #SBATCH --job-name=job_name #SBATCH --mem=3000M # memory 3GB (per job) #SBATCH --time...
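
The snippet above is truncated; a minimal complete sketch of such a CPU batch script is given here. The partition (wn) and account (t3) come from the snippet itself, while the time limit, output pattern and program name are placeholders of mine:

    #!/bin/bash
    #SBATCH -p wn                  # partition (queue)
    #SBATCH --account=t3
    #SBATCH --job-name=my_job
    #SBATCH --mem=3000M            # memory 3GB (per job)
    #SBATCH --time=01:00:00        # wall-clock limit, hh:mm:ss (placeholder)
    #SBATCH --output=%x-%j.out     # stdout file: job name + job id

    echo "running on $(hostname)"
    ./my_analysis                  # replace with your actual program
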
Submitting a multicore/multithreaded job using multiple processors/threads on a single physical computer (SMP parallel job) Your program might require a number of...
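
A hedged sketch of such an SMP submission, reusing the wn partition and t3 account from the CPU example above; the core count, memory and program name are illustrative only:

    #!/bin/bash
    #SBATCH -p wn
    #SBATCH --account=t3
    #SBATCH --ntasks=1
    #SBATCH --cpus-per-task=8            # reserve 8 cores on a single node
    #SBATCH --mem-per-cpu=2000M

    export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK   # match threads to reserved cores
    ./my_threaded_program
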
CERNLIB installation Get CERNLIB from http://cernlib.web.cern.ch/cernlib/ There exist binary tarballs and primitive RPMs (not based on SRPMs that compile, they just...
CRAB SGE Troubleshooting CRAB crashed when querying status of gLite jobs When I was trying to get the status of gLite jobs from CRAB, I got the following error: chen...
Custom YUM repositories for the Tier3 Introduction PSI runs a customized version of Linux and some local RPM repositories (e.g. including IBM GPFS); the T3 uses...
T3 CH PSI and T2 CH CSCS data sets to be cleaned up Datasets cleaned on Sep 14th; we got back ~40 TB between T3 and T2. Please indicate in the leading Keep column...
dCache configurator We use a script create_PoolConf.pl which generates most of the basic configuration for the Pools. The script can be found under https://svn.cscs...
dCache Upgrade Problem 1.8.12 to 1.9.5 gridFTP transfers work, but on every SRM interaction I find in the /opt/d-cache/libexec/apache-tomcat-5.5.20/logs/catalina...
dCache documentation snippets from mails, etc. Good external links to documentation and tools FAQ on the official trac wiki notes on dCache ItalianT2Tools...
Pool Migration External documentation dcache book: http://www.dcache.org/manuals/Book/cookbook/cb-pool-migration.shtml Example for copying pool t3fs05_cms to...
Comparison of I/O Performance on /t3home, /work and /eos with simple dd benchmark measure of write performance to eos 7.9 MB/s dd if=/dev/zero of=/eos/home n...
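
For reference, a typical dd benchmark of this kind looks as follows; the target path is a placeholder, and conv=fdatasync is added so the reported write rate includes the flush to storage:

    # write test: 1 GiB of zeroes, flushed before dd reports the rate
    dd if=/dev/zero of=/eos/home/$USER/ddtest bs=1M count=1024 conv=fdatasync
    # read test, then clean up
    dd if=/eos/home/$USER/ddtest of=/dev/null bs=1M
    rm /eos/home/$USER/ddtest
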
Disk quota on /tmp and /scratch Introduction Please have a look to our usage policies. Note that on t3ui08 Urs wanted /scratch thresholds 98% 99% Puppet module...
Interacting with the ELOM/ILOM ELOM is the older version of the service processor's firmware while ILOM is the name of the current versions. Reference sheet by...
Links to external documentation on the Tier 3 Hardware Blade 6000 Modular System http://docs.sun.com/app/docs/prod/blade.x6270?l=en&a=view Sun Blade...
MAY 2013 To get the Solaris broken disks statistics run: root@t3nagios ~ # /usr/local/bin/disks_failure_statistics.sh Disk problems on the Thumper/Thor Fileservers...
Physics Group Form Name Type Size Values Tooltip message Attributes Name text 50 official CMS group name M Responsible User...
GPU Example #!/bin/bash # #SBATCH --job-name=test_job #SBATCH --account=gpu_gres # to access gpu resources #SBATCH --partition=gpu...
Slurm GPU Running Jobs Slurm GPU Running vs Pending Jobs Slurm QGPU (quick GPU Partition) Running Jobs Slurm QGPU (quick GPU Partition) Running vs Pending Jobs...
Collection of useful information about git ToDo for everyone (working on analysis code) 1. create an account on GitHub 2. map your GitHub account with your CERN...
Signup procedure to GOC as a ROC DECH member NOTE: THIS IS OBSOLETE INFORMATION WHICH IS MAINLY KEPT FOR HISTORICAL REASONS. EGI replaces most of these addresses...
Grid Host Certificate instruction T3 Admin access registration on Certification Service Provider QuoVadis: done for the following common T3 address...
HP ProLiant DL380 G7 iLO3 Intro The 2 HP DL380 G7 servers t3fs13,14 can be managed by the specific HP Service Processor called iLO3 that basically is a dedicated...
E-mails for HW alerts To properly get the Sun ILOM HW e-mail alerts we had to configure each ILOM service against the PSI e-mail GW 192.33.120.33 psquad.psi.ch and the...
Hardware Care Hardware Serial and Warranties Storage Dalco JBOD Servers Support NinaLoktionova 2019-05-08
How to run the HEP SPEC2006 CPU benchmark Introduction How To by HEPIX To get an objective measure of the CPU power of a WLCG site the Hepix community has created...
How to work with Storage Element SE clients Storage data (based on dCache) are located under the directory /pnfs/psi.ch/cms/trivcat/store/user/username . Data are...
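
Assuming the SE also exposes the usual dCache xrootd door (the hostname below is a placeholder; check the topic itself for the real one), files can be listed and fetched with the standard xrootd clients:

    # list your area on the SE
    xrdfs root://<xrootd-door>.psi.ch ls /pnfs/psi.ch/cms/trivcat/store/user/$USER
    # copy one file to the current directory
    xrdcp root://<xrootd-door>.psi.ch//pnfs/psi.ch/cms/trivcat/store/user/$USER/file.root .
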
How to interactively debug jobs on a worker node Important note The queue debug.q allows you to debug running jobs on the worker nodes. Please do not abuse this...
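
On SGE an interactive session in that queue would typically be requested with qrsh; the exact resource flags for debug.q are not in the snippet, so this is only a sketch:

    # open an interactive shell in the debug queue
    qrsh -q debug.q
    # or run a single command there
    qrsh -q debug.q ./my_debug_script.sh
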
HowTo use EOS from the PSI Tier 3 Currently the local mounting of EOS has problems, but we can use the command-line eos utility. For authentication EOS accepts either...
How to apply for an account at the CMS PSI Tier 3 To apply for a Tier 3 account please send an email to cms-tier3@lists.psi.ch with Subject `account request`....
Title From the old glite 3.1 (?) node /opt/glite/etc/vomses `vo.nedm.cyfronet.pl` `voms.cyf-kr.edu.pl` `15007` `/C=PL/O=GRID/O=Cyfronet/CN=voms.cyf-kr.edu.pl` `vo...
How to run local Tier 3 jobs by CRAB2 If you are running on a dataset, it must be stored locally at PSI Tier 3. CRAB3 is the new CRAB !!! Be aware that CRAB2 isn...
How to manage jobs with SGE Utilities SGE provides many command-line utilities and a GUI program to interact with the Sun Grid Engine software. For the information...
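
The most common of those command-line utilities, as a quick sketch (the job script name and job id are placeholders):

    qsub job.sh            # submit a shell script to the batch system
    qstat -u $USER         # list your pending/running jobs
    qstat -f               # full queue and host listing
    qdel <job_id>          # delete a job
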
CMS data ordering by Rucio CMS is now using Rucio for the data management across sites. Even though users can submit their own requests via Rucio, we currently need to duplicate...
How to retrieve your corrupted or deleted /t3home and /work files Every user can retrieve corrupted or deleted /t3home or /work files from snapshots...
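
As a sketch, assuming snapshots are exposed under a hidden .snapshots directory at the file-system root (common for GPFS; the exact path and layout may differ, see the topic itself):

    ls /t3home/.snapshots/                                   # list available snapshots
    cp /t3home/.snapshots/<date>/$USER/lost_file.txt ~/      # copy a file back
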
How to access, set up, and test your account Mailing lists and communication with admins and other users cms-tier3-users@lists.psi.ch : list through which we...
Shutting down the Tier 3 Before Downtime 1. make the announcement in: t3 user list: cms-tier3-users@lists.psi.ch t3 admin wiki COGDB...
Basic job submission and monitoring on the Tier 3 The Tier 3 cluster is running the Sun Grid Engine batch queueing system. You can submit any shell script to the...
How to work in a CMS environment Setting up the CMS environment by /cvmfs /cvmfs is an IT technology developed at CERN to easily distribute the CMSSW releases around...
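
The usual /cvmfs-based setup sequence looks like this; the CMSSW version is only an example:

    source /cvmfs/cms.cern.ch/cmsset_default.sh   # makes cmsrel/cmsenv available
    cmsrel CMSSW_10_6_30                          # create a local release area
    cd CMSSW_10_6_30/src
    cmsenv                                        # set up the runtime environment
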
ILOM upgrade problem of X4150 machines 2009-05-27 Upgrade from Also refer to the last update (FirmwareBiosUpdate) SP firmware 2.0.2.6 SP firmware build number:...
Information on X4500 and X4540 Solaris X4540 setup use of the flash card Interesting discussion where it is argued that by using ZFS on the flash card...
Solaris UFS partitioning and RAID mirroring for the X4500 My first try at getting a Solaris partitioning (most of this is taken from a howto by www.gravity.phy.syr...
Some help on IPMI commands External documentation blog article by Ben Rockwood Sensor readings speeding up commands by first recording status to a cache Commands...
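
Typical ipmitool invocations of the kind discussed there (the remote hostname and credentials are placeholders):

    ipmitool sensor                                      # read all sensor values
    ipmitool sdr list                                    # sensor data repository summary
    ipmitool sel list                                    # system event log
    ipmitool -I lanplus -H <ilom-host> -U root sensor    # query a remote service processor
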
Dalco Storage servers with JBOD There are two RAID controllers installed on the t3fs07-10 servers: Areca ARC-1883 combining 24 storage drives and MegaRAID...
LSI RAID1 status To monitor the RAID1 status on the servers t3admin01, t3ce01 and t3se01 I installed the utility mpt-status, manually loaded the driver `mptctl`, updated...
Trust/Link from QuoVadis CA You are invited to use QuoVadis Trust/Link as a Subscriber to issue Grid Host certificates for the ScITS UniBe Account in the following...
Introduction We are exploiting the official PSI backup system `Legato` by installing and configuring the Linux client as well described in this How To. Please...
Issue Tracker Obsolete Items: Note IssueForm IssueTemplate DerekFeichtinger 2009-11-19
All News DerekFeichtinger 12 May 2009
MegaRAID Support To check the RAID1 controller of the two OS SSDs on t3fs07-11 use the StorCLI Standalone Utility: https://downloadcenter.intel.com/download/27654/StorCLI...
Monitoring Storage Plots Slurm Utilisation Links to PhEDEx monitoring Link to Admin Monitoring
List of Monitorings Nagios Notifications on Icinga Server (for rhel7 nodes) Ganglia Report dCache State: interface Interface...
SGI IS5500 / NetApp E5400 HW issues 26-01-2012 Drawer 2 `Loss of redundancy` event We've installed the new SGI/NetApp system and suddenly we got an error on the...
Solutions to Operational Issues daily check of slurm Even though many problems are signalled through mails by Icinga, I also prefer to have a look each morning on...
Networking and File Transfers (PhEDEx) Links: usage at T3 usage at T2 day of FTS3 jobs induced by CRAB3, transfers details day of FTS3 jobs induced...
PhEDEx PhEDEx is the data transfer service used by CMS to transfer data sets between sites. Central Monitoring Plots for last 24h of Agents as seen by the...
Physics Group page for the group Links to subgroup pages Bs2MuMu
Physics Group page for the: group
Physics Group page for the: group
Physics Group page for the: group
Physics Group page for the: group
Physics Group page for the: group
Physics Group page for the: group
PSI Tier 3 Physics Groups Overview Obsolete information, needs update Note: Each group has a home page on this wiki that can be reached by the link in the Name...
2019/Q1 plan 1. new boot/pxe server done 2. home migration deadline Feb 2019 3. slurm (for GPU resources, rhel7) 4. 2nd GPU procurement/deployment...
Remote Repository Backup This service is only running in beta mode at the moment and does not yet have production level monitoring set up. Purpose Since the shutdown...
Remote Repository Backup Admin Software The software consists of custom-made Python scripts that can be found at https://www.github.com/daniel-meister/rrbackup...
Installing ROOT ROOT installations are located in /swshare/ROOT , which also contains the init files linked to the latest installed version. Installations are performed...
SGE 6.1 Interactive Queue on t3ce01 It's useful to introduce a queue in the t3ce01 SGE configuration for two main purposes: to allow users to develop SW exploiting...
16th July 2013 OUTDATED, BUT KEPT FOR REFERENCE This document describes the experiences we made during the upgrade of the SGE installation from 6.1 to 6.2u5, the...
Batch system policies These policies were discussed and endorsed in the steering board meeting of 2011-11-03 Aims The T3 CPUs and Storage resources must be shared...
Searching into /pnfs by dc find To search into /pnfs, T3 users can use dc find, with the limitation that the information retrieved will be in the worst case...
Sherpa Integration Project Technical Documentation Introduction This page documents the use of the T3 for the SherpaNLO validation and integration project. This...
Title DerekFeichtinger 2009-11-18
Useful Slurm commands Overview: command / description: sinfo — monitor nodes and partitions (queue information); check more info options by sinfo...
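
The commands behind that truncated table are standard Slurm utilities, e.g.:

    sinfo                        # nodes and partitions at a glance
    sinfo -N -l                  # long per-node listing
    squeue -u $USER              # your running and pending jobs
    scontrol show job <jobid>    # full details of a single job
    sacct -j <jobid>             # accounting data after the job has finished
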
Slurm Batch system usage Simple job submission and the concept of queues (partitions) On the Tier 3 you run jobs by submitting them to the Slurm job scheduler. The...
Slurm Utilisation useful commands to Monitor Slurm Plots of Slurm Partitions Usage WN and QUICK GPU and QGPU Load of WNs on t3ganglia...
Fourth PSI Tier 3 Steering board meeting Venue Date and time: Monday, Feb 11, 14-16h (Page) Location: ETH Hoenggerberg (HPK D32) Slides: by Fabio Martinelli Meeting...
6th T3 Steering board pre-meeting (Virtual) Followup SteerBoardMeeting07 Meeting Facts When: on 2015-10-27 at 15:00-16:00 as agreed by http://doodle...
2015-12-16 Sixth Steering Board Meeting Preparatory information can be found in the documentation and minutes of the virtual preparation meeting for Steering...
8th Tier 3 Steering board meeting Venue When: Friday, 12th May 2017, 14h-16h (can prolong till 17h if needed) Location: ETH H...
9th Tier 3 Steering Board meeting Venue When: Friday, 7th Dec 2018, 10:30-13:30 Location: ETH H...
Deployment of Storage Accounting EGI APEL dCache accounting done (and puppetized) on t3se01 according to Quick check of publishing status: http...
T3 Groups Purpose For multiple users to be able to collaborate on an analysis we offer the possibility to create dedicated analysis group folders on the SE in /store...
PSI Tier 3 phase B upgrade slides for ETH meeting on 2010-01-27 (Derek) DerekFeichtinger 2010-01-27
May 20 Security updates/measures set `nosuid` flag for shared file systems mount points (/t3home, /work and /pg_backup) EGI Trust Anchor release...
T3 meeting with ETHZ CMS group NinaLoktionova 2019-12-16
CMS T3 CH PSI User Meeting The meeting will take place on Mon, Sep 14 2009, approx. 9:30h An EVO Room in the CMS community will be reserved for external listeners...
Physics Group page for the: group
Monitoring OBSOLETE This page has been dismantled. I leave it active for the moment since it contains some links that may still be interesting. Networking and...
Policies for the resource allocation on the PSI Tier 3 These policies were agreed upon in the first and second Steering Board meetings. They need revision in...
Understanding Tier 3 storage The Tier 3 offers different kinds of storage and it is important that you understand how to use them to their best advantage. User home...
Complete Upgrade planning and Upgrade logging history Note: The Upgrade planning twiki functionality is based on the UpgradePlanningTemplate and UpgradePlanningForm...
Working with graphical applications with the NX client application Introduction The NX server/client suite enables you to work very efficiently over the network with...
Slurm WN Partition: Running Jobs Slurm WN Partition: Running vs Waiting Jobs Slurm QUICK Partition: Running Jobs Slurm QUICK Partition: Running vs Waiting...
ZFS snapshots backup configuration OBSOLETE Note by Derek at cluster takeover: This page contains obsolete information, since for the last months...
Number of topics: 113

 