Understanding Tier 3 storage The Tier 3 offers several kinds of storage, and it is important to understand how to use each of them to best advantage. User home...
Policies for resource allocation on the PSI Tier 3 These policies were agreed upon in the first and second Steering Board meetings. They need revision in...
How to access, set up, and test your account Mailing lists and communication with admins and other users: cms-tier3-users@lists.psi.ch : the list through which we...
How to apply for an account at the CMS PSI Tier 3 To apply for a Tier 3 account, please send an email to cms-tier3@lists.psi.ch with the subject `account request`....
This page is an up-to-date version of ObtainingHostCertificates Obtaining host certificates for Grid servers Certificates now again come from QuoVadis. Their services...
Trust/Link from QuoVadis CA You are invited to use QuoVadis Trust/Link as a Subscriber to issue Grid Host certificates for the ScITS UniBe Account in the following...
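In practice a host certificate request starts from a CSR generated on the server itself and pasted into the Trust/Link web form. A minimal sketch, assuming standard OpenSSL tooling; the host name and file names below are placeholders, and the exact DN fields required by the CA may differ:

  # Generate a 2048-bit key and a certificate signing request for the host
  # (CN and file names are placeholders, not actual T3 hosts)
  $ openssl req -new -newkey rsa:2048 -nodes \
      -keyout t3example.psi.ch.key \
      -out t3example.psi.ch.csr \
      -subj "/CN=t3example.psi.ch"
  # Paste the CSR contents into the Trust/Link request form
  $ cat t3example.psi.ch.csr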
Monitoring OBSOLETE This page has been dismantled. It is left active for the moment since it contains some links that may still be of interest. Networking and...
HowTo EOS Currently, CERN EOS is mounted on the User Interface machines t3ui01-03: $ ls /eos home-a home-c home-e home-g home-i home-k home-m home-o home-q home-s...
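Since EOS is FUSE-mounted on the UIs, files can be read directly from the /eos path; for copies, xrdcp against the CERN EOS user endpoint is the usual alternative. A small sketch, assuming a valid credential; the file path and the eosuser.cern.ch endpoint are illustrative assumptions:

  # Read a file directly through the FUSE mount on t3ui01-03 (path is a placeholder)
  $ ls -lh /eos/home-j/jdoe/somefile.root
  # Or copy it locally via the XRootD protocol (endpoint assumed to be eosuser.cern.ch)
  $ xrdcp root://eosuser.cern.ch//eos/user/j/jdoe/somefile.root /scratch/$USER/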
Slurm Batch system usage This is an introduction to the T3 Slurm configuration, a modern job scheduler for Linux clusters. Please use the User Interface nodes t3ui01-03 mostly...
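For orientation, a minimal batch script might look like the sketch below; the partition name, time and memory limits, and payload script are placeholders and should be taken from the actual T3 configuration:

  #!/bin/bash
  #SBATCH --job-name=test_job
  #SBATCH --partition=wn          # placeholder; use the partition documented for T3
  #SBATCH --time=01:00:00         # wall-clock limit
  #SBATCH --mem=2000M             # requested memory
  #SBATCH --output=%x-%j.out      # stdout/stderr file named after job name and id

  echo "Running on $(hostname)"
  srun ./my_analysis.sh           # placeholder for your actual payload

Submit the script with "sbatch myjob.slurm" and monitor it with "squeue -u $USER".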
Areca Support WebGUI usage Steps to allow web access: # /root/Linuxhttp_V2.5.1_180529/x86_64/archttp64 (ArchHTTP); for setup: http://hostname:81/ or directly the...
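In practice this means starting the bundled ArchHTTP daemon on the storage server as root and then pointing a browser at port 81. A rough sketch, using the binary path listed above; the host name and tunnel port are placeholders:

  # Start the ArchHTTP proxy shipped with the Areca tools
  $ sudo /root/Linuxhttp_V2.5.1_180529/x86_64/archttp64
  # Reach the WebGUI, e.g. through an SSH tunnel from a desktop machine
  $ ssh -L 8081:localhost:81 root@hostname     # hostname is a placeholder
  # then browse to http://localhost:8081/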
Slurm monitoring plots: WN partition running jobs; WN partition running vs. waiting jobs; QUICK partition running jobs; QUICK partition running vs. waiting...
Networking and File Transfers (PhEDEx) Links: usage at T3, usage at T2, day of FTS3 jobs induced by CRAB3, transfer details, day of FTS3 jobs induced...
Comparison of I/O performance on /t3home, /work and /eos with a simple dd benchmark Measured write performance to EOS: 7.9 MB/s; dd if=/dev/zero of=/eos/home-n...
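Such numbers can be reproduced with a plain dd write and read; a sketch of the kind of commands involved, with illustrative file names and sizes (the target path can be swapped for /work or /eos to compare areas):

  # Write test: stream 1 GiB of zeros to the target filesystem and report throughput
  $ dd if=/dev/zero of=/t3home/$USER/dd_testfile bs=1M count=1024 conv=fdatasync
  # Read test: stream the file back to /dev/null (page caching may inflate the result)
  $ dd if=/t3home/$USER/dd_testfile of=/dev/null bs=1M
  # Clean up
  $ rm /t3home/$USER/dd_testfile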
PSI Tier 3 Physics Groups Overview Obsolete information, needs update. Note: Each group has a home page on this wiki that can be reached via the link in the Name...
CISCO Extender mapping (since 2018). All ports are configured to 192.33.123.0/24. To activate a port, one needs to create a SNOW incident in Home PSI Service Catalog...
Submitting a Multicore/Multithreading job Using multiple processors/threads on a single physical computer (an SMP parallel job): your program might require a number of...
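With Slurm, such a job is normally requested by asking for several CPUs on a single node. A minimal sketch; the core count, time limit, and executable name are placeholders:

  #!/bin/bash
  #SBATCH --job-name=smp_job
  #SBATCH --nodes=1               # keep all threads on one physical machine
  #SBATCH --ntasks=1
  #SBATCH --cpus-per-task=8       # placeholder: number of threads your program uses
  #SBATCH --time=02:00:00

  export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK   # for OpenMP-style programs
  srun ./my_threaded_program                     # placeholder executable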
Basic job submission and monitoring on the Tier 3 The Tier 3 cluster is running the Sun Grid Engine batch queueing system. You can submit any shell script to the...
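A minimal sketch of what that looks like with standard SGE commands; the job name and output file names are placeholders:

  $ cat job.sh
  #!/bin/bash
  #$ -N test_job            # job name
  #$ -o test_job.out        # stdout file
  #$ -e test_job.err        # stderr file
  echo "Running on $(hostname)"

  $ qsub job.sh             # submit the script to the batch system
  $ qstat -u $USER          # check its status in the queue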
How to retrieve your corrupted or deleted /t3home and /work files Every user can retrieve corrupted or deleted files from /t3home or /work out of the snapshots...
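The retrieval itself is typically a plain copy out of a snapshot directory. A sketch, assuming the snapshots are exposed under a hidden .snapshot subdirectory; the snapshot naming and exact path layout on the T3 filers may differ, and the file names below are placeholders:

  # List the available snapshots of your home directory (path layout is an assumption)
  $ ls /t3home/$USER/.snapshot/
  # Copy the old version of a lost file back into place (names are placeholders)
  $ cp /t3home/$USER/.snapshot/daily_2024-01-15/analysis/config.py /t3home/$USER/analysis/config.py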
FW updates on DALCO JBOD servers SN F-18.05.101-104 and F-18.11.176 This is the latest firmware package: https://downloadcenter.intel.com/de/download/28696/Intel...
Dalco Storage servers with JBOD There are two RAID controllers installed on the t3fs07-10 servers: an Areca ARC-1883 combining 24 storage drives, and a MegaRAID...
Batch system policies These policies were discussed and endorsed in the Steering Board meeting of 2011-11-03. Aims: The T3 CPU and storage resources must be shared...
How to work with Storage Element (SE) clients Storage data (based on dCache) are located under the directory /pnfs/psi.ch/cms/trivcat/store/user/username . Data are...
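Day-to-day access usually goes through standard Grid clients such as xrdfs/xrdcp. A sketch, assuming a valid CMS VOMS proxy; the SE endpoint t3se.example.psi.ch and the username in the path are placeholders to be replaced by the actual T3 dCache door and your own area:

  # Obtain a CMS VOMS proxy first
  $ voms-proxy-init -voms cms
  # List your user area on the SE (endpoint name is a placeholder)
  $ xrdfs t3se.example.psi.ch ls /pnfs/psi.ch/cms/trivcat/store/user/username
  # Copy a file from the SE to local disk
  $ xrdcp root://t3se.example.psi.ch//pnfs/psi.ch/cms/trivcat/store/user/username/file.root /scratch/$USER/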