<!-- keep this as a security measure:
- Set ALLOWTOPICCHANGE = TWikiAdminGroup,Main.LCGAdminGroup
- Set ALLOWTOPICRENAME = TWikiAdminGroup,Main.LCGAdminGroup
#uncomment this if you want the page to be viewable only by internal people
#* Set ALLOWTOPICVIEW = TWikiAdminGroup,Main.LCGAdminGroup
-->
Specifications and standard benchmark values
CPU
HS06 = HEP-SPEC06 benchmark value (see details on the HEP-SPEC06 webpage). Note: 1 HS06 is approximately 1 GFlop.
Please note: the following figures reflect the INSTALLED capacity. Pledged capacity is about 10% lower due to various inefficiencies.
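As a rough, illustrative sketch of the two conversions above (the 1 HS06 ≈ 1 GFlop rule of thumb and the ~10% pledge derating; the function and variable names are ours, not from any CSCS tooling):
<verbatim>
# Rough sketch of the conversions quoted above; names are hypothetical.
HS06_TO_GFLOPS = 1.0  # rule of thumb: 1 HS06 is approximately 1 GFlop

def pledged_capacity(installed_hs06, inefficiency=0.10):
    """Pledged capacity = installed capacity minus ~10% for inefficiencies."""
    return installed_hs06 * (1.0 - inefficiency)

installed = 400.0                          # e.g. one node rated at 400 HS06
print(pledged_capacity(installed))         # -> 360.0 HS06 pledged
print(installed * HS06_TO_GFLOPS)          # -> ~400 GFlops, very approximately
</verbatim>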
Intel® Xeon E5-2695 v4 @ 2.10GHz
These CPU resources are divided among the VOs by fairshare (computed from the last week of usage), plus a few slots reserved for exclusive use (see the sketch after the list):
- ATLAS: 40% fairshare (max of 2000 jobs running, 4000 in queue).
- CMS: 40% fairshare (max of 2000 jobs running, 4000 in queue).
- LHCb: 20% fairshare (max of 1500 jobs running, 4000 in queue).
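A minimal sketch for sanity-checking the split above (this is not the actual batch-system configuration, which is not shown on this page; the numbers are the ones listed, the helper names are hypothetical):
<verbatim>
# Fairshare targets and per-VO job limits as listed above (sketch only).
vo_shares = {"ATLAS": 0.40, "CMS": 0.40, "LHCb": 0.20}
vo_limits = {"ATLAS": (2000, 4000),   # (max running, max queued)
             "CMS":   (2000, 4000),
             "LHCb":  (1500, 4000)}

assert abs(sum(vo_shares.values()) - 1.0) < 1e-9  # shares cover the whole cluster

def target_slots(total_slots, vo):
    """Fairshare target for a VO, capped by its running-job limit."""
    return min(int(total_slots * vo_shares[vo]), vo_limits[vo][0])

for vo in vo_shares:                    # e.g. assuming 4000 total job slots
    print(vo, target_slots(4000, vo))   # ATLAS 1600, CMS 1600, LHCb 800
</verbatim>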
Last update: August 2018
Storage infrastructure
Central Storage - dCache
The dCache service is provided by 18 servers: 16 "Storage Elements" and 2 "Head Nodes".
The Storage Elements are connected to the storage systems through the CSCS SAN.
The total space available on dCache is 4.264 PB, distributed in 40% / 40% / 20% (ATLAS / CMS / LHCb) fashion.
An additional 1 PB is temporarily available to CMS and ATLAS for Tier-0 purposes.
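For illustration, the per-VO shares of the dCache space work out as follows (figures from above; the script is ours):
<verbatim>
# Split the 4.264 PB of dCache space 40/40/20 across the VOs (figures above).
TOTAL_PB = 4.264
shares = {"ATLAS": 0.40, "CMS": 0.40, "LHCb": 0.20}

for vo, share in shares.items():
    print(f"{vo}: {TOTAL_PB * share:.3f} PB")
# -> ATLAS: 1.706 PB, CMS: 1.706 PB, LHCb: 0.853 PB
# The temporary extra 1 PB for ATLAS/CMS Tier-0 use sits outside this split.
</verbatim>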
Last update: August 2018
Scratch - Spectrum Scale
The 'scratch' filesystem is mounted by the worker nodes and is provided by a 16-node Spectrum Scale cluster.
Metadata disks are located on flash storage (over the SAN).
A tier1 data pool (a 90 TB SSD read/write cache) is available to speed up massive I/O operations.
A tier2 data pool stores less frequently accessed data; its available space is 641 TB.
Last update: August 2018
Network connectivity
External network
CSCS has a 100 Gbit/s Internet connection (provided by SWITCH). The Phoenix cluster is connected to the Internet via a switch with a total throughput capability of 80 Gbit/s.
Internal network
The internal network is based on Infiniband QDR/FDR: all nodes are connected to a Voltaire/Mellanox fat-tree topology (blocking factor 5), which provides every node with 32 Gbit/s and a bandwidth of 192 Gbit/s between the two farthest nodes. The new nodes (phase H) are connected to an Infiniband FDR fabric with uplinks to the Voltaire/Mellanox QDR network.
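The 32 Gbit/s per-node figure follows from standard Infiniband link parameters (4x lanes and line encoding; these parameters are general Infiniband facts, not taken from this page):
<verbatim>
# Effective data rate of a 4x Infiniband link after line encoding (sketch).
def effective_gbits(lane_gbaud, enc_num, enc_den, lanes=4):
    """Usable Gbit/s: signalling rate x encoding efficiency x number of lanes."""
    return lane_gbaud * enc_num / enc_den * lanes

qdr = effective_gbits(10.0, 8, 10)      # QDR, 8b/10b encoding -> 32.0 Gbit/s
fdr = effective_gbits(14.0625, 64, 66)  # FDR, 64b/66b encoding -> ~54.5 Gbit/s
print(qdr, round(fdr, 1))               # the QDR value is the per-node figure above
</verbatim>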
Virtual machines (together with CSCS and Internet access) use Ethernet and are attached to the Infiniband fabric via two Voltaire E4036 transparent bridges, with a maximum capacity of 20 Gbit/s in active/passive mode.
Cluster Management Nodes
Most of the service nodes are virtualized. For the complete list, see FabricInventory#Virtual_Machines.
The rest are physical machines: essentially the WNs, the dCache nodes (core and pools), the scratch filesystem, NFS, and CernVMFS.
Individual services are described in ServiceInformation.
EMI early adopter status
At the moment, CSCS is an early adopter of the following components: