<!-- keep this as a security measure:
- Set ALLOWTOPICCHANGE = TWikiAdminGroup,Main.LCGAdminGroup
- Set ALLOWTOPICRENAME = TWikiAdminGroup,Main.LCGAdminGroup
# uncomment the following line if you want the page to be viewable only by internal people:
#- Set ALLOWTOPICVIEW = TWikiAdminGroup,Main.LCGAdminGroup
-->
Specifications and standard benchmark values
CPU
HS06 = HEP-SPEC06 benchmark value (see details on the HEP-SPEC06 webpage). Note: 1 HS06 is approximately 1 GFlop.
Please note: the figures below reflect the INSTALLED capacity. The pledged capacity is 10% lower due to various inefficiencies.
Intel® Xeon E5-2695 v4 @ 2.10GHz
These CPU resources are divided among the VOs by fairshare (computed from the last week of usage), plus a few slots reserved for exclusive use:
- ATLAS: 40% fairshare (max of 2000 jobs running, 4000 in queue).
- CMS: 40% fairshare (max of 2000 jobs running, 4000 in queue).
- LHCb: 20% fairshare (max of 1500 jobs running, 4000 in queue).
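As a rough illustration of how the split above translates into capacity (this is arithmetic only, not the actual scheduler configuration), each VO's fairshare target can be derived from its percentage, clipped by its running-job cap. The 4000-slot total is a hypothetical example value:

```python
# Illustrative only: turn the fairshare percentages above into per-VO
# job-slot targets. The 4000-slot total is a hypothetical example value,
# not the actual Phoenix cluster size.
fairshare = {"ATLAS": 0.40, "CMS": 0.40, "LHCb": 0.20}
running_caps = {"ATLAS": 2000, "CMS": 2000, "LHCb": 1500}  # max running jobs

total_slots = 4000  # hypothetical total of concurrently usable slots

# Fairshare target per VO, clipped by its running-job cap.
targets = {vo: min(int(total_slots * share), running_caps[vo])
           for vo, share in fairshare.items()}

for vo, slots in targets.items():
    print(f"{vo}: target {slots} slots")
```

In practice the batch system enforces the split dynamically based on recent usage, so instantaneous allocations can deviate from these static targets.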
Last update: August 2018
Storage infrastructure
Central Storage - dCache
The dCache service is provided by 12 servers (10 "Storage Elements" and 2 "Head Nodes").
Storage Elements are connected to the storage systems through the CSCS SAN.
The total space available on dCache is 4.6 PB, distributed in a 40% / 40% / 20% (ATLAS / CMS / LHCb) fashion.
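For reference, the per-VO share of the 4.6 PB total works out as follows (simple arithmetic on the figures above):

```python
# Illustrative arithmetic only: per-VO share of the 4.6 PB dCache space
# under the 40/40/20 split described above.
TOTAL_PB = 4.6
split = {"ATLAS": 0.40, "CMS": 0.40, "LHCb": 0.20}

space_pb = {vo: TOTAL_PB * frac for vo, frac in split.items()}

for vo, pb in space_pb.items():
    print(f"{vo}: {pb:.2f} PB")
```

That is, ATLAS and CMS get about 1.84 PB each and LHCb about 0.92 PB.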
Last update: November 2019
Scratch - Spectrum Scale
The 'scratch' filesystem is made available to the compute nodes through dedicated DVS nodes and is provided by a Spectrum Scale cluster of 16 nodes (4 metadata + 12 data).
Metadata is located on flash storage.
The filesystem features policy-based pool tiering:
- Tier 1: SSD-based data pool (90 TB), where all files are initially written.
- Tier 2: HDD-based data pool (380 TB), to which less frequently used data is migrated (this mechanism is transparent to the clients).
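Tiering of this kind is typically expressed with Spectrum Scale ILM policy rules. A minimal sketch is shown below; the pool names and thresholds are assumptions for illustration, not the actual CSCS configuration:

```
/* Hypothetical Spectrum Scale policy sketch -- pool names and
   thresholds are illustrative, not the actual CSCS configuration. */

/* All new files are placed on the SSD pool. */
RULE 'placement' SET POOL 'ssd'

/* When the SSD pool reaches 90% occupancy, migrate the least
   recently accessed files to the HDD pool until it drops to 70%. */
RULE 'tiering' MIGRATE FROM POOL 'ssd'
     THRESHOLD(90,70)
     WEIGHT(CURRENT_TIMESTAMP - ACCESS_TIME)
     TO POOL 'hdd'
```

Because migration happens at the pool level below the filesystem namespace, clients see the same paths regardless of which tier currently holds the data.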
Last update: November 2019
Network connectivity
External network
CSCS has a 100 Gbit/s Internet connection (provided by SWITCH). The Phoenix cluster is connected to the Internet via a switch with a total throughput capability of 80 Gbit/s.
Internal network
The internal network is based on InfiniBand QDR/FDR: all nodes are connected in a Voltaire/Mellanox fat-tree topology (blocking factor 5) that provides every node with 32 Gbit/s, and 192 Gbit/s of bandwidth between the two farthest nodes. The newer nodes (phase H) are connected to an InfiniBand FDR fabric with uplinks to the Voltaire/Mellanox QDR network.
Virtual machines (as well as CSCS and Internet access) use Ethernet and are attached to the InfiniBand switch via two Voltaire E4036 transparent bridges, with a maximum capacity of 20 Gbit/s in active/passive mode.
Cluster Management Nodes
Most of the service nodes are virtualized; for a complete list please see
FabricInventory#Virtual_Machines
The remaining services run on physical hardware: essentially the worker nodes (WNs), the dCache nodes (core and pools), the scratch filesystem, NFS, and CernVMFS.
Individual services are described in
ServiceInformation
EMI early adopter status
At the moment, CSCS is an early adopter of the following components: