
[Figure: phoenix_services_future.png — overview of the Phoenix LCG services, February 2018]

Specifications and standard benchmark values

CPU

HS06 = HEP-SPEC06 benchmark value (see details on the HEP-SPEC06 web page). Note: 1 HS06 ≈ 1 GFlop.

Please note: the following figures reflect the INSTALLED capacity. The pledged capacity is 10% lower due to various inefficiencies.



CSCS Tier-2 CPU counts and performance benchmarks

| Nodes | Description | Processors | Cores/node | Total cores | HS06/core | Total HS06 | RAM/core |
| 62 | 2 × Intel Xeon E5-2670 @ 2.6 GHz | 2 | 32 | 1984 | 10.4 | 20633 | 2 GB |
| 40 | 2 × Intel Xeon E5-2680 v2 @ 2.8 GHz | 2 | 40 | 1600 | 11.1 | 17760 | 3.2 GB |
| 40 | 2 × Intel Xeon E5-2680 v4 @ 2.40 GHz | 2 | 56 | 2240 | 12.01 | 26902 | 2 GB |
| Total | | | | 5824 | | 65295 | |
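As a sanity check, each row's totals follow from nodes × cores/node × HS06/core. A minimal Python sketch with the numbers taken from the table, applying the 10% pledge derating from the note above at the end:

# Sanity check of the CPU table: (nodes, cores per node, HS06 per core)
generations = [
    (62, 32, 10.4),   # 2 x Xeon E5-2670
    (40, 40, 11.1),   # 2 x Xeon E5-2680 v2
    (40, 56, 12.01),  # 2 x Xeon E5-2680 v4
]

total_cores = sum(n * c for n, c, _ in generations)
total_hs06  = sum(n * c * h for n, c, h in generations)

print(total_cores)       # 5824 cores
print(total_hs06)        # 65296.0 (table shows 65295 since each row is rounded down)
print(total_hs06 * 0.9)  # ~58766 HS06 pledged, i.e. 10% below installed

With the 1 HS06 ≈ 1 GFlop rule of thumb, the installed capacity is roughly 65 TFlops.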


CSCS Tier-2 LCGonCray CPU counts and performance benchmarks

| Nodes | Description | Processors | Cores/node | Total cores | HS06/core | Total HS06 | RAM/core |
| 57 | Intel Xeon E5-2695 v4 @ 2.10 GHz | | 68 | 3876 | 12.96 | 50232 | 2 GB |
| Total | | | | 3876 | | 50232 | |

These CPU resources are divided among the VOs by fairshare (computed from the last week of usage), plus a few slots reserved for exclusive use; a sketch of the fairshare arithmetic follows the list:

  • ATLAS: 40% fairshare (max of 2000 jobs running, 4000 in queue).
  • CMS: 40% fairshare (max of 2000 jobs running, 4000 in queue).
  • LHCb: 20% fairshare (max of 1500 jobs running, 4000 in queue).
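For illustration, SLURM's classic multifactor-priority fairshare factor is F = 2^(-U/S), where S is an account's configured share and U its decayed recent usage. A minimal Python sketch, assuming the three VOs are plain accounts with 40/40/20 shares and a hypothetical usage split (the scheduler's real inputs and the other priority factors are omitted):

shares = {"atlas": 0.40, "cms": 0.40, "lhcb": 0.20}  # configured fairshare targets
usage  = {"atlas": 0.50, "cms": 0.30, "lhcb": 0.20}  # hypothetical last-week usage

for vo in shares:
    f = 2 ** (-usage[vo] / shares[vo])  # classic SLURM fairshare factor
    print(f"{vo}: {f:.2f}")
# atlas ran over its share, so its factor drops below 0.5 (0.42) and its
# queued jobs lose priority; lhcb sits exactly at its share (factor 0.50).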

Last update: August 2018

Storage infrastructure

Central Storage - dCache

The dCache services are provided by 18 servers (16 storage elements and 2 head nodes). The storage elements are connected to the storage systems through the CSCS SAN.

The total space available on dCache is 4.264 PB, distributed 40% / 40% / 20% among ATLAS, CMS and LHCb.
An additional 1 PB is temporarily available to CMS and ATLAS for Tier-0 purposes.
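In round figures, the split works out as follows (a back-of-the-envelope sketch; actual pool sizes may differ slightly):

total_pb = 4.264
split = {"ATLAS": 0.40, "CMS": 0.40, "LHCb": 0.20}
for vo, frac in split.items():
    print(f"{vo}: {total_pb * frac:.3f} PB")
# ATLAS: 1.706 PB, CMS: 1.706 PB, LHCb: 0.853 PB
# (the temporary 1 PB for Tier-0 use by ATLAS and CMS comes on top of this)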

Last update: August 2018

Scratch - Spectrum Scale

The 'scratch' filesystem is mounted by the worker nodes and is provided by a 16-node Spectrum Scale cluster.

Metadata disks are located on flash storage (over the SAN).
A tier-1 data pool (90 TB of SSD read/write cache) is available to speed up massive I/O operations.
A tier-2 data pool stores less frequently accessed data; the available space is 641 TB.

Last update: August 2018

Network connectivity

External network

CSCS has a 100 Gbit/s Internet connection (provided by SWITCH). The Phoenix cluster is connected to the Internet via a switch with a total throughput capability of 80 Gbit/s.

Internal network

Based on InfiniBand QDR/FDR: all nodes are connected in a Voltaire/Mellanox fat-tree topology (blocking factor 5) that provides every node with 32 Gbit/s, and 192 Gbit/s of bandwidth between the two farthest nodes (see the sketch below). The newer nodes (phase H) are connected to an InfiniBand FDR fabric with uplinks to the Voltaire/Mellanox QDR network.
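For what it's worth, the quoted numbers are consistent with 36-port QDR edge switches split 30 ports down / 6 ports up; the port counts below are our assumption, not a documented configuration:

link_gbps = 32    # 4x QDR: 40 Gbit/s signalling, 32 Gbit/s effective (8b/10b coding)
down, up = 30, 6  # assumed edge-switch port split (36-port switch) -- our assumption

print(down / up)       # 5.0 -> the blocking factor quoted above
print(up * link_gbps)  # 192 -> Gbit/s available between the two farthest nodes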

Virtual machines (together with CSCS and Internet access) are connected via Ethernet and attached to the InfiniBand fabric through two Voltaire E4036 transparent bridges, with a maximum capacity of 20 Gbit/s in active/passive mode.

Cluster Management Nodes

Most of the service nodes are virtualized. For a complete list, please see FabricInventory#Virtual_Machines.

The rest are physical machines: essentially the worker nodes (WNs), the dCache nodes (core and pools), the scratch filesystem, NFS, and CernVM-FS.

Individual services are described in ServiceInformation.

EMI early adopters status

At the moment, CSCS is an early adopter of the following components:

  • CREAM CE
  • APEL
  • WN
  • SLURM