<!-- keep this as a security measure:
   * Set ALLOWTOPICCHANGE = Main.TWikiAdminGroup,Main.LCGAdminGroup
   * Set ALLOWTOPICRENAME = Main.TWikiAdminGroup,Main.LCGAdminGroup
#uncomment this if you want the page to be viewable by internal people only
#   * Set ALLOWTOPICVIEW = Main.TWikiAdminGroup,Main.LCGAdminGroup
-->
---+!! <img alt="phoenix_services_future.png" height="990" src="%ATTACHURL%/phoenix_services_future.png" style="background-color: transparent; color: #000000; font-size: small;" title="phoenix_services_future.png" width="1735" />

%TOC%

---+ Specifications, standard benchmark values

---++ CPU

HS06 = HEP-SPEC06 benchmark value (see details on the [[http://w3.hepix.org/benchmarks/doku.php/][HEP-SPEC06 web page]]). Note: 1 HS06 is approximately 1 GFlop.

%RED%Please note: the following figures reflect the INSTALLED capacity. Pledged capacity is 10% lower due to various inefficiencies.%ENDCOLOR%

%STARTSECTION{name="Cpu" type="section"}%
%TABLE{caption="CSCS Tier-2 CPU counts and performance benchmarks"}%
| *Nodes* | *Description* | *Processors* | *Cores/node* | *Total cores* | *HS06/core* | *Total HS06* | *RAM/core* |
| 62 | 2 x Intel Xeon E5-2670 @ 2.6 GHz | 2 | 32 | 1984 | 10.4 | 20633 | 2 GB |
| 40 | 2 x Intel Xeon E5-2680 v2 @ 2.8 GHz | 2 | 40 | 1600 | 11.1 | 17760 | 3.2 GB |
| 40 | 2 x Intel Xeon E5-2680 v4 @ 2.4 GHz | 2 | 56 | 2240 | 12.01 | 26902 | 2 GB |
| | Total | | | %CALC{"$SUM($ABOVE())"}% | | %CALC{"$SUM($ABOVE())"}% | |
%ENDSECTION{name="Cpu" type="section"}%

%STARTSECTION{name="CpuCray" type="section"}%
%TABLE{caption="CSCS Tier-2 LCGonCray CPU counts and performance benchmarks"}%
| *Nodes* | *Description* | *Processors* | *Cores/node* | *Total cores* | *HS06/core* | *Total HS06* | *RAM/core* |
| 57 | Intel Xeon E5-2695 v4 @ 2.1 GHz | | 68 | 3876 | 12.96 | 50232 | 2 GB |
| | Total | | | %CALC{"$SUM($ABOVE())"}% | | %CALC{"$SUM($ABOVE())"}% | |
%ENDSECTION{name="CpuCray" type="section"}%

These CPU resources are divided among the VOs by fairshare (based on the last week of usage), with a few additional slots reserved for exclusive use:

   * ATLAS: 40% fairshare (max of 2000 jobs running, 4000 in queue).
   * CMS: 40% fairshare (max of 2000 jobs running, 4000 in queue).
   * LHCb: 20% fairshare (max of 1500 jobs running, 4000 in queue).

_Last update: August 2018_

---++ Storage infrastructure

---+++ Central Storage - dCache

The dCache services are provided by 18 servers (16 "Storage Elements" and 2 "Head Nodes"). The Storage Elements are connected to the storage systems through the CSCS SAN.

The total space available on dCache is <b>4.264 PB</b>, distributed in a <b>40% - 40% - 20%</b> (ATLAS/CMS/LHCb) fashion. An additional 1 PB is temporarily available to CMS and ATLAS for Tier-0 purposes.

_Last update: August 2018_

---+++ Scratch - Spectrum Scale

The 'scratch' filesystem is mounted by the worker nodes and is provided by a 16-node Spectrum Scale cluster. Metadata disks are located on flash storage (over SAN). A tier-1 data pool (a 90 TB SSD read/write cache) is available to speed up massive I/O operations, and a tier-2 data pool stores less frequently accessed data. The available space is 641 TB.

_%BLACK%Last update: August 2018%ENDCOLOR%_
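As a quick cross-check of the figures above, the following minimal Python sketch reproduces the totals of the CPU tables and the dCache storage split. It is purely illustrative: the numbers are transcribed from this page, and since the per-row HS06 totals in the tables are rounded to integers, the recomputed grand totals can differ from the tables' sums by one unit.

<verbatim>
# Sanity check of the capacity figures on this page (illustrative only).

# (nodes, cores_per_node, hs06_per_core) for each worker-node generation
phoenix = [
    (62, 32, 10.4),   # 2 x Xeon E5-2670 @ 2.6 GHz
    (40, 40, 11.1),   # 2 x Xeon E5-2680 v2 @ 2.8 GHz
    (40, 56, 12.01),  # 2 x Xeon E5-2680 v4 @ 2.4 GHz
]
cray = [(57, 68, 12.96)]  # LCGonCray: Xeon E5-2695 v4 @ 2.1 GHz

for name, rows in (("Phoenix", phoenix), ("LCGonCray", cray)):
    cores = sum(n * c for n, c, _ in rows)
    hs06 = sum(n * c * h for n, c, h in rows)
    print(f"{name}: {cores} cores, {hs06:.0f} HS06")
# -> Phoenix: 5824 cores, 65296 HS06; LCGonCray: 3876 cores, 50233 HS06

# dCache: 4.264 PB split 40/40/20 between ATLAS, CMS and LHCb
for vo, share in (("ATLAS", 0.40), ("CMS", 0.40), ("LHCb", 0.20)):
    print(f"{vo}: {4.264 * share:.3f} PB")
# -> ATLAS: 1.706 PB, CMS: 1.706 PB, LHCb: 0.853 PB
</verbatim>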
---++ Network connectivity

---+++ External network

CSCS has a 100 Gbit/s Internet connection (provided by SWITCH). The Phoenix cluster is connected to the Internet through a switch with a total throughput capability of 80 Gbit/s.

---+++ Internal network

Based on InfiniBand QDR/FDR, all nodes are connected to a Voltaire/Mellanox fat-tree network (blocking factor 5) that provides every node with 32 Gbit/s, and 192 Gbit/s of bandwidth between the two farthest nodes. The newer nodes (phase H) are connected to an InfiniBand FDR fabric with uplinks to the Voltaire/Mellanox QDR network. (A short sketch of where the per-node figure comes from is given at the end of this page.) Virtual machines (together with CSCS and Internet access) are connected over Ethernet and attached to the InfiniBand fabric via two Voltaire E4036 transparent bridges, with a maximum capacity of 20 Gbit/s in active/passive mode.

---++ Cluster Management Nodes

Most of the service nodes are virtualized; see FabricInventory#Virtual_Machines for a complete list. The rest run on physical hardware: essentially the WNs, the dCache nodes (core and pools), the scratch filesystem, NFS, and CernVM-FS. The individual services are described in ServiceInformation.

---++ EMI early adopters status

At the moment, CSCS is an early adopter of the following components:

   * CREAM CE
   * APEL
   * WN
   * SLURM
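As referenced in the Internal network section above, the 32 Gbit/s per-node figure is the effective data rate of an InfiniBand QDR 4X link. A minimal Python sketch of the arithmetic, assuming the standard 4X link width and line codings (QDR: 10 Gbit/s per lane with 8b/10b encoding; FDR: 14.0625 Gbit/s per lane with 64b/66b encoding):

<verbatim>
# Effective InfiniBand 4X data rates, derived from the per-lane
# signalling rate and the line-coding overhead (illustrative sketch).

LANES = 4  # a standard 4X InfiniBand link aggregates four lanes

def effective_gbps(lane_signal_gbps, payload_bits, coded_bits):
    """Usable data rate of a 4X link after line-coding overhead."""
    return lane_signal_gbps * LANES * payload_bits / coded_bits

qdr = effective_gbps(10.0, 8, 10)      # QDR, 8b/10b encoding
fdr = effective_gbps(14.0625, 64, 66)  # FDR, 64b/66b encoding

print(f"QDR 4X: {qdr:.1f} Gbit/s")  # -> 32.0 Gbit/s per node, as quoted above
print(f"FDR 4X: {fdr:.1f} Gbit/s")  # -> 54.5 Gbit/s on the phase H fabric
</verbatim>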