<!-- keep this as a security measure:
   * Set ALLOWTOPICCHANGE = Main.TWikiAdminGroup,Main.LCGAdminGroup
   * Set ALLOWTOPICRENAME = Main.TWikiAdminGroup,Main.LCGAdminGroup
#uncomment this if you want the page to be viewable only by internal people
#   * Set ALLOWTOPICVIEW = Main.TWikiAdminGroup,Main.LCGAdminGroup
-->
---+!! <img alt="phoenix_services_future.png" height="990" src="%ATTACHURL%/phoenix_services_future.png" style="background-color: transparent; color: #000000; font-size: small;" title="phoenix_services_future.png" width="1735" />

%TOC%

---+ Specifications, standard benchmark values

---++ CPU

HS06 = HEP-SPEC06 benchmark value (see details on the [[http://w3.hepix.org/benchmarks/doku.php/][HEP-SPEC06 webpage]]). (Note: 1 HS06 = 1 GFlop, approximately.)

%RED%Please note: the following figures reflect the INSTALLED capacity. Pledged capacity is 10% lower due to various inefficiencies.%ENDCOLOR%

%STARTSECTION{name="Cpu" type="section"}%
%TABLE{caption="CSCS Tier-2 CPU counts and performance benchmarks"}%
| *Nodes* | *Description* | *Processors* | *Cores/node* | *Total cores* | *HS06/core* | *Total HS06* | *RAM/core* |
| 64 | 2 * Intel Xeon E5-2670, 2.6 GHz | 2 | 32 | 2048 | 10.4 | 21299 | 2 GB |
| 1 | 2 * Intel Xeon E5-2690, 2.9 GHz | 2 | 32 | 32 | 11.2 | 358 | 2 GB |
| 48 | 2 * Intel Xeon E5-2680 v2 @ 2.8 GHz | 2 | 40 | 1920 | 11.1 | 21312 | 3.2 GB |
| 40 | 2 * Intel Xeon E5-2680 v4 @ 2.40 GHz | 2 | 56 | 2240 | 12.01 | 26902 | 2 GB |
| | *Total* | | | %CALC{"$SUM($ABOVE())"}% | | %CALC{"$SUM($ABOVE())"}% | |
%ENDSECTION{name="Cpu" type="section"}%

%STARTSECTION{name="CpuCray" type="section"}%
%TABLE{caption="CSCS Tier-2 LCGonCray CPU counts and performance benchmarks"}%
| *Nodes* | *Description* | *Processors* | *Cores/node* | *Total cores* | *HS06/core* | *Total HS06* | *RAM/core* |
| 25 | Intel Xeon E5-2695 v4 @ 2.10 GHz | | 64 | 1600 | 12.96 | 20736 | 2 GB |
| | *Total* | | | %CALC{"$SUM($ABOVE())"}% | | %CALC{"$SUM($ABOVE())"}% | |
%ENDSECTION{name="CpuCray" type="section"}%

%RED%Last update: August 2017%ENDCOLOR%

These CPU resources are divided among the VOs by fairshare (computed from the last week of usage), plus a few slots reserved for exclusive use:
   * ATLAS: 40% fairshare (max. 2000 running jobs, 4000 queued).
   * CMS: 40% fairshare (max. 2000 running jobs, 4000 queued).
   * LHCb: 20% fairshare (max. 1500 running jobs, 4000 queued).

---++ Storage infrastructure

---+++ Central storage - dCache

The dCache services are provided by 18 servers connected to the storage systems (details in the table below). Please note that the INSTALLED and PLEDGED capacities are equal.

| *Count* | *System* | *Disks* | *Size* | *Protection* | *Usable Space* |
| 8 | IBM DCS3700 | 480 | 3 TB | RAID6 | 1'046.4 TiB |
| 8 | NETAPP E5500 | 480 | 4 TB | RAID6 | 1'396.8 TiB |
| 2 | NETAPP E5600 | 120 | 6 TB | RAID6 | 429.0 TiB |
| 4 | DDN SFA12K | 240 | 3 TB | RAID6 | 528.0 TiB |
| | | *1320* | | | *3'400.2 TiB* |

%RED%Last update: June 2016%ENDCOLOR%

---+++ Scratch - GPFS

The 'scratch' filesystem is mounted by the worker nodes and is provided by a GPFS (IBM Spectrum Scale) cluster of 8 nodes: 4 metadata servers and 4 data servers.

The metadata disks are local SSDs directly attached to the metadata servers. Data redundancy is provided by the GPFS failure group mechanism. Total metadata space is 1.75 TB.

The data disks are provided by a NETAPP E-Series system with 120x 900 GB SAS 10K-RPM drives configured in RAID6. Total data space is 84.30 TB.

%RED%Last update: June 2016%ENDCOLOR%

---++ Network connectivity

---+++ External network

CSCS has a 100 Gbit/s Internet connection (provided by SWITCH). The Phoenix cluster is connected to the Internet via a switch with a total throughput capability of 80 Gbit/s.
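The per-row and total figures in the CPU tables above can be cross-checked directly, since each row's Total HS06 is just nodes x cores/node x HS06/core (the same sum the %CALC{"$SUM($ABOVE())"}% cells compute). A minimal sketch, using only the numbers from the main-partition table:

```python
# Cross-check of the main CPU table: recompute total cores and total HS06
# from nodes x cores/node x HS06/core. All figures come from the table above.
rows = [
    # (nodes, cores_per_node, hs06_per_core)
    (64, 32, 10.4),   # 2 * Intel Xeon E5-2670, 2.6 GHz
    (1,  32, 11.2),   # 2 * Intel Xeon E5-2690, 2.9 GHz
    (48, 40, 11.1),   # 2 * Intel Xeon E5-2680 v2 @ 2.8 GHz
    (40, 56, 12.01),  # 2 * Intel Xeon E5-2680 v4 @ 2.40 GHz
]

total_cores = sum(n * c for n, c, _ in rows)
total_hs06 = sum(n * c * h for n, c, h in rows)

print(total_cores)        # 6240
print(round(total_hs06))  # 69872 (the table's per-row entries are rounded,
                          # so summing those printed values gives 69871)
```

The small discrepancy between 69872 and the sum of the rounded per-row entries is ordinary rounding, not an error in the table.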
---+++ Internal network

The internal network is based on Infiniband QDR/FDR: all nodes are connected to a Voltaire/Mellanox fat-tree topology network (blocking factor 5) that provides every node with 32 Gbit/s, and 192 Gbit/s of bandwidth between the two farthest nodes. The newer nodes (phase H) are connected to an Infiniband FDR fabric with uplinks to the Voltaire/Mellanox QDR network.

Virtual machines (together with CSCS and Internet access) are connected via Ethernet and attached to the Infiniband switch through two Voltaire E4036 transparent bridges, with a maximum capacity of 20 Gbit/s in active/passive mode.

---++ Cluster Management Nodes

Most of the service nodes are virtualized; see FabricInventory#Virtual_Machines for a complete list. The rest are physical: essentially the WNs, the dCache nodes (core and pools), the scratch filesystem, NFS, and CernVM-FS. Individual services are described in ServiceInformation.

---++ EMI early adopters status

At the moment, CSCS is an early adopter of the following components:
   * CREAM CE
   * APEL
   * WN
   * SLURM
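The storage totals above, and the relation between installed and pledged CPU capacity noted earlier, can be cross-checked the same way. A minimal sketch; the 90% pledge factor is only the "10% lower" note applied literally, and the exact pledged figure is not stated on this page:

```python
# Cross-checks of the dCache storage table and the CPU pledge note above.
# Usable-space figures (TiB) come from the storage table; note that for
# storage, INSTALLED and PLEDGED capacities are equal (CPU only is reduced).
dcache_usable_tib = {
    "IBM DCS3700":  1046.4,
    "NETAPP E5500": 1396.8,
    "NETAPP E5600":  429.0,
    "DDN SFA12K":    528.0,
}
total_tib = sum(dcache_usable_tib.values())
print(round(total_tib, 1))  # 3400.2 -- matches the table's total row

# Illustrative pledge estimate from the "pledged capacity is 10% lower"
# note; installed HS06 is the sum of the main CPU table's rounded rows.
installed_hs06 = 21299 + 358 + 21312 + 26902
pledged_hs06 = installed_hs06 * 0.9
print(round(pledged_hs06))  # 62884
```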
---++ Topic attachments

| *Type* | *Attachment* | *History* | *Size* | *Date* | *Who* | *Comment* |
| png | Phoenix_LCG_Services_Nov_2013.png | r1 | 302.7 K | 2013-11-17 - 23:08 | MiguelGila | |
| jpg | Phoenix_LCG_Services_Phase_D.jpg | r2 r1 | 90.9 K | 2011-05-25 - 07:45 | PabloFernandez | |
| jpg | Phoenix_LCG_Services_Phase_E_-_after_move.jpg | r1 | 85.1 K | 2012-05-29 - 09:04 | PabloFernandez | |
| jpg | Phoenix_LCG_Services_Phase_E.jpg | r1 | 79.8 K | 2011-12-09 - 12:58 | PabloFernandez | |
| jpg | Phoenix_LCG_Services_Phase_F.jpg | r1 | 85.2 K | 2012-08-21 - 08:13 | PabloFernandez | |
| jpg | Phoenix_LCG_Services_Phase_G.jpg | r1 | 87.8 K | 2013-02-21 - 10:37 | PabloFernandez | |
| png | phoenix_services_future.png | r1 | 284.2 K | 2018-02-05 - 15:35 | DinoConciatore | Phoenix_LCG_Services_FEB_2018 |
| png | phoenix_services_notit.png | r1 | 204.9 K | 2016-09-21 - 12:33 | DinoConciatore | Phoenix Services 2016 |
| png | phoenix_services.png | r1 | 258.6 K | 2017-12-19 - 08:37 | DinoConciatore | Phoenix_LCG_Services_Nov_2017 |
| pptx | Phoenix_Training_session.pptx | r1 | 45.7 K | 2011-05-30 - 13:35 | PabloFernandez | |
Topic revision: r51 - 2018-02-05 - DinoConciatore