<!-- keep this as a security measure:
   * Set ALLOWTOPICCHANGE = Main.TWikiAdminGroup, Main.LCGAdminGroup
   * Set ALLOWTOPICRENAME = Main.TWikiAdminGroup, Main.LCGAdminGroup
#uncomment this if you want the page to be viewable only by internal people
#   * Set ALLOWTOPICVIEW = Main.TWikiAdminGroup, Main.LCGAdminGroup
-->
---+!! <img alt="phoenix_services_future.png" height="990" src="%ATTACHURL%/phoenix_services_future.png" title="phoenix_services_future.png" width="1735" />

%TOC%

---+ Specifications, standard benchmark values

---++ CPU

HS06 = HEP-SPEC06 benchmark value (see details on the [[http://w3.hepix.org/benchmarks/doku.php/][HEP-SPEC06 webpage]]). Note: 1 HS06 is approximately 1 GFlop.

%RED%Please note: the following figures reflect the INSTALLED capacity. Pledged capacity is 10% lower due to various inefficiencies.%ENDCOLOR%

%STARTSECTION{name="Cpu" type="section"}%
%TABLE{caption="CSCS Tier-2 LCGonCray CPU counts and performance benchmarks"}%
| *Nodes* | *Description* | *Processors* | *Cores/node* | *Total cores* | *HS06/core* | *Total HS06* | *RAM/core* |
| 172 | Xeon E5-2695 v4 @ 2.10GHz | | 68 | 3876 | 12.96 | 50232 | 2 GB |
| | Total | | | %CALC{"$SUM($ABOVE())"}% | | %CALC{"$SUM($ABOVE())"}% | |

Processor: Intel® Xeon E5-2695 v4 @ 2.10GHz
%ENDSECTION{name="Cpu" type="section"}%

These CPU resources are divided among the VOs by fairshare (based on usage data from the last week), plus a few slots reserved for exclusive use; a sketch of how these shares could map onto SLURM accounting appears at the end of this page:
   * ATLAS: 40% fairshare (max. 2000 running jobs, 4000 in queue).
   * CMS: 40% fairshare (max. 2000 running jobs, 4000 in queue).
   * LHCb: 20% fairshare (max. 1500 running jobs, 4000 in queue).

_Last update: August 2018_

---++ Storage infrastructure

---+++ Central Storage - dCache

The dCache services are provided by 12 servers (10 "Storage Elements" and 2 "Head Nodes"). The Storage Elements are connected to the storage systems through the CSCS SAN.

The total space available on dCache is *4.6 PB*, distributed 40% / 40% / 20% among ATLAS, CMS, and LHCb.

_Last update: November 2019_

---+++ Scratch - Spectrum Scale

The 'scratch' filesystem is made available to the compute nodes through dedicated DVS nodes and is provided by a Spectrum Scale cluster of 16 nodes (4 metadata + 12 data). Metadata is stored on flash storage.

The filesystem features policy-based pool tiering:
   * Tier 1: SSD-based data pool (90 TB), where all files are initially written.
   * Tier 2: HDD-based data pool (380 TB), to which less frequently used data is migrated (this mechanism is transparent to the clients).

_Last update: November 2019_

---++ Network connectivity

---+++ External network

CSCS has a 100 Gbit/s Internet connection (provided by SWITCH). The Phoenix cluster is connected to the Internet via a switch with a total throughput capability of 80 Gbit/s.

---+++ Internal network

Based on Infiniband QDR/FDR, all nodes are connected in a Voltaire/Mellanox fat-tree topology (blocking factor 5) that provides every node with 32 Gbit/s, and 192 Gbit/s of bandwidth between the two farthest nodes. The new nodes (phase H) are connected to an Infiniband FDR fabric with uplinks to the Voltaire/Mellanox QDR network.
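To make the blocking factor concrete, here is a minimal back-of-the-envelope sketch in Python. The 32 Gbit/s link rate and the blocking factor are taken from the paragraph above; the worst-case derivation (uniform all-to-all traffic) is an illustrative assumption, not a measured value.

<verbatim>
# Illustrative arithmetic for the Phoenix internal Infiniband fabric.
# Figures come from the "Internal network" section above; the worst-case
# estimate assumes uniform all-to-all traffic and is not a benchmark.

QDR_LINK_GBITS = 32      # per-node QDR bandwidth quoted above
BLOCKING_FACTOR = 5      # fat-tree oversubscription quoted above

def worst_case_per_node(link_gbits: float, blocking: float) -> float:
    """Usable per-node bandwidth when all nodes push traffic across the
    oversubscribed uplinks at once: link rate divided by blocking factor."""
    return link_gbits / blocking

print(f"Quiet fabric : {QDR_LINK_GBITS} Gbit/s per node")
print(f"All-to-all   : {worst_case_per_node(QDR_LINK_GBITS, BLOCKING_FACTOR):.1f} Gbit/s per node")
</verbatim>

Under these assumptions a fully loaded fabric would still leave roughly 6.4 Gbit/s per node for traffic crossing the oversubscribed tiers of the tree.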
Virtual machines (together with CSCS and Internet access) are connected via Ethernet and attached to the Infiniband switch through two Voltaire E4036 transparent bridges, with a maximum capacity of 20 Gbit/s in active/passive mode.

---++ Cluster Management Nodes

Most of the service nodes are virtualized; see FabricInventory#Virtual_Machines for the complete list. The remaining services run on physical hardware: essentially the WNs, the dCache nodes (core and pools), the scratch filesystem, NFS, and CernVM-FS.

Individual services are described in ServiceInformation.

---++ EMI early adopters status

At the moment, CSCS is an early adopter of the following components:
   * CREAM CE
   * APEL
   * WN
   * SLURM
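As a closing illustration, the VO fairshare percentages and job caps listed under _CPU_ above could be expressed as SLURM accounting limits. The sketch below is hypothetical: the account names and the choice of =sacctmgr= options are assumptions for illustration, not the production configuration of this cluster.

<verbatim>
# Hypothetical sketch: rendering the VO shares and job caps from the
# "CPU" section above as sacctmgr calls. Account names are assumptions;
# the production setup may differ.

SHARES = {
    # account: (fairshare %, max running jobs, max jobs in queue)
    "atlas": (40, 2000, 4000),
    "cms":   (40, 2000, 4000),
    "lhcb":  (20, 1500, 4000),
}

def sacctmgr_commands(shares):
    """One sacctmgr call per VO account, setting the fairshare weight
    and the association-level job limits."""
    for account, (share, max_run, max_queue) in shares.items():
        yield (f"sacctmgr -i modify account {account} set "
               f"fairshare={share} grpjobs={max_run} grpsubmitjobs={max_queue}")

assert sum(s for s, _, _ in SHARES.values()) == 100  # shares must cover the cluster
for cmd in sacctmgr_commands(SHARES):
    print(cmd)
</verbatim>

In SLURM, =GrpJobs= caps the jobs an account may have running at once and =GrpSubmitJobs= caps running plus pending jobs, which roughly matches the "max N running, M in queue" phrasing used above.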