
Phoenix Cluster Road Map

This page describes where our cluster stands, where it is heading, and where it came from. It is especially useful for getting an overview of the changes made so far and of the plans for the future.


Phase Q

| Resource | Provided | Pledged 2021 | Notes |
| Computing (HS06) | 220000 | 198400 | ATLAS 74240 (37.42%), CMS 68480 (34.52%), LHCb 55680 (28.07%) |
| Storage (TB) | 6155 | 6155 | 2574 ATLAS, 2490 CMS, 1091 LHCb |
Please note:
  • Pledges are calculated using projections from the RRB (Resource Review Board) and agreed with the SNF.
  • The resources above differ from CRIC due to a misunderstanding. This was clarified with the WLCG office in November 2020.
  • The extension of the foreseen storage capacity was not available until July 2021 due to problems in the chip supply chain.
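The per-experiment shares in the table above follow directly from the pledged total; a minimal sketch of the arithmetic (all figures taken from the table):

```python
# Per-VO computing shares pledged for 2021 (figures from the table above).
pledged_total = 198400  # HS06
shares = {"ATLAS": 74240, "CMS": 68480, "LHCb": 55680}

# The three shares add up exactly to the pledged total.
assert sum(shares.values()) == pledged_total

# Each quoted percentage is simply share / total.
percentages = {vo: round(100 * hs06 / pledged_total, 2)
               for vo, hs06 in shares.items()}
assert percentages["ATLAS"] == 37.42
assert percentages["CMS"] == 34.52
# LHCb works out to 28.06%; the table quotes 28.07% (rounding).
```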

Phase P

This is the first year in which all compute and storage were delivered as shared resources.

| Resource | Provided | Pledged 2020 | Notes |
| Computing (HS06) | 179193 | 161200 | 40% ATLAS, 40% CMS, 20% LHCb |
| Storage (TB) | 5310 | 5300 | 40% ATLAS, 40% CMS, 20% LHCb |
  • Compute resources: expanded from 172 to 218 nodes

Phase O

The Phoenix cluster was decommissioned, and all compute was provided as shared resources on Piz Daint.

| Resource | Provided | Pledged 2019 | Notes |
| Computing (HS06) | 141096 | 125000 | 40% ATLAS, 40% CMS, 20% LHCb |
| Storage (TB) | 4617 | 4600 | 40% ATLAS, 40% CMS, 20% LHCb |
  • Compute resources: complete replacement of the Phoenix compute nodes by shared resources, which expanded from 57 to 172 nodes

Phase N

| Resource | Provided | Pledged 2018 | Notes |
| Computing (HS06) | 111100 | 96000 | 40% ATLAS, 40% CMS, 20% LHCb |
| Storage (TB) | 4015 | 4000 | 40% ATLAS, 40% CMS, 20% LHCb |
The following activities cover the 2017-2018 period:
  • Completely “puppetized” all services and WNs (worker nodes; Puppet is a configuration management tool)
  • Use of CSCS central services for Storage, GPFS, logs, VMs and (partially) Compute
  • Started using the ARC data transfer service for more efficient job submission with data staging
  • Installed new 4x40G gateways to maximize usage of the CSCS internet link
  • Added an SSD caching layer to GPFS for improved efficiency with small files
  • Elastic resource allocation for Tier-0 spill-over tests (using HPC resources and leveraging the previous LHConCRAY project results)
  • Updated storage nodes to IPv6 and dCache 3.X (10th October, 2018)

Phase M

This is the first phase in which shared resources from Piz Daint were added to the Phoenix cluster; the resulting totals are shown below.

| Resource | Provided | Pledged 2017 | Notes |
| Computing (HS06) | 87400 | 78000 | 40% ATLAS, 40% CMS, 20% LHCb |
| Storage (TB) | 4015 | 4000 | 40% ATLAS, 40% CMS, 20% LHCb |
  • A deviation from the original plan brought forward 400 TB from 2018 to 2017, due to the increasing interest of ATLAS in becoming a Nucleus in Switzerland (the pledge was 3600 TB before; the 40/40/20 storage split is applied before the extra 400 TB)
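The note above amounts to the following arithmetic, sketched here with the figures from the table. It assumes the anticipated 400 TB is assigned entirely to ATLAS, which the note suggests but does not state outright:

```python
# Storage pledge split for 2017 (figures from the note above).
base_pledge = 3600  # TB, the pledge before the anticipated extension
extra_atlas = 400   # TB brought forward from 2018

# The 40/40/20 split is applied to the base pledge only (integer percent).
split = {"ATLAS": 40, "CMS": 40, "LHCb": 20}
shares = {vo: base_pledge * pct // 100 for vo, pct in split.items()}

# Assumption: the extra 400 TB then goes to ATLAS.
shares["ATLAS"] += extra_atlas

assert shares == {"ATLAS": 1840, "CMS": 1440, "LHCb": 720}
assert sum(shares.values()) == 4000  # matches the 2017 pledge in the table
```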

Phase L

| Resource | Provided | Pledged 2016 | Notes |
| Computing (HS06) | 78738 | 49000 | 40% ATLAS, 40% CMS, 20% LHCb |
| Storage (TB) | 3853 | 3500 | 40% ATLAS, 40% CMS, 20% LHCb |
The following activities cover the 2015-2016 period:
  • Renewed cluster configuration with Puppet and complete reinstallation of the computing resources (nodes, scheduler, middleware) with a new, state-of-the-art OS/software stack
  • Decommissioning of all CreamCEs in favor of Nordugrid ARC middleware, for simplification
  • Installation of a new central logging facility with Graylog, Elasticsearch and Kibana
  • All virtual machines migrated to the central CSCS VMware facility
  • Installation of new capacity as usual. An exceptional, unexpected market and technology advantage presented itself in early 2016 that allowed CSCS to increase the total compute raw capacity from 42 to 69 kHS06, instead of the planned 59 kHS06.

Phase K

| Resource | Provided | Pledged 2015 | Notes |
| Computing (HS06) | 66095 | 49000 | 40% ATLAS, 40% CMS, 20% LHCb |
| Storage (TB) | 3340 | 3070 | 44% ATLAS, 44% CMS, 12% LHCb |

Phase J

| Resource | Provided | Pledged 2014 | Notes |
| Computing (HS06) | 39098 | 35000 | 40% ATLAS, 40% CMS, 20% LHCb |
| Storage (TB) | 2081 | 2300 | 44% ATLAS, 44% CMS, 12% LHCb |

Phase H

Deployed. Consists of:

  • 6800 HS06 computing power, which corresponds to 16 Intel IvyBridge compute nodes.
  • 280 TB of central storage, or one building block with DCS3700 as of beginning of 2013.
  • 8 GB/s scratch filesystem, to replace PhaseC GPFS, consisting of 6 IBM servers and 2 pairs of NetApp E5500B controllers with 60 disks each.
  • 2 Mellanox Infiniband FDR switches.
  • 2 Cisco 2248 Network switches.
  • 2 New virtualisation servers.

| Resource | Provided | Pledged 2013 | Notes |
| Computing (HS06) | 28826 | 26000 | 40% ATLAS, 40% CMS, 20% LHCb |
| Storage (TB) | 1919 | 1800 | 44% ATLAS, 44% CMS, 12% LHCb |
Main achievements include:
  • Migration of Phoenix from Torque/Moab to SLURM
  • New accounting system (APEL4)
  • Acceptance of multicore jobs.
  • Migration of the SE to dCache 2.6

Phase G

Announced on the 5th of March 2013, and deployed in stages since the end of 2012, its main purpose was to meet the 2013 pledges.

  • The PhaseC Thors had to be decommissioned due to increasing issues with them; they were replaced first by hardware lent by CSCS, and then by six IBM DCS3700 controllers (three couplets) with 60 x 3 TB disks each. Two of those controllers provide storage space (279 TB) that is not pledged, in case it has to be moved to Scratch in an emergency.
  • Various sets of compute nodes, purchased directly from other projects (Weisshorn and Pilatus), were gradually introduced, totalling 20 nodes (+6660 HS06), similar to those already introduced after the move to Lugano.
  • Full replacement of the Ethernet infrastructure in the cluster

| Resource | Provided | Pledged 2013 | Notes |
| Computing (HS06) | 24198 | 23000 | 40% ATLAS, 40% CMS, 20% LHCb |
| Storage (TB) | 1303 (+279) | 1300 | 50% ATLAS, 50% CMS |

  • Reinstallation of WNs to UMD-1
  • CernVMFS set up for all three main VOs.
  • Installation of two new virtual machine hosts, with SSD drives, for good DB performance.
  • Upgrade of dCache from version 1.9.5 to 1.9.12
  • New protocol supported: xRootd door installed in dCache
  • Warranty extended to 4 years for Storage components; the rest remains at 3 years.
  • Phoenix procedures training course given to other CHIPP sysadmins
  • ActivitiesOverview2013

Phase F

Deployed on the 21st of August 2012, as a compute expansion to meet the 2012 pledges.

  • An important milestone is also the move to a new datacenter in Lugano. BlogLuganoBuildingMove
  • The old PhaseC compute nodes were decommissioned after the move (not relocated), and replaced by 36 SandyBridge Intel nodes (2 x 8 HT cores, 2.6 GHz, 64 GB RAM, 333 HS06 each)
  • Later, a small increase in compute, consisting of 10 more similar nodes, was purchased in July to meet the pledges.
  • The Infiniband network was replaced by 5 Voltaire 4036 switches.

| Resource | Provided | Pledged 2012 | Notes |
| Computing (HS06) | 17538 | 17400 | 40% ATLAS, 40% CMS, 20% LHCb |
| Storage (TB) | 1095 | 1090 | 50% ATLAS, 50% CMS |

  • Simplification of the network infrastructure, moving from a mixed Ethernet-Infiniband bridged network to pure Infiniband with a transparent bridge.
  • The external network connection was also upgraded to 10 Gbit/s
  • Three new services were added: CernVMFS, Argus03 and Cream03.
  • The cluster was rated the most efficient ATLAS site in HammerCloud tests.

Phase E

Deployed on the 20th of December 2011, it consisted of:

  • Decommissioning of the old Thumpers from PhaseA and PhaseB (475 TB)
  • Addition of IBM DS3500 disk space (+405 TB, plus 95 TB that came from the previous testing GPFS)
  • Upgrade of the 10 AMD nodes from PhaseD to Interlagos (16 cores/CPU) (+250 HS06)
  • Complete removal of Lustre as a Scratch filesystem, in favor of GPFS, using the PhaseC Lustre hardware, with two SSD PCI cards for metadata.

| Resource | Provided | Pledged 2011 | Notes |
| Computing (HS06) | 13740 | 13550 | 40% ATLAS, 40% CMS, 20% LHCb |
| Storage (TB) | 1095 | 976 | 50% ATLAS, 50% CMS |

  • NewEMIRequirements: work plans within the new EMI infrastructure; things we have to adapt to.
  • Some hardware was given to other Swiss universities: BlogPhaseBCHardwareGiveaway
  • The previous Scratch filesystem was replaced by GPFS, with metadata on solid-state drives, greatly improving the reliability and speed of file metadata operations.
  • Began migrating some virtual services to KVM, with an integrated management tool (Convirture) and the possibility of live migration.

Phase D

In production from the 9th of March 2011 as a plain extension to PhaseC. It consisted of:

  • 10 new compute nodes with AMD 6172 12-core processors and 3 GB RAM,
  • Three IBM DS3500 controllers: two dedicated to dCache (90 x 2 TB disks each) and one to Scratch. This new scratch expansion was used as a test production instance of GPFS, to evaluate it as an alternative to Lustre, which was giving significant trouble.

| Resource | Provided | Pledged 2011 | Notes |
| Computing (HS06) | 13488 | 13550 | 40% ATLAS, 40% CMS, 20% LHCb |
| Storage (TB) | 976 | 976 | 50% ATLAS, 50% CMS |

Phase C

Delivered in November 2009 and put into production on the 31st of March 2010. PhaseC was a total rebuild of the computing/service nodes and an expansion of the existing storage with 10 new dCache pools (Thors). It consisted of 96 compute nodes (SunBlade E5540 installed with SL5) and 38 disk nodes (28 Thumper X4500 with Solaris, plus 10 Thor X4540 with Linux, 40 x 1 TB disks).

Initially the nodes ran without HyperThreading, providing 768 job slots (8 cores per node, 3 GB RAM/core), but this did not deliver enough computing power. HyperThreading was therefore enabled (even though only 12 of the 16 logical cores were used), which improved performance and increased the number of slots to 1152, at the cost of reducing memory to 2 GB per slot.
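The slot and memory figures above can be verified with a short calculation (node counts and per-core figures taken from the paragraph):

```python
# Slot arithmetic for PhaseC (figures from the paragraph above).
nodes = 96
cores_per_node = 8               # physical cores, HyperThreading disabled
ram_per_node_gb = 3 * cores_per_node  # 3 GB per physical core

slots_no_ht = nodes * cores_per_node
assert slots_no_ht == 768        # initial slot count

# With HyperThreading enabled, 12 of the 16 logical cores were used per node.
slots_per_node_ht = 12
slots_ht = nodes * slots_per_node_ht
assert slots_ht == 1152          # slot count after enabling HT

# The same RAM spread over more slots leaves 2 GB per slot.
assert ram_per_node_gb / slots_per_node_ht == 2
```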

| Resource | Provided | Pledged 2010 | Notes |
| Computing (HS06) | 11520 | 10560 | 40% ATLAS, 40% CMS, 20% LHCb |
| Storage (TB) | 910 | 910 | 50% ATLAS, 50% CMS |

Phase B

Delivered by Sun in September 2008. It was a planned expansion over PhaseA, and consisted of a total of 60 compute nodes (SunBlade X2200 installed with SLC4) and 29 disk nodes (Thumper X4500 with Solaris, 48 x 500 GB disks each).

| Resource | Provided | Pledged 2009 | Notes |
| Computing (HS06) | ~6000 (1574.4 kSI2000) | 5760 |  |
| Storage (TB) | 517 | 490 |  |

Phase A

First phase of Phoenix. Delivered by Sun on the 20th of November 2007 (the tender was released in November 2006; the plan was to build three phases, to be delivered by the end of 2007, 2008 and 2009). It was composed of 30 compute nodes (SunBlade X2200 installed with SLC4) and 12 disk nodes (Thumper X4500 with Solaris, 48 x 500 GB disks each).

The second MoU was signed on the 27th of March 2007.

| Resource | Provided | Pledged 2008 | Notes |
| Computing (HS06) | ~3100 (787 kSI2000) | ~2700 (680 kSI2000) |  |
| Storage (TB) | 280 | 225 |  |
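The early phases quote capacity in kSI2000 rather than HS06. The paired figures in these tables imply a conversion factor of roughly 4 HS06 per kSI2000, which matches the rule of thumb WLCG used during the benchmark transition. A minimal sketch (the factor is approximate, not exact):

```python
# Approximate kSI2000 -> HS06 conversion implied by the tables above.
HS06_PER_KSI2K = 4  # rough rule-of-thumb factor, not an exact benchmark ratio

def ksi2k_to_hs06(ksi2k):
    """Convert kSI2000 capacity to approximate HS06."""
    return HS06_PER_KSI2K * ksi2k

# PhaseA figures from the table: 787 kSI2000 provided, quoted as ~3100 HS06.
assert abs(ksi2k_to_hs06(787) - 3100) < 100
# Phase 0 pledge: 200 kSI2000, quoted as ~800 HS06.
assert ksi2k_to_hs06(200) == 800
```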

Phase 0

This was the test cluster deployed to evaluate alternatives.

| Resource | Provided | Pledged 2007 | Notes |
| Computing (HS06) |  | ~800 (200 kSI2000) |  |
| Storage (TB) |  | 61 |  |

Dalco Cluster

The first proposal for a Swiss Tier-2 was written in 2004. The first machines from Dalco were deployed in 2005, consisting of 15 compute nodes (2 x Intel Xeon E7520 @ 3 GHz), 1 storage node (4 x 4.8 TB) and 1 master node. The first MoU was written in May 2005.

| Resource | Provided | Pledged 2006 | Notes |
| Computing (HS06) | ~180 (45 kSI2000) | ~180 (45 kSI2000) |  |
| Storage (TB) | 9 | 9 |  |
-- PabloFernandez - 2011-01-20
Topic revision: r23 - 2021-06-15 - NickCardo