Swiss Grid Operations Meeting on 2016-07-07 at 14:00
Site status
CSCS
- Xxx
- Accounting numbers (from scheduler) from last month
PSI
- Upgraded my 2 HP CentOS 7 NFSv4 NASes to ZoL v0.6.5.7
- 1st NAS is the primary, featuring 24 SAS disks (15k, 600GB)
- 2nd NAS is the secondary, featuring 12 SATA disks (7.2k, 3000GB) (cold backup)
- both feature a dual 10Gb/s card in LACP bonding mode
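For reference, LACP bonding of the two 10Gb/s ports on a CentOS 7 box is normally driven by a pair of ifcfg files; a minimal sketch (device names, addressing and hash policy are assumptions, not the actual NAS config):

```
# /etc/sysconfig/network-scripts/ifcfg-bond0 -- illustrative values only
DEVICE=bond0
TYPE=Bond
BONDING_MASTER=yes
BONDING_OPTS="mode=802.3ad miimon=100 xmit_hash_policy=layer3+4"   # 802.3ad = LACP
BOOTPROTO=none
IPADDR=192.0.2.10
PREFIX=24
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-ens1f0 -- repeat for the second port (ens1f1)
DEVICE=ens1f0
MASTER=bond0
SLAVE=yes
ONBOOT=yes
```

The switch side must have a matching LACP port-channel for the 802.3ad aggregate to come up.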
* dCache on ZoL
* on the secondary NAS I'm going to create a ZFS fs for dCache and provide ~5TB to the PSI T3; it's a shame to use this HW only for backups (5y warranty)
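Carving out such a share is a couple of zfs commands; a sketch assuming the backup pool is called data02 and the ~5TB is enforced as a quota (names and values are illustrative, not the actual setup):

```shell
# Illustrative only: create a dCache filesystem on the backup pool and cap it at ~5TB
zfs create -o mountpoint=/zfs/data02/dcache data02/dcache
zfs set quota=5T data02/dcache
zfs get quota,mountpoint data02/dcache   # verify the settings
```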
* again on the secondary NAS I made ZFS filesystems for dCache:
  [root@t3nfs02 ~]# zfs list -d1
  NAME                    USED  AVAIL  REFER  MOUNTPOINT
  data01                 1.33T  9.15T  32.0K  /zfs/data01
  data01/dcache           100G  9.15T  32.0K  /zfs/data01/dcache
  data01/t3nfs01_data01  1.23T  9.15T  32.0K  /zfs/data01/t3nfs01_data01
  data02                 4.33T  6.15T  32.0K  /zfs/data02
  data02/dcache           100G  6.15T  32.0K  /zfs/data02/dcache
  data02/t3nfs01_data01  4.23T  6.15T  32.0K  /zfs/data02/t3nfs01_data01
* dCache tuning:
  [root@t3se01 layouts]# grep max /etc/dcache/layouts/t3se01.conf
  srm.request.max-requests=400
  srm.request.put.max-requests=100
  srm.request.get.max-inprogress=100
  srm.request.copy.max-inprogress=100
  srm.request.max-transfers=100
* Accounting numbers (from scheduler) from last month

---+++ UNIBE-LHEP
*Operations*
   * Tough month: several issues with full root partitions on WNs and one Lustre OSS not working well. The cloud cluster also didn't perform too well (didn't follow up with SWITCH yet)
*ATLAS specific operations*
   * ICHEP conference in August => steep rise in analysis jobs (Lustre suffers)
   * One user's jobs were very instrumental in killing the shared file system. Could not discover exactly what was wrong with them and had no time to follow up, so ended up banning analysis temporarily
   * Also plenty of data-intensive production workloads (mainly derivations) running concurrently (Lustre suffers more)
   * Issue with some event generation workloads (MadGraph) writing large files in /tmp. Root partitions on the SunBlade nodes are too small to absorb that, even with a very aggressive cleanup cron job. Ended up having to ban evgen+simulation from the site as a temporary measure!
*HammerCloud report [1]*
   * UNIBE-LHEP online >79% (last month). Reflects the instabilities mentioned above
   * UNIBE-ID 99%
   * UNIBE-LHEP_CLOUD* 71%
[1] http://dashb-atlas-ssb.cern.ch/dashboard/request.py/siteviewhistorywithstatistics?columnid=562&view=Shifter%20view#time=720&start_date=&end_date=&use_downtimes=false&merge_colors=false&sites=multiple&clouds=ND&site=UNIBE-LHEP,UNIBE-LHEP-UBELIX,UNIBE-LHEP-UBELIX_MCORE,UNIBE-LHEP_CLOUD,UNIBE-LHEP_CLOUD_MCORE,UNIBE-LHEP_MCORE
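The "very aggressive cleanup cron job" mentioned for /tmp above is typically just a periodic find sweep; a minimal sketch (path, age threshold and schedule are assumptions, not the site's actual job):

```shell
#!/bin/sh
# Sketch of an aggressive /tmp sweeper (illustrative thresholds, not the site's actual job).
# Deletes regular files untouched for more than AGE_MIN minutes under TARGET,
# then prunes any directories left empty. -xdev keeps the sweep on one filesystem.
sweep_tmp() {
    target=${1:-/tmp}
    age_min=${2:-60}
    find "$target" -xdev -type f -mmin +"$age_min" -delete
    find "$target" -xdev -mindepth 1 -type d -empty -delete
}

# A crontab entry could then run it every 10 minutes, e.g.:
#   */10 * * * * root /usr/local/sbin/tmp-sweep.sh
```

A job that writes large files faster than the sweep interval will still fill the partition, which is consistent with the evgen+simulation ban described above.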
*ATLAS resource delivery [2]*
   * All jobs: 56% of ATLAS/CH (WallTime), 77% of ATLAS/CH (CPUtime)
   * Good jobs: 69% of ATLAS/CH (WallTime), 79% of ATLAS/CH (CPUtime)
[2] http://dashb-atlas-job-prototype.cern.ch/dashboard/request.py/dailysummary#button=cpuconsumption&sites%5B%5D=CSCS-LCG2&sites%5B%5D=UNIBE-LHEP&sitesCat%5B%5D=All+Countries&resourcetype=All&sitesSort=2&sitesCatSort=0&start=2016-06-01&end=2016-06-30&timerange=daily&granularity=Monthly&generic=0&sortby=0&series=All
   * Accounting numbers (from scheduler) from last month (Jun 2016) (includes ce03/CLOUD)
      * WC h: 960084 (ATLAS) - 1172 (t2k.org) - 1104 (uboone) - 16 (ops)
   * Accounting numbers (from ATLAS dashboard) from last month (Jun 2016)
      * CPU h: 858693 (May value: 1194137)
      * WC h: 1057196 (May value: 1358408)

---+++ UNIBE-ID
   * Mostly smooth operation
   * Procurement:
      * 80 new servers (76*20 + 4*16 => 1584 new cores); discontinued 144 cores (oldest nodes)
      * installed and provisioned
   * Migration from OGS/GE => Slurm planned for Q4
   * Problems with NAMD jobs (using ibverbs directly) => low-level IB errors from mlx4 regarding QPs
      * no errors with MPI jobs using Open MPI or the like
      * no errors with storage (GPFS over RDMA)
   * ATLAS specific: large number of random a-rex crashes within the last 2 weeks
      * reason unknown; happened 24x between 2016-06-15 and last Monday; no crash in the last 3 days
UNIGE
* Operations
   * 10 machines added into the batch system (80 cores) + 3 machines replaced:
      * DELL - Intel Xeon @ 2.4 GHz - with 8 cores and 48 GB of memory
   * RAID controller: a common problem for our DPM and NFS file servers (it happened some 3-4 times during the last months)
   * Increased activity from DPNC users running in the batch system (other groups, in addition to ATLAS)
   * Still not in ATLAS production; problems related to memory (hints provided by Gianfranco)
* Data Management:
* Accounting numbers (from scheduler) from last month
NGI_CH
- Xxx
- NGI-CH Open Tickets review
Other topics
Next meeting date:
A.O.B.
Attendants
* ATLAS: Michael Rolli (UNIBE-ID) => absent due to illness; nevertheless contributed some of the text above
* ATLAS:
* LHCb:
Action items