<!-- keep this as a security measure:
   * Set ALLOWTOPICCHANGE = Main.TWikiAdminGroup,Main.LCGAdminGroup,Main.EgiGroup
   * Set ALLOWTOPICRENAME = Main.TWikiAdminGroup,Main.LCGAdminGroup
#uncomment this if you want the page only be viewable by the internal people
#   * Set ALLOWTOPICVIEW = Main.TWikiAdminGroup,Main.LCGAdminGroup,Main.ChippComputingBoardGroup
-->
---+ Swiss Grid Operations Meeting on 2016-07-07 at 14:00

   * *Place*: Vidyo (room: Swiss_Grid_Operations_Meeting, extension: 10537598)
   * *External link*: https://vidyoportal.cern.ch/flex.html?roomdirect.html&key=FAEn4zjAba7BqoQ11TGZu66VSDE
   * *Phone gate*: from Switzerland: 0227671400 (portal) + 10537598 (extension) + # (pound sign)
   * *IRC chat*: irc:gridchat.cscs.ch:994#lcg (ask for the password via email)
   * *Switch Vidyo SIP IP*: 137.138.248.204

%TOC%

---++ Site status

---+++ CSCS
   * Xxx
   * Accounting numbers (from scheduler) from last month

---+++ PSI
   * Upgraded my 2 HP CentOS 7 NFSv4 NAS servers to [[http://zfsonlinux.org/][ZoL v0.6.5.7]]:
      * the 1st is the primary NAS, featuring 24 x 600 GB 15k SAS disks
      * the 2nd is the secondary NAS, featuring 12 x 3 TB 7.2k SATA disks (cold backup)
      * both have a dual 10 Gb/s card in LACP bonding mode
   * On the secondary NAS I made ZFS filesystems for dCache, providing ~5 TB to the PSI T3; it would be a shame to use this HW (5y warranty) only for backups: <pre>
[root@t3nfs02 ~]# zfs list -d1
NAME                    USED  AVAIL  REFER  MOUNTPOINT
data01                 1.33T  9.15T  32.0K  /zfs/data01
data01/dcache           100G  9.15T  32.0K  %BLUE%/zfs/data01/dcache%ENDCOLOR%
data01/t3nfs01_data01  1.23T  9.15T  32.0K  /zfs/data01/t3nfs01_data01
data02                 4.33T  6.15T  32.0K  /zfs/data02
data02/dcache           100G  6.15T  32.0K  %BLUE%/zfs/data02/dcache%ENDCOLOR%
data02/t3nfs01_data01  4.23T  6.15T  32.0K  /zfs/data02/t3nfs01_data01
</pre>
   * dCache tuning: <pre>
[root@t3se01 layouts]# grep max /etc/dcache/layouts/t3se01.conf
srm.request.max-requests=400
srm.request.put.max-requests=100
srm.request.get.max-inprogress=100
srm.request.copy.max-inprogress=100
srm.request.max-transfers=100
</pre>
   * [[http://t3mon.psi.ch/ganglia/PSIT3-custom/accounting.txt][Accounting numbers (from scheduler) from last month]]

---+++ UNIBE-LHEP
*Operations*
   * Tough month: several issues with full root partitions on WNs and one Lustre OSS not working well. The cloud cluster also did not perform too well (not followed up with SWITCH yet).

*ATLAS specific operations*
   * ICHEP conference in August => steep rise in analysis jobs (Lustre suffers)
   * One user's jobs were very instrumental in killing the shared file system. Could not discover exactly what was wrong with them and had no time to follow up, so ended up banning analysis temporarily.
   * Also plenty of data-intensive production workloads (mainly derivations) running concurrently (Lustre suffers more).
   * Issue with some event-generation workloads (MadGraph) writing large files in /tmp. Root partitions on the SunBlade nodes are too small to absorb that, even with a very aggressive cleanup cron job. Ended up having to ban evgen+simulation from the site as a temporary measure!

*HammerCloud report [1]*
   * UNIBE-LHEP online >79% (last month); reflects the instabilities mentioned above
   * UNIBE-ID 99%
   * UNIBE-LHEP_CLOUD* 71%

[1] http://dashb-atlas-ssb.cern.ch/dashboard/request.py/siteviewhistorywithstatistics?columnid=562&view=Shifter%20view#time=720&start_date=&end_date=&use_downtimes=false&merge_colors=false&sites=multiple&clouds=ND&site=UNIBE-LHEP,UNIBE-LHEP-UBELIX,UNIBE-LHEP-UBELIX_MCORE,UNIBE-LHEP_CLOUD,UNIBE-LHEP_CLOUD_MCORE,UNIBE-LHEP_MCORE

*ATLAS resource delivery [2]*
   * All jobs: 56% of ATLAS/CH (WallTime), 77% of ATLAS/CH (CPUtime)
   * Good jobs: 69% of ATLAS/CH (WallTime), 79% of ATLAS/CH (CPUtime)

[2] http://dashb-atlas-job-prototype.cern.ch/dashboard/request.py/dailysummary#button=cpuconsumption&sites%5B%5D=CSCS-LCG2&sites%5B%5D=UNIBE-LHEP&sitesCat%5B%5D=All+Countries&resourcetype=All&sitesSort=2&sitesCatSort=0&start=2016-06-01&end=2016-06-30&timerange=daily&granularity=Monthly&generic=0&sortby=0&series=All

   * *Accounting numbers (from scheduler) from last month (Jun 2016)* (includes ce03/CLOUD)
      * WC h: 960084 (ATLAS) - 1172 (t2k.org) - 1104 (uboone) - 16 (ops)
   * *Accounting numbers (from ATLAS dashboard) from last month* (Jun 2016)
      * CPU h: 858693 (May value: 1194137)
      * WC h: 1057196 (May value: 1358408)

---+++ UNIBE-ID
   * Mostly smooth operation
   * Procurement:
      * 80 new servers (76*20 + 4*16 => 1584 new cores); discontinued 144 cores (oldest nodes)
      * installed and provisioned
   * Migration from OGS/GE to Slurm planned for Q4
   * Problems with NAMD jobs (using ibverbs directly) => low-level IB errors from mlx4 regarding QPs
      * no errors with MPI jobs using OpenMPI or the like
      * no errors with storage (GPFS over RDMA)
   * ATLAS specific: large number of random a-rex crashes within the last 2 weeks
      * reason unknown; happened 24x between 2016-06-15 and last Monday; no crash in the last 3 days

---+++ UNIGE
   * Operations:
      * 10 machines added to the batch system (80 cores) + 3 machines replaced:
         * DELL - Intel Xeon @ 2.4 GHz - with 8 cores and 48 GB of memory
      * RAID controller: common problem for our DPM and NFS file servers (happened 3-4 times during the last months)
      * Increased activity from DPNC users running in the batch system (other groups, in addition to ATLAS)
      * Still not in ATLAS production; problems related to memory (hints provided by Gianfranco)
   * Data Management:
      * User datasets from UniGe for ATLASLOCALGROUPDISK at CSCS deleted (space can be moved to ATLASSCRATCHDISK)
      * Some problems with central deletion (fixed), permissions related: https://ggus.eu/index.php?mode=ticket_info&ticket_id=122024
   * [[%ATTACHURL%/g07.2016.06.log][Accounting numbers (from scheduler) from last month]]

---+++ NGI_CH
   * Xxx
   * NGI-CH Open Tickets review

---++ Other topics
   * Topic1
   * Topic2

Next meeting date:

---++ A.O.B.

---++ Attendants
   * CSCS:
   * CMS:
   * ATLAS: Michael Rolli (UNIBE-ID) => absent (ill), but contributed the text above
   * LHCb:
   * EGI:

---++ Action items
   * Item1
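The LACP bonding mentioned in the PSI report could be configured on CentOS 7 along the lines of the sketch below. All interface names and option values are illustrative assumptions, not PSI's actual configuration:

```shell
# Hypothetical /etc/sysconfig/network-scripts/ifcfg-bond0 for a dual
# 10 Gb/s LACP bond (names and values are illustrative only)
DEVICE=bond0
TYPE=Bond
BONDING_MASTER=yes
# mode=802.3ad is the LACP mode; miimon polls link state every 100 ms
BONDING_OPTS="mode=802.3ad miimon=100 xmit_hash_policy=layer3+4"
ONBOOT=yes
BOOTPROTO=none
```

Each 10 Gb/s slave interface would additionally carry `MASTER=bond0` and `SLAVE=yes` in its own ifcfg file, and the switch ports must be configured for LACP on their side.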
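The "very aggressive cleanup cron job" from the UNIBE-LHEP report is not shown in the minutes. A minimal sketch of such a job, assuming a GNU userland and a 60-minute age threshold (both assumptions), with a small self-contained demo directory:

```shell
# Demo of an aggressive scratch-space cleanup (illustrative only; the real
# cron job and its paths/thresholds are not in the minutes).
demo=/tmp/evgen-cleanup-demo
mkdir -p "$demo"
touch -d '2 hours ago' "$demo/old.dat"   # stale file, should be purged
touch "$demo/new.dat"                    # fresh file, should survive
# The cron job body: delete regular files older than 60 minutes,
# without crossing filesystem boundaries.
find "$demo" -xdev -type f -mmin +60 -delete
ls "$demo"   # only new.dat remains
```

In production the `find` line would run from cron against the real scratch path, e.g. every 10 minutes; note that even this cannot save a root partition that a single job fills faster than the cron interval, which matches the experience reported above.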
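Until the cause of the random a-rex crashes in the UNIBE-ID report is understood, a cron-driven watchdog is one possible stopgap. A minimal sketch, assuming the daemon is visible to `pgrep` by process name and that a blind restart is safe (both assumptions); the demo uses a harmless stand-in instead of a real ARC restart:

```shell
# Watchdog sketch: if no process with the given name is running,
# execute the supplied restart command (all names are illustrative).
watchdog() {
    name="$1"; shift
    if ! pgrep -x "$name" >/dev/null 2>&1; then
        "$@"    # process is gone: run the restart command
    fi
}

# Demo with a stand-in: "nosuchdaemon" is certainly not running, so the
# fake restart command fires and leaves a marker file.
watchdog nosuchdaemon sh -c 'echo restarted > /tmp/arex-watchdog-demo.txt'
cat /tmp/arex-watchdog-demo.txt   # prints "restarted"
```

In a real deployment this might be invoked from cron with the actual ARC daemon and service names (to be verified against the site's ARC installation before use); it is a workaround only, and does not replace finding the crash cause.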
Topic attachments:
   * [[%ATTACHURL%/g07.2016.06.log][g07.2016.06.log]] (log, 1.1 K, 2016-07-07 11:05, LuisMarch) - Accounting UniGe June 2016
Topic revision: r7 - 2016-07-07 - GianfrancoSciacca
Copyright © 2008-2024 by the contributing authors. All material on this collaboration platform is the property of the contributing authors.