T3 Downtime due to the yearly PSI Compute Center Maintenance -- 05.01.2022 DerekFeichtinger
The downtime will last from 16:00 on Fri, 7 Jan until 10:00 on Mon, 10 Jan.

Monitoring

*Currently the monitoring page does not show real plots, due to the batch system migration and problems with the monitoring node installation. Sorry for the inconvenience. NL.*

  • Overview of PSI Tier3 fileservers
  • Overview of PSI Tier3 services
  • Overview of PSI Tier3 workers

Batch jobs (queuing system)

Current queue / accounting

Number of running and queued jobs:
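A hedged sketch for obtaining these counts on the command line, assuming a Slurm-based batch system (this page mentions a batch system migration but does not name the target system; adjust the commands if your site uses something else):

    # Count running and pending (queued) jobs, assuming Slurm is the batch system
    squeue --noheader --states=RUNNING | wc -l   # running jobs
    squeue --noheader --states=PENDING | wc -l   # queued jobs
    squeue --noheader --user=$USER               # list only your own jobs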


Ganglia WN page

Storage

/pnfs dir

Links:


/pnfs dir I/O queues

  • regular I/O queue movers = dcap/gsidcap/LAN xrootd movers (heavy random I/O for internal analysis);
  • wan I/O queue movers = SRM/gridftp movers (transfers of whole files, also from outside);
  • xrootd I/O queue movers = WAN xrootd movers;
  • To check the I/O queues on the CLI, run from a UI: watch -n 1 -d  lynx --dump --width=200 'http://t3dcachedb:2288/queueInfo' (see the example below). For instance, if your jobs are not progressing, it might be due to a file server with too many queued movers; in this case you can inform the T3 admins by email.
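
The CLI check from the last bullet, spelled out as a small example (hostname, port and URL are taken verbatim from this page; watch -n 1 refreshes the output once per second and -d highlights what changed between refreshes):

    # Poll the dCache mover/queue overview once per second and highlight differences
    watch -n 1 -d  lynx --dump --width=200 'http://t3dcachedb:2288/queueInfo'

If a single file server shows a large number of queued movers in this output, that is a likely reason for stalled jobs.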


/mnt/t3nfs01/data01/{shome,swshare} dirs NEW

User Space Report

Networking and File Transfers (+ PhEDEx)

Links:



Availability reports

These tests are run by the centralized Grid monitoring services and determine whether the T3 or the T2 is considered to be working correctly:
