implemented a PostgreSQL backup script that copies the DB to t3nfs02:/zfs/data01/swshare/postgres
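A minimal sketch of such a script (only the target share is from this report; the staging path, dump user, and schedule are assumptions):

  #!/bin/bash
  # Hypothetical PostgreSQL backup sketch; staging path and retention are assumptions
  set -euo pipefail

  DEST="t3nfs02:/zfs/data01/swshare/postgres"        # target share from this report
  DUMP="/var/tmp/pg_dumpall_$(date +%Y%m%d).sql.gz"  # hypothetical staging file

  # Dump all databases and roles as the postgres superuser, compressed
  su - postgres -c pg_dumpall | gzip > "$DUMP"

  # Copy the dump to the NFS server, then drop the local staging file
  rsync -a "$DUMP" "$DEST/" && rm -f "$DUMP"

Run from cron (e.g. nightly) so the share accumulates dated dumps.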
after the upgrade, dCache logging became too verbose and filled up the /var/log partition; to fix the problem, dCache was restarted on Sun Mar 15 (during a period with no user activity)
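The immediate remediation was essentially the following (the dcache wrapper script ships with dCache; whether systemd units are used instead depends on the install):

  # Check how full the log partition is
  df -h /var/log

  # Restart all dCache domains on the node during a quiet window
  dcache restart

The longer-term fix is lowering the log levels in dCache's logback configuration (typically /etc/dcache/logback.xml).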
Storage Cleaning due to almost no free space left on dCache:
deletion of leftover user data took several days; with too many (hundreds of thousands of) files in single directories, dCache cannot handle the deletions efficiently (see the batched sketch after this section)
the overall cleanup recovered ~30% free space; as a next step, the ~150 TB of mc and data directories need to be checked and cleaned
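A sketch of the batched-deletion approach over an NFS-mounted namespace (the mount point and batch size are assumptions); deleting in small chunks with pauses avoids hitting the dCache namespace with one huge operation:

  #!/bin/bash
  # Hypothetical batched cleanup; DIR and BATCH are assumptions
  DIR="/pnfs/example.ch/store/user/olduser"   # hypothetical target directory
  BATCH=1000

  # Delete files in small batches, pausing between rounds to limit namespace load
  while files=$(find "$DIR" -type f | head -n "$BATCH"); [ -n "$files" ]; do
      echo "$files" | xargs -d '\n' rm -f
      sleep 10
  done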
Slurm:
added a QoS limit (500 CPUs per user) to the quick partition
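Roughly, using Slurm's accounting tools (the QoS name is an assumption; the 500-CPU-per-user cap is from this report):

  # Create a QoS and cap each user at 500 CPUs
  sacctmgr add qos quick_qos
  sacctmgr modify qos quick_qos set MaxTRESPerUser=cpu=500

  # Attach it to the partition in slurm.conf, e.g.
  #   PartitionName=quick ... QOS=quick_qos
  # then apply the change
  scontrol reconfigure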
EOS test configuration (enabled on Worker Nodes and UIs): no user feedback since February
Monitoring:
manually added the non-standard /work server t3nfs02 to ganglia (see the sketch after this section)
solved the SELinux problem (the cause of the HTTP access errors) on the ganglia server; it now runs stably
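Adding a non-standard host to ganglia typically means installing gmond on it and pointing it at the cluster aggregator; a sketch for t3nfs02 (package name and aggregator host are assumptions; 8649 is ganglia's default port):

  # On t3nfs02: install the ganglia monitoring daemon
  yum install -y ganglia-gmond

  # In /etc/ganglia/gmond.conf, add a send channel for the cluster aggregator
  # (hypothetical host):
  #   udp_send_channel {
  #     host = t3ganglia01
  #     port = 8649
  #   }

  systemctl enable --now gmond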
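For the SELinux issue, the usual diagnosis/fix pattern looks like this (which fix applies depends on the actual AVC denials; the commands below are standard SELinux tooling, and the RRD path shown is ganglia's default):

  # Inspect recent SELinux denials for an explanation
  ausearch -m avc -ts recent | audit2why

  # Typical fixes, depending on the denial: restore the expected file
  # contexts on ganglia's RRD data ...
  restorecon -Rv /var/lib/ganglia/rrds
  # ... or persistently grant httpd the network access it was denied
  setsebool -P httpd_can_network_connect on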