
Policies for the resource allocation on the PSI Tier-3

These policies were agreed upon in the first and second Steering Board meetings.

  1. We organize the users along physics groups.
  2. For the purposes of cluster organization, every user must be mapped to exactly one physics group (even though the user may work for several).
  3. The resources available to a physics group consist of the combined resources of its members. How these resources are used is up to the physics group's internal organization.
  4. Each physics group has one Responsible User who
    • takes care of managing the group's resources (e.g. deciding which data sets to delete),
    • is the single point of contact for the cluster administrators on organizational issues,
    • can propose a guest user (see below).
  5. The resources are equipartitioned among users.

User Interface (UI) policies

| OS    | UI hostname | User group | Notes                               |
| RHEL7 | t3ui01      | PSI        | 128 GB RAM, 72 cores, 5 TB /scratch |
| RHEL7 | t3ui02      | ETHZ       | 128 GB RAM, 72 cores, 5 TB /scratch |
| RHEL7 | t3ui03      | UNIZ       | 128 GB RAM, 72 cores, 5 TB /scratch |

Home policies

  • Every user has a home directory /t3home/${USER} with a default 10 GB quota, intended for software development, documentation, etc.
    It is configured with one daily, one weekly and one monthly snapshot, which do not count against the quota and are found at /t3home/${USER}/.snapshot
To check your quota allotment on /t3home use the following command:
~% quota -s -f /t3home
Disk quotas for user username (uid user_id): 
     Filesystem  blocks   quota   limit   grace   files   quota   limit   grace
t3nfs:/t3/home1    704M   10240M   11264M          927   4295m   4295m        
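The snapshots mentioned above let you recover an accidentally deleted or overwritten home file yourself. A minimal sketch follows; the snapshot subdirectory name (e.g. daily.0) is an assumption here, so list /t3home/${USER}/.snapshot to see the actual names on the cluster:

```shell
# Restore one file from a home snapshot (sketch; snapshot subdirectory
# names such as "daily.0" are assumptions -- check /t3home/${USER}/.snapshot
# for the real ones).
restore_from_snapshot() {
    local snapfile="$1" dest="$2"
    # -p preserves the timestamps of the snapshotted version
    cp -p "$snapfile" "$dest"
}

# Example call on a UI (restores next to the live file, never over it):
#   restore_from_snapshot /t3home/${USER}/.snapshot/daily.0/notes.txt \
#                         /t3home/${USER}/notes.txt.restored
```

Restoring to a new name (e.g. notes.txt.restored) avoids clobbering the current copy before you have compared the two.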

  • For large files there is the /work/${USER} volume, with ~100 GB of effective space and two daily snapshots.
    /shome is a link to /work
    Unlike on /t3home, snapshots count against the /work quota. If the quota fills up completely, only an administrator can help; send requests to the admin mailing list.
To check the used space on /work, use the du command: du -hs /work/${USER}
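When /work fills up, it helps to see what is taking the space before contacting an administrator. A small helper, assuming only standard GNU coreutils on the UI:

```shell
# Print the ten largest entries under a given path, human-readable.
# Generic sketch; on a UI you would point it at /work/${USER}.
largest_items() {
    local dir="$1"
    # du -ah lists every entry with its size; sort -rh puts the biggest first
    du -ah "$dir" 2>/dev/null | sort -rh | head -n 10
}

# Usage on a UI:
#   largest_items /work/${USER}
```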

UIs /scratch and /tmp usage

Sometimes the shared /tmp or /scratch partitions of the UIs and WNs fill up because a user left big, long-forgotten files there, or simply because a job went wrong. Users explicitly requested to manage this space themselves, so there is no automatic cleaning. Please clean up in a timely manner, and remember that /scratch is the least protected area and is not meant to hold important data for long.
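To find your own leftovers before cleaning up, a sketch using only standard find options (the 30-day threshold is an arbitrary example, not a site policy):

```shell
# List (do not delete) your own files older than a given number of days.
# Generic sketch; on a UI you would run it against /scratch.
old_files() {
    local dir="$1" days="${2:-30}"
    find "$dir" -user "${USER:-$(id -un)}" -type f -mtime +"$days" 2>/dev/null
}

# Review the list first, then remove:
#   old_files /scratch 30
#   old_files /scratch 30 | xargs -r rm --
```

Listing before deleting matters on a shared partition: once a file is gone from /scratch there is no snapshot to recover it from.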

If you discover an abuse, write to or call your colleague and ask him/her to clean up. Otherwise your peers' work could be blocked.

Administrators can help when group members are still interested in keeping an outdated user's files in /scratch. In that case, ownership of the files must be explicitly transferred to a new responsible person.

Topic revision: r65 - 2020-02-04 - NinaLoktionova