Filesystem   Inodes      IUsed      IFree      IUse%  Mounted on
/dev/gpfs    150626304   51891413   98734891   35%    /gpfs

disk name    disk size (KB)  failure group  metadata  data  free KB (full blocks)  free KB (fragments)
virident1    293937152       1005           Yes       No    25172992 ( 9%)         40936480 (14%)
virident2    293937152       1006           Yes       No    25139200 ( 9%)         40991648 (14%)
ssd1         390711360       1007           Yes       No    335058944 (86%)        6415328 ( 2%)
ssd2         390711360       1008           Yes       No    335084544 (86%)        6396320 ( 2%)
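For reference, a short sketch of the commands that produce figures like the above, assuming the GPFS device is simply named "gpfs" (the device name is an assumption, not confirmed by the report):

df -i /gpfs     # inode usage of the mounted GPFS filesystem
mmdf gpfs       # per-NSD size, failure group, and free space in full blocks / fragments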
The cp_1.sh (prolog) and cp_3.sh (epilog) files have been deployed so that jobs run under /tmpdir_slurm/$CREAMCENAME/$JOBID instead of under $HOME.

#!/bin/bash
# cp_1.sh: this is _sourced_ by the CREAM job wrapper
export TMPDIR="/gpfs/tmpdir_slurm/${SLURM_SUBMIT_HOST}/${SLURM_JOB_ID}"
export MYJOBDIR=${TMPDIR}
mkdir -p ${TMPDIR}
cd ${TMPDIR}

#!/bin/bash
# cp_3.sh: this is _sourced_ by the CREAM job wrapper
# This gets executed at the end of the job
rmdir ${MYJOBDIR}
/shome (today we use 2 * SUN Thumper): /shome will be served from a bunch of 10 * 2TB disks housed in the future new NetApp, SAS-connected and MPxIO-managed to an Active/Passive pair of 1U Oracle servers; the Active node will format the 10 * 2TB disks as a ZFS pool and export the resulting filesystem via NFSv3; if the Active node crashes, I simply have to import the ZFS pool on the Passive node and switch NFSv3 on there. This configuration is cheaper than a pair of replicated 2U NAS boxes (fewer disks, cheaper nodes).
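A minimal failover sketch for the Active/Passive pair, assuming a Solaris-based node and a pool simply named "shome" (pool name and service FMRI are assumptions, not the actual site configuration):

# Run on the Passive node once the Active node is confirmed dead:
zpool import -f shome                    # take over the shared SAS-attached pool
zfs set sharenfs=on shome                # re-export the filesystem via NFSv3
svcadm enable svc:/network/nfs/server    # make sure the NFS server is running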
/pnfs: per-group space usage can be reported by running a simple query against the Chimera DB; profiting from such a partitioning, one can also produce Ganglia plots and the like. I hope to introduce this change on 8th Nov. In the future CSCS could consider adopting the same partitioning, being aware that it raises both complexity and security concerns.
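A hedged sketch of such a query, assuming a PostgreSQL Chimera schema in which t_inodes carries igid and isize columns (table and column names vary with the dCache release and are assumptions here):

#!/bin/bash
# Report per-group space usage from the Chimera namespace DB.
psql -U dcache -d chimera -c \
  "SELECT igid AS gid, sum(isize) AS bytes_used
     FROM t_inodes
    GROUP BY igid
    ORDER BY bytes_used DESC;"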