
ZFS snapshots backup configuration - OBSOLETE

  • Note by Derek at cluster takeover: This page contains obsolete information. For the last several months t3nfs01 had been down and t3nfs02 had been configured as the ZFS data server (mostly holding the /work area). Since that area had grown beyond the storage limits of t3nfs01, no backup was possible any more, and Nina decided to leave t3nfs02 active and keep t3nfs01 shut down.
  • The /opt/zfssnap/zfssnap script is still triggered by a daily cron job, but it only takes the snapshots and no longer receives a third argument naming a backup server.

The backup script /opt/zfssnap/zfssnap runs on the t3nfs01 server and copies snapshots from data01 to t3nfs02. The script uses the file /opt/zfssnap/zfssnap-day, whose content looks like zfssnap-day-20170803-012839 . This value is the reference point from which the incremental backup starts: it records the last snapshot that was sent to the backup server, and that snapshot must exist on both the original and the backup server (a common latest snapshot). The content of /opt/zfssnap/zfssnap-day is overwritten when the script completes.
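
In outline, an incremental run presumably does something like the following (a sketch reconstructed from the description above, not the actual script; the snapshot naming and send flags are assumptions):

LAST=$(cat /opt/zfssnap/zfssnap-day)     # reference point, e.g. zfssnap-day-20170803-012839
NEW=zfssnap-day-$(date +%Y%m%d-%H%M%S)   # name for today's snapshot

zfs snapshot -r data01@$NEW              # take the new recursive snapshot

# send only the delta between the reference snapshot and the new one
zfs send -Rv -i data01@$LAST data01@$NEW | ssh -o ForwardX11=no zfs@t3nfs02.psi.ch "sudo /usr/sbin/zfs recv -dv data01/t3nfs01_data01"

echo "$NEW" > /opt/zfssnap/zfssnap-day   # on success, advance the reference point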

Without any previous backup, an initial full backup can be created with the following command:

zfs send -eRv data01@zfssnap-day-latest | ssh -o ForwardX11=no -c arcfour,blowfish-cbc -o Compression=no zfs@t3nfs02.psi.ch "sudo /usr/sbin/zfs recv -dvF data01/t3nfs01_data01"

Copying 6 TB takes about 48 hours.
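
Before seeding the reference file, one can confirm that the common snapshot exists on both servers (on t3nfs02 it appears under data01/t3nfs01_data01):

# zfs list -H -t snapshot -r data01 | grep zfssnap-day-latest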

Once the snapshot data01@zfssnap-day-latest exists on both t3nfs01 and t3nfs02, write "data01@zfssnap-day-latest" into /opt/zfssnap/zfssnap-day and enable the daily cron job t3nfs01:/etc/cron.daily/zfssnap:

PERIOD="day"
MAXSNAPS=2  
BACKUPSERVER="t3nfs02.psi.ch"
LOG="/var/log/zfssnap.$PERIOD.log"

echo ""            >> $LOG
echo "- new run -" >> $LOG

/opt/zfssnap/zfssnap $PERIOD $MAXSNAPS $BACKUPSERVER >>$LOG 2>&1

echo "- end run -" >> $LOG
echo ""            >> $LOG

t3nfs02 server configuration steps:

1. Create the ZFS pool data01 as a stripe of four RAIDZ1 vdevs over 12 x 3 TB disks attached to the HP Smart Array P440 controller (running in HBA mode):

# zpool create data01 \
    raidz1 /dev/sda /dev/sdb /dev/sdc \
    raidz1 /dev/sdd /dev/sde /dev/sdf \
    raidz1 /dev/sdg /dev/sdh /dev/sdi \
    raidz1 /dev/sdj /dev/sdk /dev/sdl
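
The resulting layout can be checked with zpool status; it should list four raidz1 vdevs (raidz1-0 through raidz1-3) of three disks each:

# zpool status data01
# zpool list data01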

2. Create a dataset t3nfs01_data01 mounted at /zfs:

# zfs create -o mountpoint=/zfs data01/t3nfs01_data01
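
A quick check that the dataset exists and is mounted where expected:

# zfs list -o name,mountpoint data01/t3nfs01_data01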

3. Add a zfs user (uid/gid 337) with home directory /opt/zfssnap:

# groupadd --gid 337 zfs
# useradd zfs -u 337 -g 337 -s /bin/bash -d /opt/zfssnap
# chown -R :zfs /opt/zfssnap/
# chown -R zfs:zfs /opt/zfssnap/.ssh

4. Add the key from t3nfs01:/root/.ssh/id_rsa.pub to /opt/zfssnap/.ssh/authorized_keys.
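
A sketch of the key installation on t3nfs02 (/tmp/id_rsa.pub stands for the public key copied over from t3nfs01; ownership of the directory is handled in step 3):

# mkdir -p /opt/zfssnap/.ssh
# cat /tmp/id_rsa.pub >> /opt/zfssnap/.ssh/authorized_keys
# chmod 700 /opt/zfssnap/.ssh
# chmod 600 /opt/zfssnap/.ssh/authorized_keys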

5. Restrict logins of the zfs user to connections from t3nfs01 in /etc/security/access.conf :

+ : zfs : t3nfs01.psi.ch

6. Allow ssh connections in /etc/hosts.allow :

sshd: t3admin01.psi.ch t3admin02.psi.ch wmgt01.psi.ch wmgt02.psi.ch localhost t3nfs01.psi.ch

7. Configure sudo. In /etc/sudoers :

#includedir /etc/sudoers.d
Defaults:zfs !requiretty

and add a command alias covering the zfs commands used by the /opt/zfssnap/zfssnap script:

## Cmnd alias specification
Cmnd_Alias C_ZFS = \
    /usr/sbin/zfs list, \
    /usr/sbin/zfs list *, \
    /usr/sbin/zfs list -H -t snapshot *, \
    /usr/sbin/zfs recv -dv data01/t3nfs01_data01, \
    /usr/sbin/zfs recv -dvF data01/t3nfs01_data01
%zfs ALL=NOPASSWD: C_ZFS
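
After editing, validate the sudoers syntax on t3nfs02 and then test the whole chain (key, access.conf, hosts.allow, sudo) from t3nfs01:

On t3nfs02:

# visudo -c

On t3nfs01:

# ssh -o ForwardX11=no zfs@t3nfs02.psi.ch "sudo /usr/sbin/zfs list"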

-- Main.NinaLoktionova - 2019-08-16
