<!-- keep this as a security measure:
#uncomment if the subject should only be modifiable by the listed groups
# * Set ALLOWTOPICCHANGE = Main.TWikiAdminGroup,Main.CMSAdminGroup
# * Set ALLOWTOPICRENAME = Main.TWikiAdminGroup,Main.CMSAdminGroup
#uncomment this if you want the page only be viewable by the listed groups
# * Set ALLOWTOPICVIEW = Main.TWikiAdminGroup,Main.CMSAdminGroup,Main.CMSAdminReaderGroup
-->
---+ ZFS snapshots backup configuration

The backup script */opt/zfssnap/zfssnap* runs on the t3nfs01 server and copies snapshots of =data01= to t3nfs02. The script uses the file */opt/zfssnap/zfssnap-day*, whose content looks like ==zfssnap-day-20170803-012839==. This value is the reference point from which the next incremental backup starts: it names the last snapshot that was sent to the backup server, and that snapshot must exist as the latest common snapshot on both the home server and the backup server. The script overwrites /opt/zfssnap/zfssnap-day each time it completes.

Without any previous backup, create the initial full backup with the following command (the ssh options must come before the destination; ssh stops parsing options at the destination and would otherwise treat them as part of the remote command):

<pre>
zfs send -eRv data01@zfssnap-day-latest | ssh -o ForwardX11=no -c arcfour,blowfish-cbc -o Compression=no zfs@t3nfs02.psi.ch "sudo /usr/sbin/zfs recv -dvF data01/t3nfs01_data01"
</pre>

Copying 6 TB takes about 48 hours. Once !data01@zfssnap-day-latest exists on both t3nfs01 and t3nfs02, write "data01@zfssnap-day-latest" into /opt/zfssnap/zfssnap-day and enable the cron job *t3nfs01:/etc/cron.daily/zfssnap*:

<pre>
PERIOD="day"
MAXSNAPS=2
BACKUPSERVER="t3nfs02.psi.ch"
LOG="/var/log/zfssnap.$PERIOD.log"

echo "" >> $LOG
echo "- new run -" >> $LOG
/opt/zfssnap/zfssnap $PERIOD $MAXSNAPS $BACKUPSERVER >>$LOG 2>&1
echo "- end run -" >> $LOG
echo "" >> $LOG
</pre>
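The daily cycle can be sketched as follows. This is a hypothetical dry-run sketch inferred from the description above, not copied from /opt/zfssnap/zfssnap itself; the =zfs= and =ssh= commands are shown as comments because they need root and the real pool:

```shell
#!/bin/sh
# Dry-run sketch of one incremental backup cycle (assumed behaviour,
# not the actual /opt/zfssnap/zfssnap script).
REF_FILE=/tmp/zfssnap-day            # production path: /opt/zfssnap/zfssnap-day
echo "zfssnap-day-20170803-012839" > "$REF_FILE"

REF=$(cat "$REF_FILE")                       # last snapshot common to both servers
NEW="zfssnap-day-$(date +%Y%m%d-%H%M%S)"     # name for the next snapshot

# The real script would now run (requires root and the data01 pool):
#   zfs snapshot -r data01@$NEW
#   zfs send -Rv -i data01@$REF data01@$NEW | \
#     ssh -o ForwardX11=no zfs@t3nfs02.psi.ch \
#         "sudo /usr/sbin/zfs recv -dv data01/t3nfs01_data01"
echo "incremental send: data01@$REF -> data01@$NEW"

echo "$NEW" > "$REF_FILE"                    # advance the reference point
```

If the reference file and the actual latest common snapshot ever disagree (e.g. after a failed run), the incremental receive fails and the reference point must be reset to a snapshot that really exists on both servers.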
---+++ t3nfs02 server configuration steps

1. Create the zfs pool =data01= as a stripe over four RAIDZ1 vdevs of three disks each (12 × 3 TB disks attached to an HP Smart Array P440 controller running in HBA mode):
<pre>
# zpool create data01 raidz1 /dev/sda /dev/sdb /dev/sdc raidz1 /dev/sdd /dev/sde /dev/sdf raidz1 /dev/sdg /dev/sdh /dev/sdi raidz1 /dev/sdj /dev/sdk /dev/sdl
</pre>

2. Create a dataset =t3nfs01_data01=:
<pre>
# zfs create -o mountpoint=/zfs data01/t3nfs01_data01
</pre>

3. Add the zfs user with home directory /opt/zfssnap:
<pre>
# groupadd --gid 337 zfs
# useradd zfs -u 337 -g 337 -s /bin/bash -d /opt/zfssnap
# chown -R :zfs /opt/zfssnap/
# chown -R zfs:zfs .ssh
</pre>

4. Add the key from *t3nfs01:/root/.ssh/id_rsa.pub* to */opt/zfssnap/.ssh/authorized_keys*.

5. */etc/security/access.conf*:
<pre>
+ : zfs : t3nfs01.psi.ch
</pre>

6. */etc/hosts.allow*:
<pre>
sshd: t3admin01.psi.ch t3admin02.psi.ch wmgt01.psi.ch wmgt02.psi.ch localhost t3nfs01.psi.ch
</pre>

7. Configure sudo. In */etc/sudoers*:
<pre>
#includedir /etc/sudoers.d
Defaults:zfs !requiretty
</pre>
and <!-- in */etc/sudoers.d/zfs* --> add lines corresponding to the commands used by the script /opt/zfssnap/zfssnap (note the comma after each alias entry except the last):
<pre>
## Cmnd alias specification
Cmnd_Alias C_ZFS = \
   /usr/sbin/zfs list, /usr/sbin/zfs list *, /usr/sbin/zfs list -H -t snapshot *, \
   /usr/sbin/zfs recv -dv data01/t3nfs01_data01, \
   /usr/sbin/zfs recv -dvF data01/t3nfs01_data01

%zfs ALL=NOPASSWD: C_ZFS
</pre>

-- Main.NinaLoktionova - 2019-08-16
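Once the steps above are done, the whole trust chain (ssh key, access.conf, hosts.allow and the sudo rules) can be checked from t3nfs01 with a single command. The sketch below only prints the command; the actual invocation is a hypothetical check, not taken from this page, and must be run as root on t3nfs01:

```shell
#!/bin/sh
# Hypothetical end-to-end check: listing snapshots on t3nfs02 should
# succeed without any password or TTY prompt. The "-r data01/..." argument
# is matched by the "/usr/sbin/zfs list -H -t snapshot *" sudo alias entry.
CHECK='ssh -o ForwardX11=no zfs@t3nfs02.psi.ch "sudo /usr/sbin/zfs list -H -t snapshot -r data01/t3nfs01_data01"'
echo "$CHECK"
# Uncomment on t3nfs01 to actually run it:
# eval "$CHECK"
```

If the check prompts for a password, re-verify step 4 (authorized_keys); if it fails with "sudo: sorry, you must have a tty", re-verify the =Defaults:zfs !requiretty= line from step 7.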
Topic revision: r1 - 2019-08-16 - NinaLoktionova