
MySQL replication

Usage

  • slurm1 and slurm2 make use of master-master replication to provide high availability.

Requirements

Semisynchronous replication requires MySQL 5.5 or newer. SL6 ships with MySQL 5.1.x, so you will have to download the newer RPMs from the MySQL website.

These RPMs have been copied to the Phoenix repo and are available through yum. Note you will have to install the MySQL-shared-compat package first, otherwise you will receive an error about conflicts.
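
You can check what is currently installed before upgrading; a quick sanity check (exact package names may vary):

<verbatim>
mysql --version
rpm -qa | grep -i mysql
</verbatim>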

Setup

  • slurm1 and slurm2 are both masters
    • Both can be written to and changes will be replicated to the other node.

  • slurm1 and slurm2 are both semisynchronous masters and slaves
    • When a transaction is performed on either node, the change is replicated to the other machine.

Config

Install

Download version 5.5 or newer RPMs from http://dev.mysql.com/downloads/mysql/ (version 5.6 is available in the Phoenix repo).

Install compatibility libs

<verbatim>
yum install MySQL-shared-compat
</verbatim>

Install the client and server

<verbatim>
yum install MySQL-client MySQL-server
</verbatim>

The first thing we have to do is change the password. The installation of MySQL-server creates the following file containing a random password for logging into MySQL:

<verbatim>
/root/.mysql_secret
</verbatim>

Log into MySQL and change the password

<verbatim>
mysql -p
SET PASSWORD FOR 'root'@'localhost' = PASSWORD('newpwd');
</verbatim>

Open another terminal and ensure you are able to log in with the new password.
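
For example, a quick non-interactive check (this one-liner is just an illustration):

<verbatim>
# Should print a '1' column if the new password works
mysql -u root -p -e "SELECT 1;"
</verbatim>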

Now let's edit the config to define our replication. The config below is from slurm2; on slurm1 the only values that change are the server-id and the bind-address, which would be 2 and 148.187.64.42 respectively.

<verbatim>
vim /etc/my.cnf

   [mysqld]
   datadir=/var/lib/mysql
   socket=/var/lib/mysql/mysql.sock
   user=mysql
   # Disabling symbolic-links is recommended to prevent assorted security risks
   symbolic-links=0

   # Master-Slave config
   bind-address=148.187.64.43
   server-id=1                     # master
   log_bin=/var/log/mysql/mysql-bin.log
   binlog_do_db=slurm_acct_db      # the slurmdbd database
   binlog_ignore_db = mysql
   binlog_ignore_db = test

   log-bin=mysql-bin

   [mysqld_safe]
   log-error=/var/log/mysqld.log 
   pid-file=/var/run/mysqld/mysqld.pid

</verbatim>

Restart MySQL for the changes to take effect.

<verbatim>
service mysql restart
</verbatim>
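
To confirm the new settings were picked up, query the relevant variables (a quick sanity check):

<verbatim>
mysql -p -e "SHOW VARIABLES LIKE 'server_id'; SHOW VARIABLES LIKE 'log_bin';"
</verbatim>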

Master-slave replication

Now we need to log in and create the database if it does not exist. Note I upgraded from MySQL 5.1 to 5.6 after the slurm RPMs had been installed.

WARNING: There is a bug in 5.6 that breaks replication when a grant command is issued with incorrect syntax.

<verbatim>
mysql -p
CREATE DATABASE slurm_acct_db;

# Note: replace 'password' in the below command with the actual password
GRANT REPLICATION SLAVE ON *.* TO 'root'@'%' IDENTIFIED BY 'password';
FLUSH PRIVILEGES;
show master status;

+------------------+----------+---------------+------------------+-------------------+
| File             | Position | Binlog_Do_DB  | Binlog_Ignore_DB | Executed_Gtid_Set |
+------------------+----------+---------------+------------------+-------------------+
| mysql-bin.000018 |      120 | slurm_acct_db | mysql,test       |                   |
+------------------+----------+---------------+------------------+-------------------+
1 row in set (0.00 sec)
</verbatim>

Now we need to install the packages on the other server, edit the my.cnf, and set the password as previously described.

Let's now set up this server as a slave. First create the database if it does not exist.

<verbatim>
mysql -p
CREATE DATABASE slurm_acct_db;
</verbatim>

Define the other machine as the master; again, change 'password' to the actual password. Note the MASTER_LOG_FILE and MASTER_LOG_POS values are taken from the 'show master status;' output on the other node.

<verbatim>
CHANGE MASTER TO MASTER_HOST='148.187.64.43', MASTER_USER='root', MASTER_PASSWORD='password', MASTER_LOG_FILE='mysql-bin.000018', MASTER_LOG_POS=120;
</verbatim>

Now that we have defined our master, start the slave and ensure that the values are correct. Master_Host, Master_Log_File and Read_Master_Log_Pos are the entries of main interest.

<verbatim>
START SLAVE;
show slave status \G
</verbatim>
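
In a healthy setup the output should contain entries like the following (values here are illustrative, matching the master status shown above):

<verbatim>
Master_Host: 148.187.64.43
Master_Log_File: mysql-bin.000018
Read_Master_Log_Pos: 120
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
</verbatim>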

Before enabling the other node as a master and defining slave replication in the other direction, let's test the current replication.

<verbatim>
# On slurm2
use slurm_acct_db;
create table testmaster21 (mid int(11) auto_increment, PRIMARY KEY (mid)) Engine=MyISAM;

# On slurm1 check the table exists
use slurm_acct_db;
show tables;
</verbatim>
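
If replication is working the new table should be listed, for example (other slurmdbd tables omitted):

<verbatim>
+-------------------------+
| Tables_in_slurm_acct_db |
+-------------------------+
| testmaster21            |
+-------------------------+
</verbatim>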

Master-master replication

If all is good, define master-slave replication in the other direction.

On slurm1 display the master status.

<verbatim>
show master status;

+------------------+----------+---------------+------------------+-------------------+
| File             | Position | Binlog_Do_DB  | Binlog_Ignore_DB | Executed_Gtid_Set |
+------------------+----------+---------------+------------------+-------------------+
| mysql-bin.000010 |      120 | slurm_acct_db | mysql,test       |                   |
+------------------+----------+---------------+------------------+-------------------+
1 row in set (0.00 sec)
</verbatim>

Still on slurm1, grant the replication permission so the other node can connect as a slave.

<verbatim>
GRANT REPLICATION SLAVE ON *.* TO 'root'@'%' IDENTIFIED BY 'password';
FLUSH PRIVILEGES;
show master status;
</verbatim>

On slurm2 define slurm1 as the master, again using the values from the other node for MASTER_LOG_FILE and MASTER_LOG_POS.

<verbatim>
CHANGE MASTER TO MASTER_HOST='148.187.64.42', MASTER_USER='root', MASTER_PASSWORD='password', MASTER_LOG_FILE='mysql-bin.000010', MASTER_LOG_POS=120;
</verbatim>

Again start the slave and check the status.

<verbatim>
START SLAVE;
show slave status \G
</verbatim>

Ensure replication is working.

<verbatim>
# On slurm1
use slurm_acct_db;
DROP TABLE testmaster21;

# On slurm2 check the table no longer exists
use slurm_acct_db;
show tables;
</verbatim>

Enabling semisynchronous replication

WARNING: MySQL 5.6 does not work with slurmdbd at the time of writing (19/08/2013).

Semisynchronous replication is provided by plugins. As both slurm1 and slurm2 are masters and slaves to one another, install both the master and slave plugins on each system.

<verbatim>
INSTALL PLUGIN rpl_semi_sync_master SONAME 'semisync_master.so';
INSTALL PLUGIN rpl_semi_sync_slave SONAME 'semisync_slave.so';
</verbatim>

These plugins are not enabled automatically by default, so enable them on both systems.

<verbatim>
SET GLOBAL rpl_semi_sync_master_enabled = 1;
SET GLOBAL rpl_semi_sync_slave_enabled = 1;
</verbatim>

The replication between master and slave will need to be restarted for the nodes to be recognised as semisynchronous slaves.

<verbatim>
STOP SLAVE IO_THREAD; START SLAVE IO_THREAD;
</verbatim>
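
To verify that semisynchronous replication is now active, check the status variables exposed by the plugins; Rpl_semi_sync_master_status and Rpl_semi_sync_slave_status should both be ON:

<verbatim>
SHOW STATUS LIKE 'Rpl_semi_sync%';
</verbatim>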

To ensure the semisynchronous plugins remain enabled across restarts of the mysql service, add the following lines under the [mysqld] section of /etc/my.cnf on each server.

<verbatim>
vim /etc/my.cnf

   rpl_semi_sync_master_enabled=1
   rpl_semi_sync_master_timeout=1000 # 1 second
   rpl_semi_sync_slave_enabled=1
</verbatim>
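
After a restart you can confirm the plugins are still loaded; a query along these lines should show both with an ACTIVE status:

<verbatim>
SELECT PLUGIN_NAME, PLUGIN_STATUS
FROM INFORMATION_SCHEMA.PLUGINS
WHERE PLUGIN_NAME LIKE 'rpl_semi%';
</verbatim>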

Testing

TBA

Monitoring

There is a check_mysql plugin available that makes our lives easier; here's how to implement it.

  • First install the check

<verbatim>
yum install nagios-plugins-mysql.x86_64 --enablerepo=epel
</verbatim>

  • We now need to create an unprivileged user to log into the database and perform these checks with.

<verbatim>
# In a mysql shell, logged in as root
# Change 'password' to an actual password
GRANT SELECT, REPLICATION CLIENT ON *.* TO 'nagios'@'localhost' IDENTIFIED BY 'password';
</verbatim>
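
Verify the new user can log in and read the replication status from the shell (the REPLICATION CLIENT privilege allows SHOW SLAVE STATUS):

<verbatim>
mysql -u nagios -p -e "SHOW SLAVE STATUS \G" | grep Running
</verbatim>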

  • Then update the nrpe and services files in cfengine and we're good to go. Below is an example; the -S flag checks the slave status.

<verbatim>
# Change password to the real password
command[check_mysql]=/usr/libexec/nagios-plugins/check_mysql -u nagios -p password -S -d slurm_acct_db
</verbatim>
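
It is worth running the plugin by hand before wiring it into nrpe; a non-zero exit code indicates a problem:

<verbatim>
/usr/libexec/nagios-plugins/check_mysql -u nagios -p password -S -d slurm_acct_db
echo $?
</verbatim>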

HA testing

Done: Tested killing MySQL processes with kill -s 9 on the primary DB host; sacctmgr commands continued to return correct info on both slurm1 and slurm2.

Done: Submitted a job, waited until it was running, then stopped MySQL gracefully on the primary DB host; the job finished OK and squeue returned successfully throughout.

Failed: Brought down the public interface on slurm2 after a job started to run; this caused sacctmgr commands to fail. It appears the slurmdbd daemon must be reachable for failover to occur.

  • Killed slurmctld (the init script would not stop the service), started it again and sacctmgr and sinfo returned; however, the job did not complete.

Failed: Brought down the public interface on slurm1 to test failover of slurmctld. Note it appears StateSaveLocation in slurm.conf needs to refer to a shared filesystem location for this to work.

  • squeue and sinfo ran but took longer to return

  • when a job was submitted it did not complete

  • Done: Added a shared location for StateSaveLocation using the NAS mounted on /slurm. Jobs completed and I could submit jobs with the primary controller down, but sinfo and sacctmgr were still slow to return.

Issues

  • Upon initial install MySQL failed to start because /var/log/mysql was not owned by the mysql user.

  • If the slave status shows that it is having trouble connecting to the master, ensure the firewall is not blocking communication on port 3306 and that the grant permissions have been given. If the port is blocked, open it (see the sketch below) and restart the slave (STOP SLAVE; START SLAVE;).

  • The syntax slurmdbd uses when initially creating the tables in MySQL is not valid in MySQL 5.6 (it works in 5.1). As such we cannot take advantage of semisynchronous replication at this time. TBA: bug report with SchedMD.
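
For reference, a minimal iptables rule to open port 3306 to the peer node (a sketch only; adjust the source address and persistence mechanism to your firewall setup):

<verbatim>
# Allow MySQL replication traffic from the other node (slurm1 in this example)
iptables -I INPUT -p tcp -s 148.187.64.42 --dport 3306 -j ACCEPT
service iptables save
</verbatim>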

Further reading

-- GeorgeBrown - 2013-08-12