Building a high-availability distributed MySQL cluster on Ubuntu 12.04 LTS



MySQL Cluster is designed for high availability and high redundancy in distributed computing environments. It uses the NDB Cluster storage engine and allows several MySQL servers to run in a single cluster. Thanks to its shared-nothing architecture, the system can run on inexpensive hardware, with no special hardware or software requirements. In addition, because each component has its own memory and disk, there is no single point of failure.

MySQL Cluster is developed by Oracle; the official website is www.oracle.com.

In this article, I will describe how to build a high-availability MySQL cluster on Ubuntu 12.04 LTS.

1 MySQL Cluster virtual hosts

To create a fully functional cluster, you need at least three hosts: one as the management node and the other two as data nodes. If a data node goes offline, the cluster can still work, but if the management node fails, the cluster stops working properly. So, where possible, set up two management nodes. In this example, I will use two management nodes.

This example uses five hosts; the system setup is as follows:

Virtualization platform: VMware ESXi 4.1
Virtual machine operating system: Ubuntu 12.04.3 LTS (Precise Pangolin) 64-bit

All the virtual machines are on the same 192.168.67.0/24 network. Adjust the specific IP addresses to match your own network, and make sure that all hosts can reach each other before you begin.
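To keep later steps readable, you may want to give every machine an entry in /etc/hosts on each host. The hostnames below are only illustrative (this article otherwise refers to hosts by IP address); adjust names and addresses to your own network.

```
# Example /etc/hosts entries on every cluster member (hostnames are illustrative)
192.168.67.10  mysql-mgm1
192.168.67.11  mysql-mgm2
192.168.67.12  mysql-data1
192.168.67.13  mysql-data2
192.168.67.14  mysql-proxy
```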

2 Installing the management nodes

First of all, we need to install the MySQL Cluster software on the management nodes. We will start with MYSQL-MGM1; once it is configured, repeat the same steps on the second management node. If you are setting up only one management node, simply proceed to the next section afterwards.

Before you start, visit http://www.mysql.com/downloads/cluster/#downloads to check the current version of the installation files. Here I use MySQL Cluster version 7.3.3.

First, download the MySQL Cluster installation package to the management host. Go to the /usr/src folder and create a mysql-mgm directory.

mkdir /usr/src/mysql-mgm
cd /usr/src/mysql-mgm

Once that is done, download the installation archive from the official MySQL website and extract it

wget http://cdn.mysql.com/Downloads/MySQL-Cluster-7.3/mysql-cluster-gpl-7.3.3-linux-glibc2.5-x86_64.tar.gz
tar xvfz mysql-cluster-gpl-7.3.3-linux-glibc2.5-x86_64.tar.gz

Enter the extracted folder and copy the management binaries into place

cd mysql-cluster-gpl-7.3.3-linux-glibc2.5-x86_64
cp bin/ndb_mgm /usr/bin
cp bin/ndb_mgmd /usr/bin

Change the permissions on the binaries; you can then optionally delete the source files

chmod 755 /usr/bin/ndb_mg*
cd /usr/src
rm -rf /usr/src/mysql-mgm

Next, we create the management node configuration file, named config.ini, in the /var/lib/mysql-cluster/ folder. If the folder does not exist, create it

mkdir /var/lib/mysql-cluster

Once the folder exists, create the config.ini file with your favourite text editor; its content should look similar to this

[NDBD DEFAULT]
 
NoOfReplicas=2
DataMemory=80M
IndexMemory=18M
[MYSQLD DEFAULT]
 
[NDB_MGMD DEFAULT]
DataDir=/var/lib/mysql-cluster
[TCP DEFAULT]
 
# Section for the cluster management node
[NDB_MGMD]
NodeId=1
# IP address of the first management node (this system)
HostName=192.168.67.10
 
[NDB_MGMD]
NodeId=2
#IP address of the second management node
HostName=192.168.67.11
 
# Section for the storage nodes
[NDBD]
# IP address of the first storage node
HostName=192.168.67.12
DataDir= /var/lib/mysql-cluster
[NDBD]
# IP address of the second storage node
HostName=192.168.67.13
DataDir=/var/lib/mysql-cluster
# one [MYSQLD] per storage node
[MYSQLD]
[MYSQLD]

All hosts are defined in this file, even though we have only installed the first one so far. Note that the management nodes need an explicit NodeId, while the NDBD nodes do not.
Once this is done, you can start the management node with the following command
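As a quick sanity check of the rule above, a few lines of shell can compare the number of [NDBD] storage node sections with the number of [MYSQLD] API slots in config.ini, which should match. This is only an illustrative helper, not part of the official MySQL Cluster tooling:

```shell
#!/bin/sh
# check_cluster_config FILE -- compare the number of [NDBD] storage node
# sections with the number of [MYSQLD] API slots in a config.ini.
# Illustrative helper only, not part of the official tooling.
check_cluster_config() {
    ndbd=$(grep -c '^\[NDBD\]$' "$1")
    mysqld=$(grep -c '^\[MYSQLD\]$' "$1")
    if [ "$ndbd" -eq "$mysqld" ]; then
        echo "OK: $ndbd storage nodes, $mysqld API slots"
    else
        echo "WARNING: $ndbd [NDBD] sections but $mysqld [MYSQLD] slots"
    fi
}

# Example: check_cluster_config /var/lib/mysql-cluster/config.ini
```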

ndb_mgmd -f /var/lib/mysql-cluster/config.ini --configdir=/var/lib/mysql-cluster/

After that, you can add an entry to init.d with the following commands so that the daemon starts automatically

echo "ndb_mgmd -f /var/lib/mysql-cluster/config.ini --configdir=/var/lib/mysql-cluster/" > /etc/init.d/ndb_mgmd
chmod 755 /etc/init.d/ndb_mgmd

If all goes well, set up the second management node with the same steps and the same configuration file. Do not change the NodeIds in the configuration file.

You can verify the management nodes with the ndb_mgm utility (just type ndb_mgm in a terminal to start it) by typing show at its prompt. At this point the NDBD and MySQL nodes will show as disconnected; they will only report the correct state once all nodes have been configured.

3 Installing the data nodes
The process of setting up a data node is similar to that of a management node. Start by creating a mysql group and adding a mysql user to it

groupadd mysql
useradd -g mysql mysql

Go to /usr/local, download the same archive used for the management nodes, and extract it

cd /usr/local/
wget http://cdn.mysql.com/Downloads/MySQL-Cluster-7.3/mysql-cluster-gpl-7.3.3-linux-glibc2.5-x86_64.tar.gz
tar xvfz mysql-cluster-gpl-7.3.3-linux-glibc2.5-x86_64.tar.gz

Create a soft link named mysql pointing to the extracted folder (the cluster will use this link, so do not delete it!). After creating it, you can install the database

ln -s mysql-cluster-gpl-7.3.3-linux-glibc2.5-x86_64 mysql
cd mysql
apt-get install libaio1 libaio-dev
scripts/mysql_install_db --user=mysql --datadir=/usr/local/mysql/data

Adjust the file ownership

chown -R root:mysql .
chown -R mysql data

As with the management node, we want the database engine to start automatically, so we create an init.d entry

cp support-files/mysql.server /etc/init.d/
chmod 755 /etc/init.d/mysql.server

Finally, move the contents of the bin folder to /usr/bin and create a symbolic link so that existing references remain correct

cd /usr/local/mysql/bin
mv * /usr/bin
cd ../
rm -fr /usr/local/mysql/bin
ln -s /usr/bin /usr/local/mysql/bin

The MySQL configuration file does not exist yet, so we need to create it ourselves. The file lives in /etc/ and is named my.cnf. Open it with your favourite text editor and add the following lines

[mysqld]
ndbcluster
# IP address of the cluster management node
ndb-connectstring=192.168.67.10,192.168.67.11
[mysql_cluster]
# IP address of the cluster management node
ndb-connectstring=192.168.67.10,192.168.67.11

Note that the two management node addresses are separated by a comma. If you have only one management node, simply delete the second address. Once my.cnf has been saved, we need to create the MySQL data folder

mkdir /var/lib/mysql-cluster
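Since the same my.cnf is needed on every data node, it can be convenient to generate it from a single variable rather than editing it by hand on each host. The helper below is only a sketch using the addresses from this article; substitute your own management node list:

```shell
#!/bin/sh
# generate_my_cnf FILE "mgm1,mgm2" -- write a data-node my.cnf pointing
# at the given management nodes. Illustrative sketch; the addresses in
# the example call are the ones used in this article.
generate_my_cnf() {
    cat > "$1" <<EOF
[mysqld]
ndbcluster
# IP addresses of the cluster management nodes
ndb-connectstring=$2

[mysql_cluster]
# IP addresses of the cluster management nodes
ndb-connectstring=$2
EOF
}

# Example: generate_my_cnf /etc/my.cnf "192.168.67.10,192.168.67.11"
```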

After that, we initialize the cluster storage and start the service. The --initial option is only needed the first time you start a node, or after the management node's /var/lib/mysql-cluster/config.ini has been changed

cd /var/lib/mysql-cluster
ndbd --initial
/etc/init.d/mysql.server start

Next, secure the MySQL installation by running the script

/usr/local/mysql/bin/mysql_secure_installation

Finally, make ndbd start automatically

echo "ndbd" > /etc/init.d/ndbd
chmod 755 /etc/init.d/ndbd

That completes the configuration of the first data node; follow the same steps to configure the second data node.

4 Verification and testing

If everything is working, run ndb_mgm on the management node and type show at the prompt. You should see output listing all the connected nodes

root@MYSQL-MGM1:~# ndb_mgm
ndb_mgm> show
Cluster Configuration
---------------------
[ndbd(NDB)] 2 node(s)
id=3    @192.168.67.12  (mysql-5.6.14 ndb-7.3.3, Nodegroup: 0, *)
id=4    @192.168.67.13  (mysql-5.6.14 ndb-7.3.3, Nodegroup: 0)
 
[ndb_mgmd(MGM)] 2 node(s)
id=1    @192.168.67.10  (mysql-5.6.14 ndb-7.3.3)
id=2    @192.168.67.11  (mysql-5.6.14 ndb-7.3.3)
 
[mysqld(API)]   2 node(s)
id=5    @192.168.67.12  (mysql-5.6.14 ndb-7.3.3)
id=6    @192.168.67.13  (mysql-5.6.14 ndb-7.3.3)

If you see similar output, try some basic SQL commands. Log in to the SQL server, create a new database and a table, and verify that the data is synchronized. Note that when you create tables you must use the NDBCLUSTER storage engine; if you use InnoDB, the data will not be replicated between the cluster nodes. The NDBCLUSTER engine has some limitations of its own; see the official MySQL documentation

http://dev.mysql.com/doc/refman/5.5/en/mysql-cluster-limitations-unsupported.html
http://dev.mysql.com/doc/refman/5.5/en/mysql-cluster-limitations-syntax.html

mysql -u root -p
mysql> CREATE DATABASE mysqlclustertest;
mysql> USE mysqlclustertest;
mysql> CREATE TABLE testtable (i INT) ENGINE=NDBCLUSTER;
mysql> INSERT INTO testtable VALUES (1);
mysql> SELECT * FROM testtable;
+------+
| i    |
+------+
|    1 |
+------+
1 row in set (0.01 sec)

Now connect to the second data node and check that you get the same output

mysql -u root -p
mysql> USE mysqlclustertest;
mysql> SELECT * FROM testtable;
+------+
| i    |
+------+
|    1 |
+------+
1 row in set (0.01 sec)

You should see the same output. Now, if you insert a new row in the table on this node, it will be replicated back to the first node.
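Because only NDBCLUSTER tables are replicated, it is worth double-checking which tables in a database use some other engine. One way is to query information_schema; the helper below just prints the query so it can be piped into the mysql client (an illustrative sketch; the database name is an argument):

```shell
#!/bin/sh
# non_ndb_tables_sql DBNAME -- print a query listing tables in DBNAME
# that do not use the NDBCLUSTER engine (such tables are not replicated
# across the cluster). Illustrative helper; pipe the output into the
# mysql client, e.g.:  non_ndb_tables_sql mysqlclustertest | mysql -u root -p
non_ndb_tables_sql() {
    printf "SELECT table_name, engine FROM information_schema.tables WHERE table_schema = '%s' AND engine <> 'ndbcluster';\n" "$1"
}
```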

5 Load balancing

The last part of this article is to install a load balancer in front of the MySQL cluster. Here we use mysql-proxy, which is easy to install; other services will also work

root@mysql-proxy:~# apt-get install mysql-proxy
root@mysql-proxy:~# mkdir /etc/mysql-proxy
root@mysql-proxy:~# cd /etc/mysql-proxy
root@mysql-proxy:/etc/mysql-proxy# nano mysql-proxy.conf

Add the following lines to the mysql-proxy.conf file

[mysql-proxy]
daemon = true
proxy-address = 192.168.67.14:3306
proxy-skip-profiling = true
keepalive = true
event-threads = 50
pid-file = /var/run/mysql-proxy.pid
log-file = /var/log/mysql-proxy.log
log-level = debug
proxy-backend-addresses = 192.168.67.12:3306,192.168.67.13:3306
proxy-lua-script=/usr/lib/mysql-proxy/lua/proxy/balance.lua

To start the proxy automatically with additional options, create the file /etc/default/mysql-proxy with the following content

ENABLED="true"
OPTIONS="--defaults-file=/etc/mysql-proxy/mysql-proxy.conf --plugins=proxy"
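A typo in the --defaults-file path makes mysql-proxy start without any configuration, so a small check can save debugging time. The helper below extracts the path from an /etc/default/mysql-proxy style file and verifies that it exists (an illustrative sketch, not part of mysql-proxy itself):

```shell
#!/bin/sh
# check_proxy_defaults FILE -- extract the --defaults-file path from an
# /etc/default/mysql-proxy style file and verify that the file exists.
# Illustrative sketch, not part of mysql-proxy itself.
check_proxy_defaults() {
    conf=$(sed -n 's/.*--defaults-file=\([^ "]*\).*/\1/p' "$1")
    if [ -f "$conf" ]; then
        echo "OK: $conf exists"
    else
        echo "MISSING: $conf"
        return 1
    fi
}

# Example: check_proxy_defaults /etc/default/mysql-proxy
```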

Then you can start, stop, or query mysql-proxy with the following command

/etc/init.d/mysql-proxy start/stop/status

Once this is done, you should be able to connect to the MySQL servers through the proxy address. Remember that for this to work, you need to create a new user that is allowed to connect from the proxy's subnet. You may also need to set the bind address for the MySQL servers in my.cnf.

MySQL user accounts are not replicated across the cluster, so the same user has to be added separately on every data node. Log in to the SQL shell on each data node and execute the following commands

CREATE USER 'newuser'@'192.168.67.%' IDENTIFIED BY 'password';
FLUSH PRIVILEGES;
SELECT * FROM mysql.user;

Change newuser, the IP pattern, and the password to suit your requirements. The % acts as a wildcard for the host part of the address, allowing remote connections from that network to the database node. Remember to add the same user with the same settings on all the other database nodes in the cluster.
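Because the statements have to be repeated on every data node, a tiny helper that prints them can avoid typos. This is just a convenience sketch; the user name, host pattern, and password are placeholders for your own values:

```shell
#!/bin/sh
# make_user_sql USER HOSTPATTERN PASSWORD -- print the statements to
# paste into the mysql shell on each data node. Convenience sketch;
# all three arguments are placeholders for your own values.
make_user_sql() {
    printf "CREATE USER '%s'@'%s' IDENTIFIED BY '%s';\n" "$1" "$2" "$3"
    printf "FLUSH PRIVILEGES;\n"
}

# Example: make_user_sql newuser '192.168.67.%' password
```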


This article was written with reference to "MySQL NDB Cluster setup on Ubuntu 12.04 LTS", with some changes.

If anything is wrong, please correct me. An English version of this article is available on my website, www.mrxuri.com; you are welcome to visit and link to it.


Posted by Bard at December 12, 2013 - 5:00 AM