1 Nov, 2010

Setting up a MySQL cluster for use with WSO2 products

  • Damitha Nanda Mangala Kumarage
  • Senior Tech Lead - WSO2


Introduction

This document explains how to configure a MySQL cluster and deploy it for a particular scenario. We use one management node, two data nodes, and two SQL nodes in this deployment. One SQL node has WSO2 ESB running on the same instance, and the other SQL node has WSO2 BAM running on the same instance. I use five VMware instances to deploy my test cluster setup.

Environment

I created five VMware instances on my desktop machine in order to set up the cluster test environment.

The setup includes one management node, two data nodes, and two SQL nodes. One SQL node runs WSO2 ESB and the other runs WSO2 BAM. The node-to-address mapping used throughout this document is shown below.
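For reference, the IP addresses used in the configuration examples later in this document map to the five instances as follows (the labels on the right are only descriptions, not configured hostnames):

192.168.0.10    management node (ndb_mgmd)
192.168.0.20    SQL node (mysqld)
192.168.0.30    data node (ndbd)
192.168.0.40    data node (ndbd)
192.168.0.50    SQL node (mysqld)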

First, let's briefly clarify the following terminology:

  • SQL node: A MySQL server node on which the mysqld daemon runs. Note that this binary is not interchangeable with the mysqld daemon of a non-clustered MySQL installation. It is also called an API node because it provides the API through which client applications interact with the MySQL server. There can be more than one SQL node; however, load balancing between these SQL nodes is the responsibility of the application program. Also note that while the mysqld daemon is running in a clustered environment, you cannot execute SQL statements such as CREATE TABLE or ALTER TABLE before the cluster is started.
  • Data node: A node where the data resides. The ndbd daemon runs on this node. There can be more than two data nodes.
  • Management node: This node handles management tasks such as starting, restarting, and shutting down the cluster. It can also bring individual data nodes up or down. Without a management node, the data nodes are unusable. The daemon running on management nodes is ndb_mgmd.
  • Cluster node: Any of the above node types can generally be referred to as a cluster node.

Installation

We use mysql-cluster-gpl-7.1.3-linux-x86_64-glibc23.tar.gz, downloadable from the MySQL site. Although each node of the cluster only needs the specific software that runs its corresponding daemon, which is also available from the MySQL site as separate packages, the tar.gz pack mentioned here contains them all. So we install this pack on all of our cluster nodes, even though each node may use only part of the software within it. However, from my experience installing the cluster on Red Hat Linux, I recommend installing from RPMs instead of the tar.gz. Note that the RPMs come as separate bundles for the management node, SQL nodes, and so on; a sketch of an RPM-based installation is shown below.
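For example, on Red Hat Linux an RPM-based installation might look like the following. This is only a sketch; the exact package file names depend on the cluster version and distribution you download, so treat the names below as placeholders.

# on the management node (placeholder package names)
rpm -Uhv MySQL-Cluster-gpl-management-7.1.3-1.rhel5.x86_64.rpm
rpm -Uhv MySQL-Cluster-gpl-tools-7.1.3-1.rhel5.x86_64.rpm

# on the data nodes
rpm -Uhv MySQL-Cluster-gpl-storage-7.1.3-1.rhel5.x86_64.rpm

# on the SQL nodes
rpm -Uhv MySQL-Cluster-gpl-server-7.1.3-1.rhel5.x86_64.rpm
rpm -Uhv MySQL-Cluster-gpl-client-7.1.3-1.rhel5.x86_64.rpm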

    Installing SQL and data nodes

     

    • Create a mysql user and group.
      groupadd mysql
      useradd -g mysql mysql
    • unpack the tar.gz bundle.
      tar -zxf mysql-cluster-gpl-7.1.3-linux-x86_64-glibc23.tar.gz
    • Make a symbolic link at /usr/local/mysql.
      ln -s <your cluster unpack folder>/mysql-cluster-gpl-7.1.3-linux-x86_64-glibc23 /usr/local/mysql
    • Add /usr/local/mysql/bin to your PATH (see the snippet after this list).
    • Run the script to create the initial databases.
      cd /usr/local/mysql
      scripts/mysql_install_db --user=mysql

      Note: Sometimes you may need to provide the basedir (your MySQL installation directory) as well.

      scripts/mysql_install_db --basedir=/usr/local/mysql --user=mysql
    • Set the necessary permissions. /usr/local/mysql/data is the data directory of the data node.
      chown -R root .
      chown -R mysql data
      chgrp -R mysql .
    • Copy the MySQL startup script, make it executable, and set it to start when the OS boots up.
      cp support-files/mysql.server /etc/init.d/
      chmod +x /etc/init.d/mysql.server
      update-rc.d mysql.server defaults

      Note: On Red Hat Linux you need to execute the following instead.

      cp support-files/mysql.server /etc/rc.d/init.d/
      chmod +x /etc/rc.d/init.d/mysql.server
      chkconfig --add mysql.server
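    Regarding the earlier step of adding /usr/local/mysql/bin to your PATH, one way to do it permanently (assuming a bash shell) is:

      echo 'export PATH=$PATH:/usr/local/mysql/bin' >> ~/.bashrc
      source ~/.bashrc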

     

     

    Installing the management node

    On the management node we run the ndb_mgmd daemon. We also install the cluster management client (ndb_mgm) on this node. ndb_mgm is a command-line client tool for executing commands against the management node.

     

    • unpack the tar.gz bundle.
      tar -zxf mysql-cluster-gpl-7.1.3-linux-x86_64-glibc23.tar.gz
    • Make a symbolic link at /usr/local/mysql.
      ln -s <your cluster unpack folder>/mysql-cluster-gpl-7.1.3-linux-x86_64-glibc23 /usr/local/mysql
    • Copy the ndb_mgmd and ndb_mgm binaries to /usr/local/bin.
      cp /usr/local/mysql/bin/ndb_mgm* /usr/local/bin
    • Make the above files executable.
      cd /usr/local/bin
      chmod +x ndb_mgm*
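    As a quick sanity check, you can confirm that the binaries are on the path and report their versions (the -V switch is the usual MySQL version flag):

      ndb_mgm -V
      ndb_mgmd -V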

Now we have installed the cluster software on all machine instances. Let's move on to the configuration part.

Configuration

We need a configuration file on each of our five node instances.

Configuring SQL and data nodes

 

vi /etc/my.cnf
 
[mysqld]
# Options for mysqld process:
ndbcluster                        # run NDB storage engine
ndb-connectstring=192.168.0.10    # location of management server

[mysql_cluster]
# Options for ndbd process:
ndb-connectstring=192.168.0.10    # location of management server

Configuring Management nodes

We will store the configuration file for the management node in /var/lib/mysql-cluster.

 

mkdir /var/lib/mysql-cluster

Create a configuration file called config.ini there with the following content.

 

 

[ndbd default]
# Options affecting ndbd processes on all data nodes:

NoOfReplicas=2     # Number of replicas
DataMemory=80M     # How much memory to allocate for data storage
IndexMemory=18M    # How much memory to allocate for index storage

# For DataMemory and IndexMemory, we have used the default values
# because we do not expand these test databases. Check the memory
# needed for the registry and user_mgmt databases in real
# environments.
[tcp default]
# TCP/IP options:
portnumber=2202    # This is the default; however, you can use any
                   # port that is free for all the hosts in the cluster.
                   # Note: It is recommended that you do not specify the
                   # port number at all and simply allow the default
                   # value to be used instead.
[ndb_mgmd]
# Management process options:
hostname=192.168.0.10        # Hostname or IP address of MGM node
datadir=/var/lib/mysql-cluster # Directory for MGM node log files
[ndbd]
# Options for data node "A": # (one [ndbd] section per data node)
hostname=192.168.0.30        # Hostname or IP address
datadir=/usr/local/mysql/data    # Directory for this data node's data files
[ndbd]
# Options for data node "B":
hostname=192.168.0.40        # Hostname or IP address
datadir=/usr/local/mysql/data    # Directory for this data node's data files
[mysqld]
# SQL node options:
hostname=192.168.0.20        # Hostname or IP address
                    # (additional mysqld connections can be
                    # specified for this node for various
                    # purposes such as running ndb_restore)
[mysqld]
# SQL node options:
hostname=192.168.0.50
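If you expect to run NDB utilities such as ndb_restore against the cluster, it can be convenient to reserve an extra, unbound API slot in config.ini. This is only a suggestion and not required for the setup described here:

[mysqld]
# Free API slot with no hostname, usable from any host,
# e.g. for running ndb_restore or other NDB utilities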

 

Initial test of our cluster

We need to start our cluster in the following order:

  • first start the management node.
    ndb_mgmd -f /var/lib/mysql-cluster/config.ini --configdir=/var/lib/mysql-cluster

    According to the official MySQL cluster documentation you need to run

    ndb_mgmd -f /var/lib/mysql-cluster/config.ini

    But if your configuration directory is not /usr/local/mysql/mysql-cluster, you have to run the command suggested above with the --configdir option.

     

  • On each of the data node instances, start the ndbd process (see the note after this list about the very first start).
    /usr/local/mysql/bin/ndbd
  • Now start the MySQL server on each SQL node.
    /etc/init.d/mysql.server start
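Note that when a data node is started for the very first time, or after a change to config.ini, the MySQL cluster documentation recommends starting it with the --initial option so that it initializes its data files; on normal restarts the option should be omitted:

/usr/local/mysql/bin/ndbd --initial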

 

Now you should have a MySQL cluster running on your test setup. When you run the ndb_mgm management client you should see output something like this:

 

ndb_mgm
-- NDB Cluster -- Management Client --
ndb_mgm> SHOW
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)] 2 node(s)
id=2     @192.168.0.30 (Version: 5.1.47-ndb-6.3.39, Nodegroup: 0, Master)
id=3     @192.168.0.40 (Version: 5.1.47-ndb-6.3.39, Nodegroup: 0)
[ndb_mgmd(MGM)] 1 node(s)
id=1     @192.168.0.10 (Version: 5.1.47-ndb-6.3.39)
[mysqld(API)] 2 node(s)
id=4    @192.168.0.20 (Version: 5.1.47-ndb-6.3.39)
id=5    @192.168.0.50 (Version: 5.1.47-ndb-6.3.39)

 

Now you can test shutting down and restarting the cluster using the following commands.

To shut down:

ndb_mgm -e shutdown

Note that you run the ndb_mgm commands on the management server node, because that is where we installed the ndb_mgm program.

 

Now we test restarting the cluster. First, on the management node, run:

ndb_mgmd -f /var/lib/mysql-cluster/config.ini --configdir=/var/lib/mysql-cluster

 

Then, on each of the data nodes, run:

ndbd

 

Now, after verifying that both data nodes have started, run the following on the SQL nodes:

mysqld_safe &

 

Later, if you change the configuration file, make sure to start ndb_mgmd with the --reload option so that the cluster picks up the new configuration.
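For example, reusing the same configuration paths as above:

ndb_mgmd -f /var/lib/mysql-cluster/config.ini --configdir=/var/lib/mysql-cluster --reload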

For more information on MySQL cluster configuration, see the MySQL cluster documentation [1].

Sample deployment using WSO2 products

We will configure WSO2 ESB and WSO2 BAM to run with our MySQL cluster.

First we need to import the two databases into the MySQL cluster. I am not going to elaborate on this process here, for brevity. It is enough to say that I used mysqldump to take dumps of the sufficiently small existing registry and userstore databases and imported them into the cluster using the normal MySQL import procedure (a sketch of these steps follows the list below). Before doing the imports I made the following changes to the database scripts.

 

  • Made every table use the NDB engine.
  • Removed all foreign key constraints. Ideally I should implement database triggers to enforce data integrity, but for this deployment I bypassed that process.
  • Removed indexes.
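A minimal sketch of the dump-and-import steps, assuming the databases are named registry and userstore, the original tables use InnoDB, and the SQL node at 192.168.0.20 is reachable (adjust credentials, hostnames, and the foreign key and index edits to your own dumps):

# dump the existing registry database from the old server
mysqldump -u root -p registry > registry.sql

# switch every table to the NDB storage engine
sed -i 's/ENGINE=InnoDB/ENGINE=NDBCLUSTER/g' registry.sql

# (foreign key constraints and secondary indexes were removed
#  from the dump by hand, as described in the list above)

# import into the cluster through one of the SQL nodes
mysql -u root -p -h 192.168.0.20 -e "CREATE DATABASE registry"
mysql -u root -p -h 192.168.0.20 registry < registry.sql

Repeat the same steps for the userstore database.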

 

Now let's install the WSO2 applications on the SQL nodes: WSO2 ESB on one SQL node and WSO2 BAM on the other.

To proceed, download the WSO2 BAM server product from [2] and the WSO2 ESB server from [3]. Unarchive them into appropriate directories and configure them to point to the same registry and userstore databases by editing the registry.xml and user-mgt.xml files respectively.

Edit repository/conf/registry.xml in both products as follows:

 

<dbConfig name="wso2registry">
    <url>jdbc:mysql://192.168.0.20:3306/registry?autoReconnect=true</url>
    <userName>registry</userName>
    <password>password123</password>
    <driverName>com.mysql.jdbc.Driver</driverName>
    <maxActive>50</maxActive>
    <maxWait>60000</maxWait>
    <minIdle>5</minIdle>
    <validationQuery>SELECT 1</validationQuery>
</dbConfig>

 

And configure repository/conf/user-mgt.xml in both products with the corresponding database connection properties, as follows (the exact element names may differ slightly between Carbon versions; map the values accordingly):

 

<Property name="url">jdbc:mysql://192.168.0.20:3306/userstore?autoReconnect=true</Property>
<Property name="userName">userstore</Property>
<Property name="password">password123</Property>
<Property name="driverName">com.mysql.jdbc.Driver</Property>
<Property name="maxActive">50</Property>
<Property name="maxWait">60000</Property>
<Property name="minIdle">5</Property>
<Property name="realmConfigBuilder">org.wso2.carbon.user.core.config.multitenancy.SimpleRealmConfigBuilder</Property>
<Property name="validationQuery">SELECT 1</Property>

 

Note that the IP address should be changed according to where the corresponding SQL node is installed.
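Before starting the servers, it is worth checking that each SQL node can actually reach the imported databases with the credentials configured above (registry/userstore with password123 are the example credentials used in this document; adjust the host to the SQL node you are testing):

mysql -u registry -ppassword123 -h 192.168.0.20 -e "SHOW TABLES" registry
mysql -u userstore -ppassword123 -h 192.168.0.20 -e "SHOW TABLES" userstore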

Now start the servers from their corresponding bin directories by executing:

./wso2server.sh

Now you can log into the admin console of the BAM server and monitor your ESB seamlessly.

Some important notes when designing a MySQL cluster

The following are some facts that should be kept in mind when designing a MySQL cluster to be used with WSO2 products.

  • To minimize network-related overhead, use a gigabit switch to wire the cluster, preferably behind a firewall. It is not recommended to host cluster nodes on the public Internet.
  • If using a virtual environment, it is recommended to use dedicated physical storage for each data node.
  • This document assumes only in-memory cluster storage. Managing cluster disk storage becomes important when your database grows. There is good documentation from MySQL on the subject [4].
  • There is a handy Perl script, ndb_size.pl [5] (found in the bin directory of your cluster download), which, when executed against a running MySQL server, tells you how much space you would need if you converted the databases on that server to the NDB storage engine (see the example below).
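A minimal example of running it against the registry database, assuming the script sits at /usr/local/mysql/bin/ndb_size.pl and using the example host and credentials from this document (adjust to your setup):

/usr/local/mysql/bin/ndb_size.pl --database=registry --hostname=192.168.0.20 --user=root --password=password123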

References

[1] https://dev.mysql.com/doc/mysql-cluster-excerpt/5.1/en/mysql-cluster-mul...

[2] https://wso2.com/products/business-activity-monitor

[3] https://wso2.com/products/enterprise-service-bus

[4] https://dev.mysql.com/doc/refman/5.1/en/mysql-cluster-disk-data-objects....

[5] https://dev.mysql.com/doc/refman/5.0/en/mysql-cluster-programs-ndb-size-...

 

Author: Damitha Kumarage

Technical Lead, WSO2 Inc.