
mysqldump or Percona XtraBackup? Backup Strategies for MySQL Galera Cluster


Coming up with a backup strategy that does not affect database performance or lock your tables can be tricky. How do you backup your production database cluster without affecting your applications? Should you use mysqldump or Percona Xtrabackup? When should you use incremental backups? Where do you store the backups? In this blog post, we will cover some of the common backup methods for Galera Cluster for MySQL/MariaDB, and how you can get the most out of these. 

 

Backup Method

There are various ways to backup your Galera Cluster data:

  • xtrabackup (full physical backup)
  • xtrabackup (incremental physical backup)
  • mysqldump (logical backup)
  • binary logging 
  • replication slave

Xtrabackup (full backup)

Xtrabackup is an open-source MySQL hot backup utility from Percona. It is a combination of xtrabackup (written in C) and innobackupex (written in Perl), and can back up data from InnoDB, XtraDB and MyISAM tables. 

Xtrabackup does not lock your database during the backup process. For large databases (100+ GB), it provides much better restoration time as compared to mysqldump. The restoration process involves preparing MySQL data from the backup files before replacing or switching it with the current data directory on the target node. However, the restoration process is not very straightforward. We have covered some backup best practices and an example of how to restore with xtrabackup in this blog post.

ClusterControl allows you to schedule backups using Xtrabackup and mysqldump. It can store the backup files locally on the node where the backup is taken, or the backup files can also be streamed to the controller node and compressed on-the-fly.

Xtrabackup (incremental backup)

If you do not want to back up your entire database every single time, then you should look into incremental backups. Xtrabackup supports incremental backups, where it copies only the data that has changed since the last backup. You can have many incremental backups between each full backup. For every incremental backup, you need information on the last one you did, so it knows where to start the new one. Details on how this works can be found here.
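
As a quick illustration (not the exact commands ClusterControl runs), a full backup followed by an incremental backup with innobackupex could look like the sketch below; the directories, credentials and timestamp are placeholders:

# Full backup into a timestamped directory under /root/backups/full
$ innobackupex --user=backupuser --password=secret /root/backups/full

# Incremental backup copying only the pages changed since the full backup above
$ innobackupex --user=backupuser --password=secret --incremental /root/backups/inc1 \
  --incremental-basedir=/root/backups/full/2014-12-03_04-26-04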

ClusterControl manages incremental backups, and groups the combination of full and incremental backups in a backup set. A backup set has an ID based on the latest full backup ID. All incremental backups after a full backup will be part of the same backup set, as seen in the screenshot below:

Note that without a full backup to start from, the incremental backups are useless.

mysqldump

This is probably the most popular backup method for MySQL. mysqldump is the perfect tool when migrating data between different versions or storage engines of MySQL, or when you need to change something in the text file dump before restoring it. Use it in conjunction with xtrabackup to give you more recovery options.

ClusterControl performs mysqldump against all databases by using the --single-transaction option. When using --single-transaction, you get a consistent backup of InnoDB tables without making the database read-only. However, --single-transaction does not help with MyISAM tables (these would be backed up inconsistently). When using Galera Cluster, all tables should be InnoDB anyway (except the mysql system tables, which is okay).
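
For illustration, such a dump looks roughly like the sketch below; the credentials and output path are placeholders and the exact invocation ClusterControl uses may differ:

$ mysqldump -u backupuser -p --single-transaction --all-databases > /root/backups/mysqldump_$(date +%F_%H%M%S).sql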

This means that using mysqldump is safe, but it has drawbacks. For each database and each table, mysqldump issues a "SELECT * FROM <table>" and writes the content to the dump file. The problem with "SELECT * FROM .." arises if your data set does not fit in the InnoDB buffer pool. The active data set (the data your application uses) takes a hit, because the SELECT loads pages from disk into the InnoDB buffer pool and, to make room, evicts pages belonging to the active data set back to disk.

Hence you will get a performance degradation on that node, since the active data set is no longer in RAM but on DISK (if the InnoDB buffer pool is not large enough to fit the entire database).

If you want to avoid that, then use xtrabackup. Nevertheless, it is common to use --single-transaction, and it does not block the nodes (except for a very short time when the START TRANSACTION is issued, which can be neglected). And yes, all nodes can still perform reads and writes. But you will take a performance hit in the cluster while mysqldump is running, since CPU, disk and RAM are used by the mysqldump process. A Galera Cluster is as fast as its slowest running node.

Binary Logs

Binary logs can be used as incremental backups (particularly for mysqldump) as well as for point-in-time recovery. ClusterControl will automatically perform mysqldump with --master-data=2 if it detects binary logging is enabled on the particular node; the dump file will then contain a statement with the binary log file and position. Binary logs can eat up a significant amount of disk space, so setting an appropriate expire_logs_days value is important. It is mandatory to enable log_slave_updates on a Galera node so events originating from the other cluster nodes are captured when the local slave threads apply writesets.
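
For reference, a minimal my.cnf sketch of the settings mentioned above; the values are examples and should be adapted to your retention needs:

[mysqld]
server-id=201
log-bin=binlog
log-slave-updates=1
# Purge binary logs automatically after 8 days
expire_logs_days=8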

To perform a point-in-time recovery of a Galera Cluster, please refer to this blog post.

Replication Slave

From MySQL 5.6 (or the equivalent MariaDB Cluster 10), it is possible to have a replication slave from a Galera cluster with GTID auto positioning. One approach is to run backups and ad-hoc analytical reporting on the slave, and therefore offload your Galera cluster. You can ensure the data integrity of the replicated data by performing regular checksums using, e.g., the Percona Toolkit's pt-table-checksum.
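
For example, a basic pt-table-checksum run from the Galera node the slave replicates from could look like the sketch below; the host, user and database are placeholders, and it assumes the Percona Toolkit is installed:

$ pt-table-checksum --databases=mydb1 --replicate=percona.checksums \
  h=192.168.50.101,u=checksum_user,p=checksum_password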

Setting up asynchronous replication from a Galera cluster to a standalone MySQL server is covered in this blog post.

 

Backup Locations

If you are using ClusterControl, you have a few options where you can store your backup. 

Storing on Controller

Storing backups on the controller node provides a centralized backup storage outside of your Galera cluster. It might make sense to not use extra disk on your Galera nodes for this. You can also verify the correctness of the backups from the controller node. Make sure the controller has enough disk space, or else, mount an external storage device.

Storing on DB Node

You can store the backup files on the node where the backup is performed. This is a great approach if you have a dedicated Galera node as backup server, or if the backup directory is mounted on e.g. a SAN. Consider storing the files on more than one DB node for redundancy purposes. 

Storing in Cloud

ClusterControl has integration with AWS S3 and Glacier services, where you can upload and retrieve your backups using these services. This requires extra configurations on ClusterControl > Service Providers > AWS Credentials. Having your backups off-site, in the cloud, can be a good way to secure your backups against disaster. 

Details on this can be found in the ClusterControl User Guide under the Online Storage section. You can also easily transfer backups to remote locations using BitTorrent Sync.

 

How to determine which backup strategy to use?

Your backup strategy will depend on factors ranging from database size, growth and workload to hardware resources and non-functional requirements (e.g. the need to do point-in-time recovery). 

Recovery

Determine whether you need point-in-time recovery and enable binary logging on one or more Galera nodes. Running a server with binary logging enabled has an impact on performance. Binary logging also allows you to set up replication to a slave, which can be used for other purposes.

Database Size

ClusterControl tracks the growth of your databases, so you can see how your databases have grown over time. If your database fits in the InnoDB buffer pool, then mysqldump should not have a negative impact on the cluster performance.
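
A quick way to compare your InnoDB data size against the buffer pool is to query the standard information_schema tables:

mysql> SELECT @@innodb_buffer_pool_size/1024/1024 AS buffer_pool_mb, SUM(data_length+index_length)/1024/1024 AS innodb_data_mb FROM information_schema.tables WHERE engine='InnoDB';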

 

Database Usage

To determine frequency of backups, use the Cluster Load graph to determine the write load over the past weeks or months. You can use that to calculate the max amount of data that could potentially be lost if you lost your cluster and had to restore from the last backup. 

 

Backup policy

A backup policy could be as follows:

  • Full backup (xtrabackup) every Sunday at 03:00
  • Incremental backup (xtrabackup) Monday to Saturday at 03:00

mysqldump backups are also convenient to have, as they are easily transportable to other servers. 

Make sure you backup your data before making significant changes, e.g. schema, software or hardware changes. In conjunction with binary logging, you will avoid data loss and can at least revert to the position before the failed change (e.g. an erroneous DROP TABLE).  

If using binary logs, we recommend you set expire_logs_days=X+1 in my.cnf, where X is the number of days between full backups.
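
As an illustration of the policy above, the schedule and retention could be implemented as follows; the wrapper script names are hypothetical (ClusterControl schedules these jobs from its UI), and the my.cnf value corresponds to X=7 days between full backups:

# /etc/my.cnf on the node used for backups: keep binary logs one day longer than the full backup interval
[mysqld]
expire_logs_days=8

# Hypothetical crontab entries on the backup host
0 3 * * 0   /usr/local/bin/full_xtrabackup.sh         # full backup every Sunday at 03:00
0 3 * * 1-6 /usr/local/bin/incremental_xtrabackup.sh  # incremental backup Monday to Saturday at 03:00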

 

Galera backup strategy

Use your monitoring data to understand the workload, and then plan your backup strategy. The following flowchart helps illustrate the decision process:

 

The flowchart above is work in progress, so any suggestions on improvements are very welcome. For instance, xtrabackup can be used in most circumstances. 

Your database size and usage usually grows with time, and having a good backup strategy also becomes more important. So if you have not given it careful thought, perhaps now is a good time. 



Benchmark of Load Balancers for MySQL/MariaDB Galera Cluster


When running a MariaDB Cluster or Percona XtraDB Cluster, it is common to use a load balancer to distribute client requests across multiple database nodes. Load balancing SQL requests aims to optimize the usage of the database nodes, maximize throughput, minimize response times and avoid overload of the Galera nodes. 

In this blog post, we’ll take a look at four different open source load balancers, and do a quick benchmark to compare performance:

  • HAproxy by HAproxy Technologies
  • IPVS by Linux Virtual Server Project
  • Galera Load Balancer by Codership
  • MySQL Proxy by Oracle (alpha)

Note that there are other options out there, e.g. MaxScale from the MariaDB team, that we plan to cover in a future post.

When to Load Balance Galera Requests

Although Galera Cluster does multi-master synchronous replication, you can safely read and write on all database nodes, provided that you comply with the following:

  • The table you are writing to is not a hotspot table
  • All tables must have an explicit primary key defined
  • All tables must run under the InnoDB storage engine
  • Huge writesets must be run in batches; for example, it is better to run 100 inserts of 1,000 rows each rather than a single insert of 100,000 rows (see the sketch after this list)
  • Your application can tolerate non-sequential auto-increment values
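
A hypothetical sketch of such batching from the shell; the table names, column and connection details are placeholders:

# Load 100,000 rows as 100 transactions of 1,000 rows each instead of one huge insert
for i in $(seq 0 99); do
  mysql -h 192.168.50.120 -u app -p'password' mydb -e \
    "INSERT INTO target_table SELECT * FROM staging_table WHERE id BETWEEN $((i*1000+1)) AND $(((i+1)*1000));"
done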

If the above requirements are met, you can have a pretty safe multi-node write cluster without the need to split the writes across multiple masters (sharding), as you would need to do in a MySQL Replication setup because of slave lag problems. Furthermore, having load balancers between the application and database layer can be very convenient, since the load balancers can assume that all nodes are equal; no extra configuration such as read/write splitting or promoting a slave node to a master is required.

Note that if you run into deadlocks with Galera Cluster, you can send all writes to a single node and avoid concurrency issues across nodes. Read requests can still be load balanced across all nodes. 

Load Balancers

HAproxy

HAProxy stands for High Availability Proxy; it is an open source TCP/HTTP-based load balancer and proxying solution. It is commonly used to improve the performance and availability of a service by distributing the workload across multiple servers. Over the years, it has become the de-facto open source load balancer and is now shipped with most mainstream Linux distributions.

For this test, we used HAProxy version 1.4 deployed via ClusterControl.

 

IP Virtual Server (IPVS)

IPVS implements transport-layer load balancing, usually called Layer 4 LAN switching, as part of the Linux kernel. IPVS is incorporated into the Linux Virtual Server (LVS), where it runs on a host and acts as a load balancer in front of a cluster of servers.

We chose Keepalived, a load balancing framework that relies on IPVS to load balance Linux-based infrastructures. Keepalived implements a set of checkers to dynamically and adaptively maintain and manage a load balanced server pool according to their health. High availability and router failover is achieved via the VRRP protocol. 

In this test, we configured Keepalived with direct routing, which provides increased performance compared to other LVS networking topologies. Direct routing allows the real servers to process and route packets directly back to the requesting client, rather than passing all outgoing packets through the LVS router. The following is what we defined in keepalived.conf:

 

global_defs {
  router_id lb1
}
vrrp_instance mysql_galera {
  interface eth0
  state MASTER
  virtual_router_id 1
  priority 101
  track_interface {
    eth0
  }
  virtual_ipaddress {
    192.168.50.120 dev eth0
  }
}
virtual_server 192.168.50.120 3306 {
  delay_loop 2
  lb_algo rr
  lb_kind DR
  protocol TCP
  real_server 192.168.50.101 3306 {
    weight 10
    TCP_CHECK {
      connect_port    3306
      connect_timeout 1
    }
  }
  real_server 192.168.50.102 3306 {
    weight 10
    TCP_CHECK {
      connect_port    3306
      connect_timeout 1
    }
  }
  real_server 192.168.50.103 3306 {
    weight 10
    TCP_CHECK {
      connect_port    3306
      connect_timeout 1
    }
  }
}

 

Galera Load Balancer (glb)

glb is a simple TCP connection balancer built by Codership, the creators of Galera Cluster. It was inspired by Pen, but unlike Pen, its functionality is limited to balancing generic TCP connections. glb is multithreaded, so it can utilize multiple CPU cores. According to Seppo from Codership, the goal with glb was to have a high-throughput load balancer that would not be a bottleneck when benchmarking Galera Cluster.

The project is active (although at the time of writing, it is not listed on Codership’s download page). Here is our /etc/sysconfig/glbd content:

 

LISTEN_ADDR="8010"
CONTROL_ADDR="127.0.0.1:8011"
CONTROL_FIFO="/var/run/glbd.fifo"
THREADS="4"
MAX_CONN=256
DEFAULT_TARGETS="192.168.50.101:3306:10 192.168.50.102:3306:10 192.168.50.103:3306:10"
OTHER_OPTIONS="--round-robin"

MySQL Proxy

When MySQL Proxy was born, it was a promising technology and attracted quite a few users. It is extensible through the Lua scripting language, which makes it a very flexible technology. MySQL Proxy does not embed an SQL parser; basic tokenization is done in Lua. Although MySQL Proxy has been in alpha status for a long time, we do find it in use in production environments.

Here is how we configured mysql-proxy.cnf:

[mysql-proxy]

daemon = true
pid-file = /var/run/mysql-proxy.pid
log-file = /var/log/mysql-proxy.log
log-level = debug
max-open-files = 1024
plugins = admin,proxy
user = mysql-proxy
event-threads = 4
proxy-address = 0.0.0.0:3307
proxy-backend-addresses = 192.168.50.101:3306,192.168.50.102:3306,192.168.50.103:3306
admin-lua-script = /usr/lib64/mysql-proxy/lua/admin.lua
admin-username = admin
admin-password = admin

Benchmarks

We used 5 virtual machines on one physical host (each with 4 vCPUs/2 GB RAM/10 GB SSD) with the following roles:

  • 1 host for ClusterControl to collect monitoring data from Galera Cluster
  • 1 host for load balancers. Sysbench 0.5 is installed in this node to minimize network overhead
  • 3 hosts for MySQL Galera Cluster (5.6.16 wsrep_25.5.r4064)

The above setup is illustrated in the following figure:

Since the load balancers differ significantly and have no common baseline configuration, we used the fairest comparable options across them:

  • Number of threads = 4
  • Load balancing algorithm = Round robin
  • Maximum connections = 256

We prepared approximately one million rows of data in 12 separate tables, taking 400MB of disk space: 

$ sysbench \
--db-driver=mysql \
--mysql-table-engine=innodb \
--oltp-table-size=100000 \
--oltp-tables-count=12 \
--num-threads=4 \
--mysql-host=192.168.50.101 \
--mysql-port=3306 \
--mysql-user=sbtest \
--mysql-password=password \
--test=/usr/share/sysbench/tests/db/parallel_prepare.lua \
run

InnoDB data file should fit into the buffer pool to minimize IO overhead and this test is expected to be CPU-bound. The following was the command line we used for the OLTP benchmarking tests:

$ sysbench \
--db-driver=mysql \
--num-threads=4 \
--max-requests=5000 \
--oltp-tables-count=12 \
--oltp-table-size=100000 \
--oltp-test-mode=complex \
--oltp-dist-type=uniform \
--test=/usr/share/sysbench/tests/db/oltp.lua \
--mysql-host= \
--mysql-port= \
--mysql-user=sbtest \
--mysql-password=password \
run

The command above was repeated 100 times on each load balancer, including one control test as a baseline where we specified a single MySQL host for sysbench to connect to. Sysbench is also able to connect to several MySQL hosts and distribute connections on a round-robin basis; we included this test as well.
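
As an illustration, a multi-host run could be invoked as below, mirroring the OLTP command above and assuming sysbench 0.5's support for a comma-separated host list; sysbench then distributes connections across the listed hosts in round-robin fashion:

$ sysbench \
--db-driver=mysql \
--num-threads=4 \
--max-requests=5000 \
--oltp-tables-count=12 \
--oltp-table-size=100000 \
--test=/usr/share/sysbench/tests/db/oltp.lua \
--mysql-host=192.168.50.101,192.168.50.102,192.168.50.103 \
--mysql-port=3306 \
--mysql-user=sbtest \
--mysql-password=password \
run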

Observations and Results

Observations

The physical host’s CPU was constantly at 100% throughout the test. This is not a good sign for proxy-based load balancers, since they need to fight for CPU time. A better test would be to run on bare metal servers, or to isolate the load balancer on a separate physical host.

The results obtained from this test are relevant only if you run in a virtual machine environment.

 

Results

We measured the total throughput (transactions per second) taken from the Sysbench result. The following chart shows the number of transactions that the database cluster can serve in one second (higher is better):

From the graph, we can see that IPVS (Keepalived) is the clear winner and has the smallest overhead, since it runs at kernel level and mainly just routes packets. Keepalived is a userspace program that does the health checks and manages the kernel interface to LVS. HAproxy, glb and MySQL Proxy, which operate at a higher layer, perform ~40% slower than IPVS and 25% slower compared to Sysbench round robin, as shown in the chart below:

Unlike IPVS, which behaves like a router, proxy-based load balancers (glb, HAproxy, MySQL Proxy) operate on layer 7. They understand the backend servers' protocol, are able to do packet-level inspection and protocol routing, and are far more customizable. Proxy-based load balancers operate at the application level and are easier to combine with firewalls. However, they are significantly slower.

IPVS, on the other hand, is pretty hard to configure and can be confusing to deal with, especially if you are running NAT, where your topology has to be set up so that the LVS load balancer is also the default gateway. Since it is part of the kernel, upgrading LVS might mean a kernel change and reboot. Take note that the director and real servers can't access the virtual service; you need an outside client (another host) to access it. So LVS has an architectural impact: if you run on NAT, clients and servers cannot run in the same VLAN and subnet. It is only a packet forwarder, and therefore is very fast.

If you need to balance solely on number of connections or your architecture is running on a CPU-bound environment, the layer 4 load balancer should suffice. On the other hand, if you want to have a more robust load balancing functionality with simpler setup, you can use the proxy-based load balancer like HAproxy. Faster doesn’t mean robust, and slower doesn’t necessarily mean it is not worth it. In a future post, we plan on looking at MaxScale. Let us know if you are using any other load balancers.


Multi-source Replication with Galera Cluster for MySQL


Multi-source replication means that one server can have multiple masters from which it replicates. Why multi-source? One good reason is to consolidate databases (e.g. merge your shards) for analytical reporting or as a centralized backup server. MariaDB 10 already has this feature, and MySQL 5.7 will also support it. 

It is possible to set up your Galera Cluster as an aggregator of your masters in a multi-source replication setup; we'll walk you through the steps in this blog. Note that the howto is for Galera Cluster for MySQL (Codership) and Percona XtraDB Cluster. In a separate post, we’ll show you how to configure MariaDB Cluster 10 instead. If you would like to use MySQL Cluster (NDB) as aggregator, then check out this blog.

 

Galera Cluster as Aggregator/Slave

 

Galera Cluster can operate both as MySQL master and slave. Each Galera node can act as a slave channel, accepting replication from a master. The number of slave channels should be equal to or less than the number of Galera nodes in the cluster. So, if you have a three-node Galera cluster, you can have up to three different replication sources connected to it. Note that in MariaDB Galera Cluster 10, you can configure as many sources as you want, since each node supports multi-source replication. 

To achieve multi-source replication in MySQL 5.6, you cannot have GTID enabled for Galera Cluster. GTID would cause the Galera cluster to work as a single unit (imagine one single slave server), since it globally preserves the MySQL GTID events on the cluster. The cluster would then not be able to replicate from more than one master. Hence, we will use the "legacy" way to determine the starting binary log file and position. On a side note, enabling GTID is highly recommended if your Galera Cluster acts as a MySQL master, as described in this blog post.
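
Before configuring the slave channels, it can be worth verifying that GTID mode is off on the Galera nodes (OFF is the MySQL 5.6 default):

$ mysql -u root -p -e "SHOW GLOBAL VARIABLES LIKE 'gtid_mode'"
# The value should be OFF for the legacy file/position setup described here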

We will setup multi-source replication as below:

We have 3 standalone MySQL servers (masters), and each master has a separate database: mydb1, mydb2 and mydb3. We would like to consolidate all 3 databases into our Galera cluster.

 

Setting Up Masters

 

1. On each standalone MySQL server, configure it as a master by adding a server ID and enabling binary logging with the ROW format:

# mysql1 my.cnf
server-id=101
log-bin=binlog
binlog-format=ROW

 

# mysql2 my.cnf
server-id=102
log-bin=binlog
binlog-format=ROW

 

# mysql3 my.cnf
server-id=103
log-bin=binlog
binlog-format=ROW

 

2. Then, create and grant a replication user:

mysql> GRANT REPLICATION SLAVE ON *.* TO 'slave'@'%' IDENTIFIED BY 'slavepassword';
mysql> FLUSH PRIVILEGES;

 

Setting Up Slaves

 

The asynchronous replication slave thread is stopped when a node tries to apply replication events while it is in a non-primary state. By default, it remains stopped after successfully re-joining the cluster. It is recommended to configure wsrep_restart_slave=1, which enables the MySQL slave to be restarted automatically when the node rejoins the cluster.

1. On each Galera node, configure MySQL as below:

# galera1 my.cnf
server-id=201
log-bin=binlog
log-slave-updates=1
wsrep-restart-slave=1

 

# galera2 my.cnf
server-id=202
log-bin=binlog
log-slave-updates=1
wsrep-restart-slave=1

 

# galera3 my.cnf
server-id=203
log-bin=binlog
log-slave-updates=1
wsrep-restart-slave=1

 

** Perform a rolling restart of the cluster to apply the new changes. For ClusterControl users, go to ClusterControl > Upgrades > Rolling Restart.

 

2. Assuming that you have already granted the database user on the MySQL masters so it can connect from the Galera hosts, dump each MySQL database on the respective Galera node:

galera1:

$ mysqldump -u mydb1 -p -h mysql1 --single-transaction --master-data=1 mydb1 > mydb1.sql

 

galera2:

$ mysqldump -u mydb2 -p -h mysql2 --single-transaction --master-data=1 mydb2 > mydb2.sql

 

galera3:

$ mysqldump -u mydb3 -p -h mysql3 --single-transaction --master-data=1 mydb3 > mydb3.sql

 

** To ensure Galera replicates data smoothly, ensure all tables are running on InnoDB. Before you restore, you can use the following command to convert the dump file if it contains MyISAM tables:

$ sed -i 's|MyISAM|InnoDB|g' [the dump file]

 

3. On each Galera node, create the corresponding database and restore the dump files into the Galera Cluster:

galera1:

$ mysql -uroot -p -e 'CREATE SCHEMA mydb1'
$ mysql -uroot -p mydb1 < mydb1.sql

 

galera2:

$ mysql -uroot -p -e 'CREATE SCHEMA mydb2'
$ mysql -uroot -p mydb2 < mydb2.sql

 

galera3:

$ mysql -uroot -p -e 'CREATE SCHEMA mydb3'
$ mysql -uroot -p mydb3 < mydb3.sql

 

** The above steps should be performed on each Galera node, so that each slave can position itself correctly in the binary log as written in its dump file.

 

4. Point each Galera node to its respective master:

galera1:

mysql> STOP SLAVE;
mysql> CHANGE MASTER TO MASTER_HOST='mysql1', MASTER_PORT=3306, MASTER_USER='slave', MASTER_PASSWORD='slavepassword';
mysql> START SLAVE;

 

galera2:

mysql> STOP SLAVE;
mysql> CHANGE MASTER TO MASTER_HOST='mysql2', MASTER_PORT=3306, MASTER_USER='slave', MASTER_PASSWORD='slavepassword';
mysql> START SLAVE;

 

galera3:

mysql> STOP SLAVE;
mysql> CHANGE MASTER TO MASTER_HOST='mysql3', MASTER_PORT=3306, MASTER_USER='slave', MASTER_PASSWORD='slavepassword';
mysql> START SLAVE;

 

Verify if slaves start correctly:

mysql> SHOW SLAVE STATUS\G

 

And ensure you get:

...
Slave_IO_Running: Yes
Slave_SQL_Running: Yes

 

At this point, our Galera cluster has started to accept replication events from three different sources.

 

ClusterControl automatically detects whether a Galera node is running as a slave. A new node indicator for the slave and a Slave Nodes table grid will appear, showing the slave monitoring data on the Overview page:

 

We can now see that our databases from multiple sources have been replicated into the cluster, as shown in the DB Growth screenshot below:

 

Caveats

 

  • Since MySQL replication is single threaded, a Galera node will apply replication events only as fast as a native MySQL slave. As a workaround, you can configure wsrep_mysql_replication_bundle=n to group n MySQL replication transactions into one large transaction (see the sketch below).
  • MySQL replication events are treated like regular MySQL client transactions; they must go through the Galera replication pipeline at commit time. This adds some delay before commit.
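
A minimal my.cnf sketch of the workaround mentioned in the first caveat; the value is just an example:

# On the Galera nodes acting as replication slaves
[mysqld]
# Group 100 incoming replication transactions into one writeset
wsrep_mysql_replication_bundle=100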

 


Multi-source Replication with MariaDB Galera Cluster


MariaDB 10 supports multi-source replication, and each MariaDB Galera node can have up to 64 masters connected to it. So it is possible to use a MariaDB Cluster as an aggregator for many single-instance MariaDB master servers.

In this blog post, we are going to show you how to setup multi-source replication with MariaDB Galera Cluster, where one of the Galera nodes is acting as slave to 3 MariaDB masters (see diagram below). If you would like to set this up with Percona XtraDB Cluster or Galera Cluster (Codership), please read this post instead.

 

MariaDB GTID vs MySQL GTID

 

MariaDB has a different implementation of Global Transaction ID (GTID), which is enabled by default starting from MariaDB 10.0.2. Multi-source replication in MariaDB works with both GTID and the legacy binlog file and position, in contrast to the MySQL implementation.

A GTID consists of three separate values:

  • Domain ID - Replication domain. A replication domain is a server or group of servers that generate a single, strictly ordered replication stream.
  • Server ID - Server identifier number to enable master and slave servers to identify themselves uniquely.
  • Event Group ID - A sequence number for a collection of events that are always applied as a unit. Every binlog event group (e.g. transaction, DDL, non-transactional statement) is annotated with its GTID.

The figure below illustrates the differences between the two GTIDs:

In MariaDB, there is no special configuration needed on the server to start using GTID. Some of MariaDB GTID advantages:

  • It is easy to identify which server or domain the event group is originating from
  • You do not necessarily need to turn on binary logging on slaves
  • It allows multi-source replication with distinct domain ID
  • Enabling GTID features is dynamic, you don’t have to restart the MariaDB server
  • The state of the slave is recorded in a crash-safe way

Despite the differences between the two, it is still possible to replicate from MySQL 5.6 to MariaDB 10.0 or vice versa. However, you will not be able to use the GTID features to automatically pick the correct binlog position when switching to a new master; old-style MySQL replication will still work. We highly recommend you read the MariaDB GTID knowledge base.

 

MariaDB Galera Cluster as Slave

 

In our setup, we used MariaDB 10.0.14 on masters and MariaDB Galera Cluster 10.0.14 as slave. We have three master servers (mariadb1, mariadb2, mariadb3) and each master has a separate database: mydb1, mydb2 and mydb3. The 3 servers replicate to a Galera node (mgc1) in multi-source mode.

When using multi-source replication, where a single slave connects to multiple masters, each master needs to be configured with its own distinct domain ID.

MariaDB provides a function to easily determine the GTID value corresponding to a binary log file and position, which is usually recorded by MySQL backup tools (mysqldump or Xtrabackup):

MariaDB> SELECT BINLOG_GTID_POS('mysql-bin.000003', 155267212);
+-------------------------------------------------+
| BINLOG_GTID_POS('mysql-bin.000003', 155267212)  |
+-------------------------------------------------+
| 1-101-340                                       |
+-------------------------------------------------+

 

Setting up Masters

 

1. On each standalone MariaDB server, configure it as a master by adding a server ID and a domain ID, and enabling binary logging with the ROW format under the [mysqld] directive:

# mariadb1 my.cnf
server-id=101
log-bin=binlog
gtid-domain-id=1
binlog-format=ROW

 

# mariadb2 my.cnf
server-id=102
log-bin=binlog
gtid-domain-id=2
binlog-format=ROW

 

# mariadb3 my.cnf
server-id=103
log-bin=binlog
gtid-domain-id=3
binlog-format=ROW

 

2. Perform RESET MASTER so we get a correct binary log entry with the assigned domain ID:

MariaDB> RESET MASTER;

 

3. Then, create and grant a replication user:

MariaDB> GRANT REPLICATION SLAVE ON *.* TO 'slave'@'%' IDENTIFIED BY 'slavepassword';
MariaDB> FLUSH PRIVILEGES;

 

Setting up Slaves

 

The asynchronous replication slave thread is stopped when a node tries to apply replication events and it is in a non-primary state. By default, it remains stopped after successfully re-joining the cluster. It is recommended to configure wsrep_restart_slave=1 which enables the MySQL slave to be restarted automatically when the node rejoins the cluster.

 

1. On the corresponding Galera nodes, set the configuration as below:

# mgc1 my.cnf
server-id=201
binlog_format=ROW
log_slave_updates=1
log_bin=binlog
wsrep-restart-slave=1

 

# mgc2 my.cnf
server-id=202
binlog_format=ROW
log_slave_updates=1
log_bin=binlog
wsrep-restart-slave=1

 

# mgc3 my.cnf
server-id=203
binlog_format=ROW
log_slave_updates=1
log_bin=binlog
wsrep-restart-slave=1

 

** Perform a rolling restart of the cluster to apply the new changes. For ClusterControl users, go to ClusterControl > Upgrades > Rolling Restart.

 

2. Assuming that you have already granted the database user on the MariaDB masters so it can connect from the Galera hosts, dump each MariaDB database on the Galera node (mgc1):

$ mysqldump -u mydb1 -p -h 10.0.0.61 --single-transaction --master-data=2 mydb1 > mydb1.sql
$ mysqldump -u mydb2 -p -h 10.0.0.62 --single-transaction --master-data=2 mydb2 > mydb2.sql
$ mysqldump -u mydb3 -p -h 10.0.0.63 --single-transaction --master-data=2 mydb3 > mydb3.sql

 

** To ensure Galera replicates data smoothly, ensure all tables are running on InnoDB. Before you restore, you can use the following command to convert the dump file if it contains MyISAM tables:

$ sed -i 's|MyISAM|InnoDB|g' [the dump file]

 

3. On mgc1, create the corresponding databases and restore the dump files:

$ mysql -uroot -p -e 'CREATE SCHEMA mydb1; CREATE SCHEMA mydb2; CREATE SCHEMA mydb3'
$ mysql -uroot -p mydb1 < mydb1.sql
$ mysql -uroot -p mydb2 < mydb2.sql
$ mysql -uroot -p mydb3 < mydb3.sql

 

4. Before we start replication, we need to determine the GTID value on each master. From the dump file itself, we can get both the binary log position and gtid_slave_pos (generated by --master-data=2). Extract the gtid_slave_pos value:

$ head -100 mydb1.sql | grep gtid_slave_pos
-- SET GLOBAL gtid_slave_pos='1-101-676';
 
$ head -100 mydb2.sql | grep gtid_slave_pos
-- SET GLOBAL gtid_slave_pos='2-102-338';
 
$ head -100 mydb3.sql | grep gtid_slave_pos
-- SET GLOBAL gtid_slave_pos='3-103-338';

 

5. Combine all retrieved GTID values and set them as gtid_slave_pos on mgc1:

MariaDB> SET GLOBAL gtid_slave_pos = "1-101-676,2-102-338,3-103-338";

 

6. Configure the master connection for each replication stream, distinguished by the default_master_connection session variable:

MariaDB> SET @@default_master_connection='mariadb1';
MariaDB> CHANGE MASTER 'mariadb1' TO MASTER_HOST='10.0.0.61', MASTER_PORT=3306, MASTER_USER='slave', MASTER_PASSWORD='slavepassword', MASTER_USE_GTID=slave_pos;
MariaDB> SET @@default_master_connection='mariadb2';
MariaDB> CHANGE MASTER 'mariadb2' TO MASTER_HOST='10.0.0.62', MASTER_PORT=3306, MASTER_USER='slave', MASTER_PASSWORD='slavepassword', MASTER_USE_GTID=slave_pos;
MariaDB> SET @@default_master_connection='mariadb3';
MariaDB> CHANGE MASTER 'mariadb3' TO MASTER_HOST='10.0.0.63', MASTER_PORT=3306, MASTER_USER='slave', MASTER_PASSWORD='slavepassword', MASTER_USE_GTID=slave_pos;

 

7. Start all slaves:

MariaDB> START ALL SLAVES;
MariaDB> SHOW WARNINGS;
+-------+------+--------------------------+
| Level | Code | Message                  |
+-------+------+--------------------------+
| Note  | 1937 | SLAVE 'mariadb2' started |
| Note  | 1937 | SLAVE 'mariadb3' started |
| Note  | 1937 | SLAVE 'mariadb1' started |
+-------+------+--------------------------+

 

8. Verify that all slaves started correctly:

MariaDB> SHOW ALL SLAVES STATUS\G

 

And ensure you get the following for each connection:

Connection_name: mariadb1
...
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
...
Connection_name: mariadb2
...
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
...
Connection_name: mariadb3
...
Slave_IO_Running: Yes
Slave_SQL_Running: Yes

 

At this point, our MariaDB Galera Cluster has started to accept replication events from three different sources via mgc1.

 

From your ClusterControl dashboard, you will notice the incoming replication load on mgc1 (10.0.0.71) is propagated to the other nodes of the cluster:

 

We can now see that our databases from our three master sources have been replicated into the cluster, as shown in ClusterControl > Performance > DB Growth:

 


Automation & Management of MariaDB Galera Clusters


MariaDB Galera Cluster involves more effort and resources to administer than standalone MariaDB systems. If you would like to learn how to better manage your MariaDB cluster, then this webinar series is for you. 

 

We will give you practical advice on how to introduce clusters into your MariaDB/MySQL environment, automate deployment, and make it easier for operational staff to manage and monitor the cluster using ClusterControl.

 

Language, Date & Time: 

 

English - Tuesday, September 30th @ 11am CEST: Management & Automation of MariaDB Galera Clusters

French - Tuesday, October 7th @ 10am CEST: Gestion et Automatisation de Clusters Galera pour MariaDB

German - Wednesday, October 8th @ 10am CEST: Verwaltung und Automatisierung von MariaDB Galera Cluster

 

High availability cluster configurations tend to be complex, but once they are designed, they tend to be duplicated many times with minimal variation. Automation can be applied to provisioning, upgrading, patching and scaling. DBAs and sysadmins can then focus on more critical tasks, such as performance tuning, query design, data modeling or providing architectural advice to application developers. A well managed system can mitigate operational risk, which can result in significant savings and reduced downtime. 

 


 

MariaDB Roadshow - London

 

And if you’re in London this September, do join us at the MariaDB Roadshow event on Thursday, September 18th. We’ll be talking about Automation & Management of Database Clusters there as well and would love to talk to you in person! 

 

We look forward to talking to you during one of the webinars and/or seeing you at the MariaDB Roadshow!

 

ABOUT CLUSTERCONTROL

 

Setting up, maintaining and operating a database cluster can be tricky. ClusterControl gives you the power to deploy, manage, monitor and scale entire clusters efficiently and reliably. ClusterControl supports a variety of MySQL-based clusters (Galera, NDB, 5.6 Replication) as well as MongoDB/TokuMX-based clusters.


Simple Backup Management of Galera Cluster using s9s_backup


Percona XtraBackup is a great backup tool with lots of nice features to make online and consistent backups, although the variety of options can be a bit overwhelming. s9s_backup tries to make it simpler for users; it provides an easy-to-use interface for XtraBackup features such as full backups, incremental backups, streaming/non-streaming, and parallel compression.

Backups are organized into backup sets, each consisting of a full backup and zero or more incremental backups. s9s_backup manages the LSNs (Log Sequence Numbers) of the XtraBackups. A backup set can then be restored as one single unit using just one command.

In earlier posts, we covered various ways of restoring your backup files onto a Galera Cluster, including point-in-time recovery and a Percona XtraBackup vs mysqldump comparison. In this post, we will show you how to restore your backup using s9s_backup, which comes with every ClusterControl installation. It is located under the /usr/bin directory and can be called directly from your terminal.

 

s9s_backup vs s9s_backupc

The difference between the two utilities is the location where the backup data is stored. s9s_backupc will store the backup on the controller and it will be initiated from the ClusterControl server, while s9s_backup initiates and stores the backup locally on the database node. However, for restoring the backup, you can use any of the utilities regardless of your backup storage location. They will perform just the same.

If the backup is to be stored on the Galera node, s9s_backup and s9s_backup_wd will be copied over to the target node so they can be initiated locally. s9s_backup_wd is a watchdog process that checks that XtraBackup does not terminate and that there are no errors in the XtraBackup log file.

ClusterControl currently does not support partial backups; it performs a backup of all databases and tables in the cluster. Backups created by ClusterControl can be restored either using the s9s_backup/s9s_backupc utilities, or manually. It is recommended to update the s9s_backup/s9s_backupc scripts to the latest version:

$ git clone https://github.com/severalnines/s9s-admin
$ cp s9s-admin/cluster/s9s_backup* /usr/bin

 

mysqldump

ClusterControl generates a set of three mysqldump files with the following suffixes:

  • _data - all schemas’ data
  • _schema - all schemas’ structure
  • _mysqldb - mysql system database

The final output of each backup is a gzip-compressed file. ClusterControl executes the following commands for every backup job respectively:

$ gunzip [compressed mysqldump file]
$ mysql -u [user] -p[password] < [mysqldump file]

The restore process is pretty straightforward for mysqldump files. You can just redirect the dump contents to a mysql client and the statements will be executed by the MySQL server:

$ cd /root/backups/mysqldump
$ gunzip *.gz
$ mysql -u root -p < mysqldump_2014-12-03_042604_schema.sql
$ mysql -u root -p < mysqldump_2014-12-03_042604_data.sql
$ mysql -u root -p < mysqldump_2014-12-03_042604_mysqldb.sql #optional

 

Percona XtraBackup

For XtraBackup, you need to prepare the data before you can restore it. The restoration process will restore the database to the state it was in when the backup was taken. If you want to restore to a certain point in time, you need to have binary logging enabled, as described in this blog post.

To restore an XtraBackup backup, locate the backup set ID from ClusterControl > Backups > Reports, together with the backup location. If the storage location is the ClusterControl host, you can directly use the s9s_backup/s9s_backupc script to restore. Otherwise, you need to copy the backup files to the ClusterControl host under [backup directory]/BACKUP-[backup set ID], e.g.: /root/backups/BACKUP-17.
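
If the backup set lives on a database node, a straightforward way to get it onto the controller is scp; the host and paths below follow the example above and may differ in your setup:

$ scp -r root@galera1:/root/backups/BACKUP-17 /root/backups/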

Next, start the restoration process by running the following command on the ClusterControl node:

$ s9s_backupc --restore -i [cluster ID] -b [backup set ID] -t [full path of restoration directory]

 

For example, to prepare restoration data from a full backup:

$ mkdir /root/restore
$ s9s_backupc --restore -i1 -b13 -t /root/restore/

You can also exclude some of the backup IDs in case you just want to restore up to a certain point. For example, the following backup list contains a backup set consisting of backup IDs 19 to 23:

To restore up to incremental backup ID 21, you can add the -e [backup ID] option to the command:

$ s9s_backupc --restore -i1 -b19 -t /root/restore -e21

This will instruct ClusterControl to prepare the restoration data from backup IDs 19, 20 and 21, and skip all backups with IDs higher than 21.

The following is the expected output from s9s_backupc:

141202 23:33:20  innobackupex: completed OK!
Restore OK
To copy back the restored data into your datadir of mysqld do:
* Shutdown the cmon controller to prevent automatic recovery
* Shutdown the cmon agent on this host to prevent automatic recovery (if any)
* Shutdown all the mysql servers in the cluster
* Copy /root/restore/19/ to all the mysql servers in the cluster, e.g:
scp -r /root/restore/19 <user>@<target_server>:~
* On the target_server:
innobackupex --copy-back 19/
* and don't forget to:
chown mysql:mysql -R <mysqld datadir>
* Start up the cluster again.

As per the instructions, we then need to perform the pre-restoration steps:

$ s9s_galera --stop-cluster -i1
$ service cmon stop
$ scp -r /root/restore/19 root@galera1:~
$ scp -r /root/restore/19 root@galera2:~
$ scp -r /root/restore/19 root@galera3:~

SSH into galera1, galera2, galera3 and restore the prepared data to the active MySQL data directory using the copy-back command:

$ rm -rf /var/lib/mysql/*
$ innobackupex --copy-back /root/19
$ chown mysql:mysql -R /var/lib/mysql

Finally, on the ClusterControl node, start the Galera cluster and the ClusterControl CMON controller service:

$ s9s_galera --start-cluster -i1 -d 192.168.50.81
$ service cmon start

That’s it!


HowTo: Offline Upgrade of Galera Cluster to MySQL 5.6 or MariaDB 10


MySQL 5.6 has an extensive list of new features and changes, so upgrading from a previous version can be risky if not tested extensively. For this reason, we recommend our users to read and understand the changes before doing the upgrade. If you are on older MySQL versions, it is probably time to think about upgrading. MySQL 5.6 was released in February 2013, that’s almost two years ago!

A major upgrade, e.g., from MySQL 5.5 to 5.6 or MariaDB 5.5 to 10, requires the former MySQL/MariaDB server packages to be uninstalled. In Galera Cluster, there are two ways to upgrade: either by performing an offline upgrade (safer, simpler, requires service downtime) or an online upgrade (more complex, no downtime). 

 

In this blog post, we are going to show you how to perform an offline upgrade on Galera-based MySQL/MariaDB servers, from MySQL 5.5.x to 5.6 or MariaDB 5.5 to 10.x with Galera 3.x, on Redhat and Debian-based systems. The online upgrade procedure will be covered in a separate post. Prior to the upgrade, determine the database vendor and operating system that is running at ClusterControl > Settings > General Settings > Version:

Note that different database vendor and operating system combinations use different installation steps, package names, versions and dependencies. 

 

Offline Upgrade in Galera

 

Offline upgrade is recommended if you can afford scheduled downtime. The steps are straightforward and the probability for failure is significantly lower. Performing an online upgrade gives you availability at the cost of operational simplicity.

When performing an offline upgrade, the following steps are required:

  1. Stop the Galera cluster
  2. Remove the existing MySQL/MariaDB 5.5 related packages
  3. Install the MySQL 5.6/MariaDB 10.x related packages
  4. Perform post-installation configuration
  5. Execute mysql_upgrade command
  6. Start the Galera cluster

 

Pre-Upgrade

 

Before performing a major upgrade, you need to have the following list checked:

  • Read and understand the changes that are going to happen with the new version
  • Note the unsupported configuration options between the major versions
  • Determine your cluster ID from the ClusterControl summary bar
  • garbd nodes will also need to be upgraded
  • All nodes must have internet connection
  • ClusterControl auto recovery must be turned off throughout the exercise

 

To disable ClusterControl auto recovery, add the following line inside /etc/cmon.cnf:

enable_autorecovery=0

And restart CMON service:

$ service cmon restart

 

Offline Upgrade on Redhat-based Systems

 

Galera Vendor: Codership

 

The following steps should be performed on each of the Galera nodes unless specified otherwise.

 

1. Download MySQL 5.6 server and Galera provider v25.3.x packages from http://galeracluster.com/downloads/:

$ wget https://launchpad.net/codership-mysql/5.6/5.6.16-25.5/+download/MySQL-server-5.6.16_wsrep_25.5-1.rhel6.x86_64.rpm 
$ wget https://launchpad.net/galera/3.x/25.3.5/+download/galera-25.3.5-1.rhel6.x86_64.rpm

 

2. Since we downloaded MySQL Server 5.6.16, we also need to upgrade the MySQL client package from MySQL Community Server archive page with the corresponding version:

$ wget http://downloads.mysql.com/archives/get/file/MySQL-client-5.6.16-1.el6.x86_64.rpm

 

3. On the ClusterControl node, stop the Galera cluster by using the s9s_galera command with its respective cluster ID. This command will shutdown Galera nodes one at a time and it will not trigger ClusterControl auto recovery. You can determine the cluster ID from the ClusterControl UI summary bar:

$ s9s_galera --stop-cluster -i1

 

4. Remove the existing MySQL and Galera packages without dependencies:

$ rpm -qa | grep -ie ^mysql -e ^galera | xargs rpm -e --nodeps

 

5. Install using yum localinstall so it will satisfy all dependencies:

$ yum -y localinstall galera-25.3.5-1.rhel6.x86_64.rpm MySQL-client-5.6.16-1.el6.x86_64.rpm MySQL-server-5.6.16_wsrep_25.5-1.rhel6.x86_64.rpm

For garbd node, just upgrade the Galera package (no need to uninstall it first) and directly kill the garbd process. ClusterControl will then recover the process and it should be started with the new version immediately:

$ yum remove galera 
$ yum -y localinstall galera-25.3.5-1.rhel6.x86_64.rpm 
$ killall -9 garbd

 

6. Comment out or remove the following line inside /etc/my.cnf since it is incompatible with MySQL 5.6:

[MYSQLD] 
#engine_condition_pushdown=1

And append the following options recommended for MySQL 5.6:

[MYSQLD] 
explicit_defaults_for_timestamp = 1 
wsrep_sst_method = xtrabackup-v2 
log_error = /var/log/mysqld.log 

[MYSQLD_SAFE] 
log_error = /var/log/mysqld.log

** In MySQL 5.6, it is recommended to use xtrabackup-v2 as the SST method and change the log_error outside of the MySQL datadir. This is because Xtrabackup would wipe off the MySQL datadir in case of SST.

 

8. Start the MySQL server with --skip-grant-tables to allow mysql_upgrade:

$ mysqld --skip-grant-tables --user=mysql --wsrep-provider='none'&

 

Wait for a moment so the MySQL server starts up, and then perform the mysql_upgrade:

$ mysql_upgrade -u root -p

** You should execute mysql_upgrade each time you upgrade MySQL.

 

9. Kill the running MySQL process:

$ killall -9 mysqld

 

10. Once all database nodes are upgraded, on the ClusterControl node, start the Galera cluster and specify the last node that shut down as the reference/donor node:

$ s9s_galera --start-cluster -i1 -d 192.168.50.103

 

Upgrade is now complete. Your Galera cluster will be recovered and available once you re-enable ClusterControl auto recovery feature as described in the last section.

 

Galera Vendor: Percona

 

The instructions in this section are based on Percona’s upgrade guide. The following steps should be performed on each of the Galera nodes unless specified otherwise.

 

1. On the ClusterControl node, stop the Galera cluster by using s9s_galera command with its respective cluster ID. This command will shutdown Galera nodes one at a time. You can determine the cluster ID from ClusterControl UI summary bar:

$ s9s_galera --stop-cluster -i1

 

2. On the database node, remove the existing MySQL and Galera packages without dependencies:

$ rpm -qa | grep Percona-XtraDB | xargs rpm -e --nodeps

 

3. Install Percona XtraDB Cluster 5.6 related packages:

$ yum -y install Percona-XtraDB-Cluster-server-56 Percona-XtraDB-Cluster-galera-3

For garbd node, remove the existing garbd v2 and install garbd v3. Then, kill the garbd process. ClusterControl will then recover the process and it should be started with the new version immediately:

$ yum remove Percona-XtraDB-Cluster-garbd* 
$ yum -y install Percona-XtraDB-Cluster-garbd-3 
$ killall -9 garbd

 

4. Comment or remove the following line inside /etc/my.cnf since it is incompatible with MySQL 5.6:

[MYSQLD] 
#engine_condition_pushdown=1

And append the following options recommended for MySQL 5.6:

[MYSQLD] 
explicit_defaults_for_timestamp = 1 
wsrep_sst_method = xtrabackup-v2 
log_error = /var/log/mysqld.log 

[MYSQLD_SAFE] 
log_error = /var/log/mysqld.log

** In PXC 5.6, it is recommended to use xtrabackup-v2 as the SST method and change the log_error outside of the MySQL datadir. This is because Xtrabackup would wipe off MySQL datadir in case of SST.

 

5. Start the MySQL server with --skip-grant-tables to allow mysql_upgrade:

$ mysqld --skip-grant-tables --user=mysql --wsrep-provider='none'&

 

Wait for a moment so the MySQL starts up, and then perform the mysql_upgrade:

$ mysql_upgrade -u root -p

** You should execute mysql_upgrade each time you upgrade MySQL.

 

6. Kill the running MySQL process:

$ killall -9 mysqld

 

7. Once all database nodes are upgraded, on the ClusterControl node, start the Galera cluster and specify the last node that shut down (in this case is 192.168.50.103) as the reference/donor node:

$ s9s_galera --start-cluster -i1 -d 192.168.50.103

 

Upgrade is now complete. Your Galera cluster will be recovered and available once you re-enable the ClusterControl auto recovery feature as described in the last section.

 

Galera Vendor: MariaDB

 

The following steps should be performed on each of the Galera nodes unless specified otherwise.

 

1. Edit the baseurl value in /etc/yum.repos.d/MariaDB.repo to use MariaDB 10.x repository:

baseurl = http://yum.mariadb.org/10.0/centos6-amd64

Then, remove the yum metadata cache so it will use the latest configured repository instead:

$ yum clean metadata

 

2. On the ClusterControl node, stop the Galera cluster by using the s9s_galera command with its respective cluster ID. This command will shutdown Galera nodes one at a time. You can determine the cluster ID from the ClusterControl UI summary bar:

$ s9s_galera --stop-cluster -i1

 

3. Remove the existing MariaDB and Galera packages without dependencies:

$ rpm -qa | grep -e ^MariaDB -e ^galera | xargs rpm -e --nodeps

 

4. Install MariaDB Galera related packages:

$ yum -y install MariaDB-Galera-server.x86_64 MariaDB-client.x86_64 galera

 

For garbd node, just upgrade the Galera package (no need to uninstall it first) and directly kill the garbd process. ClusterControl will then recover the process and it should be started with the new version immediately:

$ yum remove galera 
$ yum install galera 
$ killall -9 garbd

 

5. The previous MySQL configuration will be saved as /etc/my.cnf.rpmsave. Reuse the file by renaming it to /etc/my.cnf:

$ mv /etc/my.cnf.rpmsave /etc/my.cnf

 

6. Comment or remove the following line in /etc/my.cnf since it is incompatible with MariaDB 10.x:

[MYSQLD] 
#engine_condition_pushdown=1

And append the following options recommended for MariaDB 10.x:

[MYSQLD] 
wsrep_sst_method = xtrabackup-v2 
log_error = /var/log/mysqld.log 

[MYSQLD_SAFE] 
log_error = /var/log/mysqld.log

** In MariaDB 10, it is recommended to use xtrabackup-v2 as the SST method and change the log_error outside of the MySQL datadir. This is because Xtrabackup would wipe off MySQL datadir in case of SST.

 

7. Start the MariaDB server with --skip-grant-tables to allow mysql_upgrade:

$ mysqld --skip-grant-tables --user=mysql --wsrep-provider='none'&

And then perform the mysql_upgrade:

$ mysql_upgrade -u root -p

** You should execute mysql_upgrade each time you upgrade MariaDB.

 

8. Kill the running MariaDB process:

$ killall -9 mysqld

 

9. Once all database nodes are upgraded, on the ClusterControl node, start the Galera cluster and specify the last node that shut down as the reference/donor node:

$ s9s_galera --start-cluster -i1 -d 192.168.50.103

 

Upgrade is now complete. Your Galera cluster will be recovered and available once you re-enable ClusterControl auto recovery feature as described in the last section.

 

Offline Upgrade on Debian-based systems

 

Galera Vendor: Codership

 

If you deployed your Galera cluster using the Severalnines Configurator, you would have MySQL 5.5 installed from a tarball under the /usr/local/mysql directory. In MySQL 5.6, we urge users to use the DEB package instead, so expect the MySQL basedir to change to /usr, as shown in the instructions below.

 

The following steps should be performed on each of the Galera nodes unless specified otherwise. Omit sudo if you run as root.

 

1. Download MySQL 5.6 server and Galera provider v25.3.x packages from http://galeracluster.com/downloads/:

$ wget https://launchpad.net/codership-mysql/5.6/5.6.16-25.5/+download/mysql-server-wsrep-5.6.16-25.5-amd64.deb 
$ wget https://launchpad.net/galera/3.x/25.3.5/+download/galera-25.3.5-amd64.deb

 

2. Install a third-party MySQL 5.6 package repository to facilitate the installation:

$ sudo apt-get -y install software-properties-common python-software-properties 
$ sudo add-apt-repository -y ppa:ondrej/mysql-5.6 
$ sudo apt-get update

 

3. On the ClusterControl node, stop the Galera cluster by using the s9s_galera command with its respective cluster ID. This command will shutdown the Galera nodes one at a time. You can determine the cluster ID from the ClusterControl UI summary bar:

$ sudo s9s_galera --stop-cluster -i1

 

4. Install the MySQL 5.6 client package from the repository, then install the downloaded MySQL Server 5.6 and Galera packages:

$ sudo apt-get -y install mysql-client-5.6 
$ sudo dpkg -i mysql-server-wsrep-5.6.16-25.5-amd64.deb 
$ sudo dpkg -i galera-25.3.5-amd64.deb

** Accept the default value for any prompt during apt-get install command

 

For garbd node, just upgrade the Galera package (no need to uninstall it first) and directly kill the garbd process. ClusterControl will then recover the process and it should start with the new version immediately:

$ sudo dpkg -i galera-25.3.5-amd64.deb 
$ sudo killall -9 garbd

 

5. Comment or remove the following line inside /etc/mysql/my.cnf since it is incompatible with MySQL 5.6:

[MYSQLD] 
#engine_condition_pushdown=1

 

And append the following options recommended for MySQL 5.6:

[MYSQLD] 
basedir = /usr 
explicit_defaults_for_timestamp = 1 
log_error = /var/log/mysql.log 

[MYSQLD_SAFE] 
basedir = /usr 
log_error = /var/log/mysql.log

** The basedir has changed to /usr with the new DEB package installation. It is recommended to change the log_error outside of the MySQL datadir. This is because Xtrabackup would wipe off MySQL datadir in case of SST.

 

6. Start the MySQL server with --skip-grant-tables to allow mysql_upgrade:

$ sudo mysqld --skip-grant-tables --user=mysql --wsrep-provider='none'&

And then perform the mysql_upgrade:

$ mysql_upgrade -u root -p

** You should execute mysql_upgrade each time you upgrade MySQL.

 

7. Terminate the MySQL server that was started for mysql_upgrade:

$ sudo killall -9 mysqld

 

8. Once all database nodes are upgraded, on the ClusterControl node, start the Galera cluster and specify the last node that shut down as the reference/donor node:

$ sudo s9s_galera --start-cluster -i1 -d 192.168.50.103

 

Upgrade is now complete. Your Galera cluster will be recovered and available once you re-enable ClusterControl auto recovery feature as described in the last section.

 

Once all Galera nodes are up, you can safely remove the previous MySQL 5.5 installation under /usr/local:

$ sudo rm -Rf /usr/local/mysql*

 

Galera Vendor: Percona

 

The instructions in this section are based on Percona’s upgrade guide. The following steps should be performed on each of the Galera nodes unless specified otherwise. Omit sudo if you run as root.

 

1. On the ClusterControl node, stop the Galera cluster by using the s9s_galera command with its respective cluster ID. This command will shut down the Galera nodes one at a time. You can determine the cluster ID from the ClusterControl UI summary bar:

$ sudo s9s_galera --stop-cluster -i1

 

2. On the database node, remove the existing MySQL and Galera packages:

$ sudo apt-get remove percona-xtradb-cluster-server-5.5 percona-xtradb-cluster-galera-2.x percona-xtradb-cluster-common-5.5 percona-xtradb-cluster-client-5.5

 

3. Comment the following lines in /etc/mysql/my.cnf:

[MYSQLD]
#engine_condition_pushdown=1
#wsrep_provider=/usr/lib/libgalera_smm.so

And append the following options recommended for MySQL 5.6:

[MYSQLD] 
explicit_defaults_for_timestamp = 1 
wsrep_sst_method = xtrabackup-v2 
log_error = /var/log/mysql.log 
wsrep_provider=none 

[MYSQLD_SAFE] 
log_error = /var/log/mysql.log

** In PXC 5.6, it is recommended to use xtrabackup-v2 as the SST method and change the log_error outside of the MySQL datadir. This is because Xtrabackup would wipe off the MySQL datadir in case of SST.

 

4. Install Percona XtraDB Cluster 5.6 related packages:

$ sudo LC_ALL=en_US.utf8 DEBIAN_FRONTEND=noninteractive apt-get -o Dpkg::Options::='--force-confnew' -y install percona-xtradb-cluster-56

For garbd node, remove the existing garbd v2 and install garbd v3. Then, kill the garbd process. ClusterControl will then recover the process and it should be started with the new version immediately:

$ sudo sed -i '1 a\exit 0 #temp_workaround' /etc/init.d/garbd 
$ sudo LC_ALL=en_US.utf8 DEBIAN_FRONTEND=noninteractive apt-get -o Dpkg::Options::='--force-confnew' -y install percona-xtradb-cluster-garbd-3.x 
$ sudo sed -i '/exit 0 #temp_workaround/d' /etc/init.d/garbd 
$ sudo killall -9 garbd

 

5. Perform the mysql_upgrade command:

$ mysql_upgrade -u root -p

** You should execute mysql_upgrade each time you upgrade MySQL.

 

6. Kill the running MySQL process:

$ sudo service mysql stop

 

7. Uncomment the following line in /etc/mysql/my.cnf:

wsrep_provider=/usr/lib/libgalera_smm.so

And remove or comment the following line:

#wsrep_provider=none

 

8. Once all database nodes are upgraded, on the ClusterControl node, start the Galera cluster and specify the last node that shut down (in this case 192.168.50.103) as the reference/donor node:

$ sudo s9s_galera --start-cluster -i1 -d 192.168.50.103

 

Upgrade is now complete. Your Galera cluster will be recovered and available once you re-enable ClusterControl auto recovery feature as described in the last section.

 

Galera Vendor: MariaDB

 

The following steps should be performed on each of the Galera nodes unless specified otherwise. Omit sudo if you run as root.

 

1. Edit the repository URL in /etc/apt/sources.list.d/MariaDB.list to use MariaDB 10.x repository, similar to below:

deb http://ftp.osuosl.org/pub/mariadb/repo/10.0/ubuntu precise main 
deb-src http://ftp.osuosl.org/pub/mariadb/repo/10.0/ubuntu precise main

Then, update the package lists:

$ sudo apt-get update

 

2. On the ClusterControl node, stop the Galera cluster by using the s9s_galera command with its respective cluster ID. This command will shut down the Galera nodes one at a time. You can determine the cluster ID from the ClusterControl UI summary bar:

$ sudo s9s_galera --stop-cluster -i1

 

3. Remove the existing MariaDB and Galera packages:

$ sudo apt-get remove mariadb-galera-server-5.5 mariadb-client-5.5 galera

 

4. Comment the following lines inside /etc/mysql/my.cnf:

[MYSQLD] 
#engine_condition_pushdown=1 
#wsrep_provider=/usr/lib/galera/libgalera_smm.so

And append the following options recommended for MariaDB 10.x:

[MYSQLD] 
wsrep_sst_method = xtrabackup-v2 
log_error = /var/log/mysql.log 
wsrep_provider=none 

[MYSQLD_SAFE] 
log_error = /var/log/mysql.log

** In MariaDB 10, it is recommended to use xtrabackup-v2 as the SST method and change the log_error outside of the MySQL datadir. This is because Xtrabackup would wipe off the MySQL datadir in case of SST.

 

5. Install MariaDB 10.x related packages:

$ sudo LC_ALL=en_US.utf8 DEBIAN_FRONTEND=noninteractive apt-get -o Dpkg::Options::='--force-confnew' -y install mariadb-galera-server galera

For garbd node, remove the existing garbd v2 and install garbd v3. Then, kill the garbd process. ClusterControl will recover the process and it should be started with the new version immediately:

$ sudo apt-get remove galera 
$ sudo LC_ALL=en_US.utf8 DEBIAN_FRONTEND=noninteractive apt-get -o Dpkg::Options::='--force-confnew' -y install galera 
$ sudo killall -9 garbd

 

6. Execute the mysql_upgrade command:

$ mysql_upgrade -u root -p

 

7. Kill the running MySQL process:

$ sudo service mysql stop

 

8. Uncomment the following line in /etc/mysql/my.cnf:

wsrep_provider=/usr/lib/galera/libgalera_smm.so

And remove or comment the following line:

#wsrep_provider=none

 

9. Once all database nodes are upgraded, on the ClusterControl node, start the Galera cluster and specify the last node that shut down (in this case 192.168.50.103) as the reference/donor node:

$ sudo s9s_galera --start-cluster -i1 -d 192.168.50.103

 

Upgrade is now complete. Your Galera cluster will be recovered and available once you re-enable ClusterControl auto recovery feature as described in the last section.

 

Post-Upgrade

 

ClusterControl

 

Re-enable the ClusterControl auto recovery feature by removing or commenting the following line in the CMON configuration file:

enable_autorecovery=0

Restart the CMON service to apply the change.
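
A minimal sketch of these two steps from the shell, assuming the CMON configuration file is /etc/cmon.cnf (in your setup it might instead be /etc/cmon.d/cmon_<cluster ID>.cnf):

$ sudo sed -i 's/^enable_autorecovery=0/#enable_autorecovery=0/' /etc/cmon.cnf
$ sudo service cmon restart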

 

In certain cases, the changes that we made manually to the MySQL configuration file might not appear in ClusterControl > Manage > Configuration. To reimport the latest configuration file from the Galera nodes, go to ClusterControl > Manage > Configurations > Reimport Configuration. Wait a moment for ClusterControl to update the detected database server version under ClusterControl > Settings > General Settings > Version. If the version is not updated after a few minutes, restart the CMON service to expedite this.

 

WAN Segments

 

MySQL 5.6/MariaDB 10 with Galera 3.x supports WAN segmentation. If you are running on WAN, you can take advantage of this feature by assigning the same segment ID to nodes located in the same data center. Append the following to your MySQL configuration:

wsrep_provider_options="gmcast.segment=2"

 

* If you already have options under wsrep_provider_options, e.g. wsrep_provider_options="gcache.size=128M", you need to append the new option to the same line, separated by a semi-colon:

wsrep_provider_options="gcache.size=128M;gmcast.segment=2"

 

Welcome to MySQL 5.6/MariaDB 10!

 


How to Bootstrap MySQL/MariaDB Galera Cluster


Starting a MySQL/MariaDB Galera Cluster is a bit different from starting a standard MySQL server or MySQL Cluster. Galera requires you to start one node in the cluster as a reference point before the remaining nodes are able to join and form the cluster. This process is known as cluster bootstrap. Bootstrapping is the initial step that introduces one database node as the Primary Component, which the other nodes then use as a reference point to sync up their data.

 

How does it work?

 

When Galera starts with the bootstrap command on a node, that particular node will reach the Primary state (check the value of wsrep_cluster_status). The remaining nodes just require a normal start command; they will automatically look for the existing Primary Component (PC) in the cluster and join it to form a cluster. Data synchronization then happens through either incremental state transfer (IST) or state snapshot transfer (SST) between the joiner and the donor.
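
A quick way to verify this is to check the wsrep_cluster_status variable on the bootstrapped node; the output below is a sketch of what you would expect to see:

mysql> SHOW STATUS LIKE 'wsrep_cluster_status';
+----------------------+---------+
| Variable_name        | Value   |
+----------------------+---------+
| wsrep_cluster_status | Primary |
+----------------------+---------+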

 

So basically, you should only bootstrap the cluster if you want to start a new cluster or when no other node in the cluster is in the PRIMARY state. Care should be taken when choosing the action to take, or else you might end up with split clusters or loss of data.

 

Typical scenarios where you would bootstrap the cluster are starting a brand new cluster, or recovering an existing cluster after all nodes have gone down or ended up in a non-Primary state.

 

How to start Galera cluster?

 

The 3 Galera vendors use different bootstrapping commands (based on the software’s latest version). On the first node, run:

  • Codership:
    $ service mysql bootstrap
  • Percona XtraDB Cluster:
    $ service mysql bootstrap-pxc
  • MariaDB Galera Cluster:
    $ service mysql start --wsrep-new-cluster

 

The above command is just a wrapper; what it actually does is start the MySQL instance on that node with gcomm:// as the wsrep_cluster_address value. You can also manually define the variables inside my.cnf and run the standard start/restart command. However, do not forget to change wsrep_cluster_address back so it contains the addresses of all nodes after the start.
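
A minimal my.cnf sketch of that manual approach (the node addresses below are only illustrative):

[mysqld]
# Step 1 (first node only): bootstrap with an empty cluster address
wsrep_cluster_address=gcomm://

# Step 2 (after the cluster has formed): revert to the full node list
# before the next restart, for example:
# wsrep_cluster_address=gcomm://192.168.50.101,192.168.50.102,192.168.50.103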

 

When the first node is live, run the following command on the subsequent nodes:

$ service mysql start

 

The new node connects to the cluster members as defined by the wsrep_cluster_address parameter. It will now automatically retrieve the cluster map and connect to the rest of the nodes and form a cluster.

 

Warning: Never bootstrap when you want to reconnect a node to an existing cluster, and NEVER run bootstrap on more than one node.

 

What if the nodes have diverged?

 

In certain circumstances, nodes can diverge from each other. The state of all nodes might turn into non-Primary due to a network split between nodes, a cluster crash, or if Galera hits an exception when determining the Primary Component. You will then need to select a node and promote it to be a Primary Component.

 

To determine which node needs to be bootstrapped, compare the wsrep_last_committed value on all DB nodes:

node1> SHOW STATUS LIKE 'wsrep_%';
+----------------------+-------------+
| Variable_name        | Value       |
+----------------------+-------------+
| wsrep_last_committed | 10032       |
...
| wsrep_cluster_status | non-Primary |
+----------------------+-------------+

node2> SHOW STATUS LIKE 'wsrep_%';
+----------------------+-------------+
| Variable_name        | Value       |
+----------------------+-------------+
| wsrep_last_committed | 10348       |
...
| wsrep_cluster_status | non-Primary |
+----------------------+-------------+

node3> SHOW STATUS LIKE 'wsrep_%';
+----------------------+-------------+
| Variable_name        | Value       |
+----------------------+-------------+
| wsrep_last_committed | 997         |
...
| wsrep_cluster_status | non-Primary |
+----------------------+-------------+

 

From the above outputs, node2 has the most up-to-date data. In this case, all Galera nodes are already started, so you don’t necessarily need to bootstrap the cluster again. We just need to promote node2 to be a Primary Component:

node2> SET GLOBAL wsrep_provider_options="pc.bootstrap=1";

 

The remaining nodes will then reconnect to the Primary Component (node2) and resync their data from it.

 

If you are using ClusterControl, you can retrieve the wsrep_last_committed and wsrep_cluster_status values directly from the ClusterControl > Overview page:

Or from ClusterControl > Performance > DB Status page:

 



Deploy an asynchronous slave to Galera Cluster for MySQL - The Easy Way


Due to its synchronous nature, Galera performance can be limited by the slowest node in the cluster. So running heavy reporting queries or making frequent backups on one node, or putting a node across a slow WAN link to a remote data center might indirectly affect cluster performance. Combining Galera and asynchronous MySQL replication in the same setup, aka Hybrid Replication, can help. A slave is loosely coupled to the cluster, and will be able to handle a heavy load without affecting the performance of the cluster. The slave can also be a live copy of the Galera cluster for disaster recovery purposes.

We had explained the different steps to set this up in a previous post. With ClusterControl 1.2.9 though, this can be automated via the web interface. A slave can be bootstrapped with a Percona XtraBackup stream from a chosen Galera master node. In case of master failure, the slave can be failed over to replicate from another Galera node. Note that master failover is available if you are using Percona XtraDB Cluster or the Codership build of Galera Cluster with GTID. If you are using MariaDB, ClusterControl supports adding a slave but not performing master failover. 

 

Preparing the Master (Galera Cluster)

A MySQL replication slave requires at least one master with GTID enabled among the Galera nodes. However, we recommend configuring all Galera nodes as masters for better failover. GTID is required as it is used to perform master failover. If you are running MySQL 5.5, you will need to upgrade to MySQL 5.6.

The following must be true for the masters:

  • At least one master among the Galera nodes
  • MySQL GTID must be enabled
  • log_slave_updates must be enabled
  • Master’s MySQL port is accessible by ClusterControl and slaves

To configure a Galera node as master, change the MySQL configuration file for that node as per below:

server_id=<must be unique across all mysql servers participating in replication>
binlog_format=ROW
log_slave_updates=1
log_bin=binlog
gtid_mode=ON
enforce_gtid_consistency=1

 

Preparing the Slave

For the slave, you need a separate host or VM, with or without MySQL installed. If MySQL is not installed, and you choose ClusterControl to install it on the slave, ClusterControl will perform the necessary actions to prepare the slave: configure the root password (based on monitored_mysql_root_password), create the slave user (based on repl_user, repl_password), configure MySQL, start the server and start replication. The MySQL package used is based on the Galera vendor in use; for example, if you are running Percona XtraDB Cluster, ClusterControl will prepare the slave using Percona Server.

In short, we must perform the following actions beforehand:

  • The slave node must be accessible using passwordless SSH from the ClusterControl server
  • MySQL port (default 3306) and netcat port 9999 on the slave are open for connections.
  • You must configure the following options in the ClusterControl configuration file for the respective cluster ID under /etc/cmon.cnf or /etc/cmon.d/cmon_<cluster ID>.cnf:
    • repl_user=<the replication user>
    • repl_password=<password for replication user>
    • monitored_mysql_root_password=<the mysql root password of all nodes including slave>
  • The slave configuration template file must be configured beforehand, and must have at least the following variables defined in the MySQL configuration template:
    • server_id
    • basedir
    • datadir

To prepare the MySQL configuration file for the slave, go to ClusterControl > Manage > Configurations > Template Configuration files > edit my.cnf.slave and add the following lines:

[mysqld]
bind-address=0.0.0.0
gtid_mode=ON
log_bin=binlog
log_slave_updates=1
enforce_gtid_consistency=ON
expire_logs_days=7
server_id=1001
binlog_format=ROW
slave_net_timeout=60
basedir=/usr
datadir=/var/lib/mysql

 

Attaching a Slave via ClusterControl

Let’s now add our slave using ClusterControl. The architecture looks like this:

Our example cluster is running MySQL Galera Cluster (Codership). The same steps apply for Percona XtraDB Cluster, although MariaDB 10 has minor differences in step #1 and #6.

1. Configure the Galera nodes as masters. Go to ClusterControl > Manage > Configurations, click Edit/View on each configuration file and append the following lines under the mysqld directive:
galera1:

server_id=101
binlog_format=ROW
log_slave_updates=1
log_bin=binlog
expire_logs_days=7
gtid_mode=ON
enforce_gtid_consistency=1

galera2:

server_id=102
binlog_format=ROW
log_slave_updates=1
log_bin=binlog
expire_logs_days=7
gtid_mode=ON
enforce_gtid_consistency=1

galera3:

server_id=103
binlog_format=ROW
log_slave_updates=1
log_bin=binlog
expire_logs_days=7
gtid_mode=ON
enforce_gtid_consistency=1

2. Perform a rolling restart from ClusterControl > Manage > Upgrades > Rolling Restart. Optionally, you can restart one node at a time under ClusterControl > Nodes > select the corresponding node > Shutdown > Execute, and then start it again.

3. You should see that ClusterControl detects the newly configured master nodes, as per the screenshot below:

4. On the ClusterControl node, setup passwordless SSH to the slave node:

$ ssh-copy-id -i ~/.ssh/id_rsa 192.168.50.111

5. Then, ensure the following lines exist in the corresponding cmon.cnf or cmon_<cluster ID>.cnf:

repl_user=slave
repl_password=slavepassword123
monitored_mysql_root_password=myr00tP4ssword

Restart CMON daemon to apply the changes:

$ service cmon restart

6. Go to ClusterControl > Manage > Configurations > Create New Template or Edit/View an existing template, and add the lines shown in the my.cnf.slave example above.

7. Now, we are ready to add the slave. Go to ClusterControl > Cluster Actions > Add Replication Slave. Choose a master and the configuration file as per the example below:

Click on Proceed. A job will be triggered and you can monitor the progress at ClusterControl > Logs > Jobs. Once the process is complete, the slave will show up in your Overview page as highlighted in the following screenshot:

You will notice there are 4 green tick icons for master. This is because we configured our slave to produce a binlog, which is required for GTID. Thus, the node is capable of becoming a master for another slave.
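
To double-check this on the slave itself, a quick sketch using standard MySQL commands: SHOW SLAVE STATUS reports the replication state and GTID sets, while SHOW MASTER STATUS confirms that the slave writes its own binlog and could therefore act as a master for another slave:

mysql> SHOW SLAVE STATUS\G
mysql> SHOW MASTER STATUS;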

 

Failover and Recovery

To perform failover in case the designated master goes down, just go to Nodes > select the slave node > Failover Replication Slave > Execute and choose a new master similar to the screenshot below:

You can also stage the slave with data from the master by going to Nodes > select the slave node > Stage Replication Slave > Execute to re-initialize the slave:

This process will stop the slave instance, remove the datadir content, stream a backup from the master using Percona Xtrabackup, and then restart the slave. This can take some time depending on your database size, network and IO.

That’s it folks!


Automation & Management of Galera Clusters for MySQL, MariaDB & Percona XtraDB: New 1-Day Online Training Course


Galera Cluster For System Administrators, DBAs And DevOps

Galera Cluster for MySQL, MariaDB and Percona XtraDB involves more effort and resources to administer than standalone systems. If you would like to learn how to best deploy, monitor, manage and scale your database cluster(s), then this new online training course is for you!

The course is designed for system administrators and database administrators looking to gain more in-depth expertise in the automation and management of Galera Clusters.

What: A one-day, instructor-led, Galera Cluster management training course

When: The first training course will take place on June 12th 2015 - European time zone
Please register your interest also if you’re outside of that time zone, as we will be scheduling further dates/courses

Where: In a virtual classroom as well as a virtual lab for hands-on lab exercises

How: Reserve your seat online and we will contact you with all the relevant details

Who: The training is delivered by Severalnines & BOS-it GmbH

You will learn about:

  • Galera Cluster, system architecture & multi-data centre setups
  • Automated deployment & node / cluster recovery
  • How to best migrate data into Galera Cluster
  • Monitoring & troubleshooting basics
  • Load balancing and cluster management techniques


This course is all about hands-on lab exercises! Learn from the experts without having to leave your home or office!

High availability cluster configurations tend to be complex, but once designed, they tend to be duplicated many times with minimal variation. Automation can be applied to provisioning, upgrading, patching and scaling. DBAs and sysadmins can then focus on more critical tasks, such as performance tuning, query design, data modeling or providing architectural advice to application developers. A well-managed system mitigates operational risk, which can result in significant savings and reduced downtime.

To learn how to best deploy, monitor, manage and scale Galera Cluster, click here for more information and to sign up. 

The number of seats is limited, so make sure you register soon!


 

Or why not talk to us directly if you’re at Percona Live: MySQL Conference & Expo 2015 next week?

We’ll be at booth number 417, the one with the balloons and the S9s t-shirts, so come and grab a t-shirt as stocks last! And of course, we’ll be happy to talk about database clustering, ClusterControl and our new training course …

Note that we’ll be giving away one seat of our new training course (to the value of €750) at the conference as part of this year’s passport programme; so make sure to get your passport stamped at our booth!


 


Webinar Replay & Slides: How to build scalable database infrastructures with MariaDB & HAProxy


Thanks to everyone who participated in last week’s live webinar on how CloudStats.me moved from MySQL to clustered MariaDB for high availability with Severalnines ClusterControl. The webinar included use case discussions on cloudstats.me’s database infrastructure bundled with a live demonstration of ClusterControl to illustrate the key elements that were discussed.

We had a lot of questions from the audience, and you can read through the transcript of these further below in this blog.

If you missed the session and/or would like to watch the replay in your own time, it is now available online for sign up and viewing.

Replay Details

Get access to the replay

Agenda

  • CloudStats.me infrastructure overview
  • Database challenges
  • Limitations in cloud-based infrastructure
  • Scaling MySQL - many options
    • MySQL Cluster, Master-Slave Replication, Sharding, ...
  • Availability and failover
  • Application sharding vs auto-sharding
  • Migration to MariaDB / Galera Cluster with ClusterControl & NoSQL
  • Load Balancing with HAProxy & MaxScale
  • Infrastructure set up provided to CloudStats.me
    • Private Network, Cluster Nodes, H/W SSD Raid + BBU
  • What we learnt - “Know your data!”

Speakers

Andrey Vasilyev, CTO of Aqua Networks Limited - a London-based company which owns brands, such as WooServers.com, CloudStats.me and CloudLayar.com, and Art van Scheppingen, Senior Support Engineer at Severalnines, discussed the challenges encountered by CloudStats.me in achieving database high availability and performance, as well as the solutions that were implemented to overcome these challenges.

If you have any questions or would like a personalised live demo, please do contact us.

Follow our technical blogs: http://severalnines.com/blog


Questions & Answers - Transcript

Maybe my question is not directly related to the topic of the webinar... But will your company (I mean Severalnines) in the future also consider the possibility to install and setup Pivotal's Greenplum database?
Currently, there are no plans for that, as we have not received requests to support Greenplum yet. But it’s something we’ll keep in mind!

What about Spider and ClusterControl? Is this combination available / being used?
Spider can be used independently of ClusterControl, since ClusterControl can be used to manage the individual MySQL instances. We are not aware of any ClusterControl users who are using Spider.

Is MySQL Cluster NDB much faster than a Galera Cluster?
MySQL NDB and Galera Cluster are two different types of clustering. The main difference is that in Galera Cluster all nodes are equal and contain the same data set, while in NDB Cluster the data nodes contain sharded/mirrored data sets. NDB Cluster can handle larger data sets and heavier write workloads, but if you need multiple equal MySQL master nodes, Galera is a better choice. Galera is also faster at replicating data than traditional MySQL replication due to its ability to apply writes on all nodes in parallel.

Does CloudStats also support database backups on the end user level?
CloudStats can back up your files to S3, Azure, local storage etc., but for database backups it’s best to use ClusterControl, while CloudStats handles the rest of your files.

Is it possible to restore the structure and the whole setup of a previous ClusterControl infrastructure from the backups?
Yes, that would be possible, if you make backups of your existing ClusterControl database and configuration files.

I'm using MaxScale with Galera. The Read/Write Split module drops the connection on very intensive operations involving reading and then writing up to 80,000 rows, but it works fine with the readconnroute module (which doesn't split). Is there any way I can scale writes with just Galera?
You could create two readconnroute interfaces using MaxScale and use one for writes only. You can do this by adding router_options=master to the configuration and with a Galera cluster this will only write to one single node in the Galera cluster.
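
As a rough illustration of that suggestion, a write-only readconnroute service in a MaxScale configuration could look something like the sketch below; the service, listener and server names are only placeholders, and the galera1-3 servers are assumed to be defined elsewhere in the same configuration file:

[Write-Service]
type=service
router=readconnroute
router_options=master
servers=galera1,galera2,galera3
user=maxscale
passwd=maxscalepwd

[Write-Listener]
type=listener
service=Write-Service
protocol=MySQLClient
port=4007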

Is the cluster only as fast as the slowest node? e.g. NODE1-SSD, NODE2-SSD, NODE3-SATA...
Yes, within Galera Cluster your slowest node will determine the speed of the whole cluster.

Galera Cluster is InnoDB only, right? If so, is it recommended not to use MyISAM?
In principle Galera is InnoDB only, however there is limited support for MyISAM if you encapsulate your queries in a transaction. Since MyISAM is not a transactional storage engine, there is no guarantee the data will be kept identical on all nodes, which could cause data drift. Using MyISAM with Galera is therefore not advised.

Virtualized nodes should then be on SSD host storage. Not network storage because IOPS will be low. Correct?
Yes, that's correct, it's best to store the data on local SSD storage.

mysqldump is slow, right?
mysqldump dumps the entire contents of your database as a logical backup and is therefore slower than Xtrabackup.

HAProxy instances are installed on 2x cluster control servers?
HAProxy instances are usually installed on dedicated hosts, not on the CC node.

What about MySQL proxy, and use cases with that tool? And would it be better to just split query R/W at application level?
MySQL proxy can be used, but the tool is not maintained anymore and we found HAProxy and MaxScale are better options.

HAProxy can run custom scripts and these statuses can also be manually created right?
Exactly, you can do that. ClusterControl just has a few preset checks by default, but you can change them if you like.

In your experience, how does EC2 perform with a MariaDB based cluster?
According to our Benchmarks, EC2 M3.Xlarge instances showed a Read Output performance of 16914 and Write Input Performance of 31092, which is 2 times higher than a similar sized Microsoft Azure DS3 instance (16329 iops for Reads and 15900 iops for Writes). So yes, according to our test AWS might perform better than Azure for Write performance, but it will depend on your application size and requirements. A local SSD storage on a server might be recommended for higher iops performance.


Does WooServers offer PCI DSS compliant servers?
Yes, WooServers offer PCI DSS servers and are able to manage your current infrastructure that you have, either on Azure, on premises or AWS.

AAA pluggable / scriptable? A customer came up with Radius recently…
Unfortunately the authentication/authorization is limited to either ClusterControl internal AAA or LDAP only.

Also: GUI functions accessible via JSON/HTTP API ?
Yes, our most important GUI functions are available through our RPC API, so you can easily automate deployments, backups and scaling.


Severalnines Launches #MySQLHA CrowdChat


Today we launch our live CrowdChat on everything #MySQLHA!

This CrowdChat is brought to you by Severalnines and is hosted by a community of subject matter experts. CrowdChat is a community platform that works across Facebook, Twitter, and LinkedIn to allow users to discuss a topic using a specific #hashtag. This crowdchat focuses on the hashtag #MySQLHA. So if you’re a DBA, architect, CTO, or a database novice register to join and become part of the conversation!

Join this online community to interact with experts on Galera clusters. Get your questions answered and join the conversation around everything #MySQLHA.

Register free

Meet the experts

Art van Scheppingen is a Senior Support Engineer at Severalnines. He’s a pragmatic MySQL and Database expert with over 15 years experience in web development. He previously worked at Spil Games as Head of Database Engineering, where he kept a broad vision upon the whole database environment: from MySQL to Couchbase, Vertica to Hadoop and from Sphinx Search to SOLR. He regularly presents his work and projects at various conferences (Percona Live, FOSDEM) and related meetups.

Krzysztof Książek is a Senior Support Engineer at Severalnines and a MySQL DBA with experience managing complex database environments for companies like Zendesk, Chegg, Pinterest and Flipboard.

Ashraf Sharif is a System Support Engineer at Severalnines. He has previously worked as principal consultant and head of support team and delivered clustering solutions for big websites in the South East Asia region. His professional interests focus on system scalability and high availability.

Vinay Joosery is a passionate advocate and builder of concepts and businesses around Big Data computing infrastructures. Prior to co-founding Severalnines, Vinay held the post of Vice-President EMEA at Pentaho Corporation - the Open Source BI leader. He has also held senior management roles at MySQL / Sun Microsystems / Oracle, where he headed the Global MySQL Telecoms Unit, and built the business around MySQL's High Availability and Clustering product lines. Prior to that, Vinay served as Director of Sales & Marketing at Ericsson Alzato, an Ericsson-owned venture focused on large scale real-time databases.

MySQL on Docker: Introduction to Docker Swarm Mode and Multi-Host Networking


In the previous blog post, we looked into Docker’s single-host networking for MySQL containers. This time, we are going to look into the basics of multi-host networking and Docker swarm mode, a built-in orchestration tool to manage containers across multiple hosts.

Docker Engine - Swarm Mode

Running MySQL containers on multiple hosts can get a bit more complex depending on the clustering technology you choose.

Before we try to run MySQL on containers with multi-host networking, we have to understand how the image works, how many resources to allocate (disk, memory, CPU), networking (the overlay network drivers - default, flannel, weave, etc.) and fault tolerance (how the container is relocated, failed over and load balanced), because all of these will impact the overall operations, uptime and performance of the database. It is recommended to use an orchestration tool to get more manageability and scalability on top of your Docker engine cluster. The latest Docker Engine (version 1.12, released on July 14th, 2016) includes swarm mode for natively managing a cluster of Docker Engines called a Swarm. Take note that Docker Engine Swarm mode and Docker Swarm are two different projects, with different installation steps, even though they both work in a similar way.

Some of the noteworthy parts that you should know before entering the swarm world:

  • The following ports must be opened:
    • 2377 (TCP) - Cluster management
    • 7946 (TCP and UDP) - Nodes communication
    • 4789 (UDP) - Overlay network traffic
  • There are 2 types of nodes:
    • Manager - Manager nodes perform the orchestration and cluster management functions required to maintain the desired state of the swarm. Manager nodes elect a single leader to conduct orchestration tasks.
    • Worker - Worker nodes receive and execute tasks dispatched from manager nodes. By default, manager nodes are also worker nodes, but you can configure managers to be manager-only nodes.

More details in the Docker Engine Swarm documentation.

In this blog, we are going to deploy application containers on top of a load-balanced Galera Cluster on 3 Docker hosts (docker1, docker2 and docker3), connected through an overlay network. We will use Docker Engine Swarm mode as the orchestration tool.

“Swarming” Up

Let’s cluster our Docker nodes into a Swarm. Swarm mode requires an odd number of managers (obviously more than one) to maintain quorum for fault tolerance. So, we are going to use all the physical hosts as manager nodes. Note that by default, manager nodes are also worker nodes.

  1. Firstly, initialize Swarm mode on docker1. This will make the node as manager and leader:

    [root@docker1]$ docker swarm init --advertise-addr 192.168.55.111
    Swarm initialized: current node (6r22rd71wi59ejaeh7gmq3rge) is now a manager.
    
    To add a worker to this swarm, run the following command:
    
        docker swarm join \
        --token SWMTKN-1-16kit6dksvrqilgptjg5pvu0tvo5qfs8uczjq458lf9mul41hc-dzvgu0h3qngfgihz4fv0855bo \
        192.168.55.111:2377
    
    To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
  2. We are going to add two more nodes as manager. Generate the join command for other nodes to register as manager:

    [docker1]$ docker swarm join-token manager
    To add a manager to this swarm, run the following command:
    
        docker swarm join \
        --token SWMTKN-1-16kit6dksvrqilgptjg5pvu0tvo5qfs8uczjq458lf9mul41hc-7fd1an5iucy4poa4g1bnav0pt \
        192.168.55.111:2377
  3. On docker2 and docker3, run the following command to register the node:

    $ docker swarm join \
        --token SWMTKN-1-16kit6dksvrqilgptjg5pvu0tvo5qfs8uczjq458lf9mul41hc-7fd1an5iucy4poa4g1bnav0pt \
        192.168.55.111:2377
  4. Verify if all nodes are added correctly:

    [docker1]$ docker node ls
    ID                           HOSTNAME       STATUS  AVAILABILITY  MANAGER STATUS
    5w9kycb046p9aj6yk8l365esh    docker3.local  Ready   Active        Reachable
    6r22rd71wi59ejaeh7gmq3rge *  docker1.local  Ready   Active        Leader
    awlh9cduvbdo58znra7uyuq1n    docker2.local  Ready   Active        Reachable

    At the moment, we have docker1.local as the leader. 

Overlay Network

The only way to let containers running on different hosts connect to each other is by using an overlay network. It can be thought of as a container network that is built on top of another network (in this case, the physical hosts network). Docker Swarm mode comes with a default overlay network which implements a VxLAN-based solution with the help of libnetwork and libkv. You can however choose another overlay network driver like Flannel, Calico or Weave, where extra installation steps are necessary. We are going to cover more on that later in an upcoming blog post.

In Docker Engine Swarm mode, you can create an overlay network only from a manager node and it doesn’t need an external key-value store like etcd, consul or Zookeeper.

The swarm makes the overlay network available only to nodes in the swarm that require it for a service. When you create a service that uses an overlay network, the manager node automatically extends the overlay network to nodes that run service tasks.

Let’s create an overlay network for our containers. We are going to deploy Percona XtraDB Cluster and application containers on separate Docker hosts to achieve fault tolerance. These containers must be running on the same overlay network so they can communicate with each other.

We are going to name our network “mynet”. You can only create this on the manager node:

[docker1]$ docker network create --driver overlay mynet

Let’s see what networks we have now:

[docker1]$ docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
213ec94de6c9        bridge              bridge              local
bac2a639e835        docker_gwbridge     bridge              local
5b3ba00f72c7        host                host                local
03wvlqw41e9g        ingress             overlay             swarm
9iy6k0gqs35b        mynet               overlay             swarm
12835e9e75b9        none                null                local

There are now 2 overlay networks with a Swarm scope. The “mynet” network is what we are going to use today when deploying our containers. The ingress overlay network comes by default. The swarm manager uses ingress load balancing to expose the services you want externally to the swarm.

Deployment using Services and Tasks

We are going to deploy the Galera Cluster containers through services and tasks. When you create a service, you specify which container image to use and which commands to execute inside running containers. There are two types of services:

  • Replicated services - Distributes a specific number of replica tasks among the nodes, based upon the scale you set in the desired state, for example “--replicas 3”.
  • Global services - One task for the service on every available node in the cluster, for example “--mode global”. If you have 7 Docker nodes in the Swarm, there will be one container on each of them. See the sketch after this list.
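
A minimal sketch of the two modes side by side; the service names and the nginx image are only placeholders:

[docker1]$ docker service create --name app-replicated --replicas 3 --network mynet nginx
[docker1]$ docker service create --name app-global --mode global --network mynet nginx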

Docker Swarm mode has a limitation in managing persistent data storage. When a node fails, the manager will get rid of the containers and create new containers in place of the old ones to meet the desired replica state. Since a container is discarded when it goes down, we would lose the corresponding data volume as well. Fortunately for Galera Cluster, the MySQL container can be automatically provisioned with state/data when joining.

Deploying Key-Value Store

The docker image that we are going to use is from Percona-Lab. This image requires the MySQL containers to access a key-value store (only etcd is supported) for IP address discovery during cluster initialization and bootstrap. The containers will look for other IP addresses in etcd and, if there are any, start MySQL with a proper wsrep_cluster_address. Otherwise, the first container will start with the bootstrap address, gcomm://.

  1. Let’s deploy our etcd service. We will use the etcd image available here. It requires us to generate a discovery URL based on the number of etcd nodes that we are going to deploy. In this case, we are going to set up a standalone etcd container, so the command is:

    [docker1]$ curl -w "\n"'https://discovery.etcd.io/new?size=1'
    https://discovery.etcd.io/a293d6cc552a66e68f4b5e52ef163d68
  2. Then, use the generated URL as “-discovery” value when creating the service for etcd:

    [docker1]$ docker service create \
    --name etcd \
    --replicas 1 \
    --network mynet \
    -p 2379:2379 \
    -p 2380:2380 \
    -p 4001:4001 \
    -p 7001:7001 \
    elcolio/etcd:latest \
    -name etcd \
    -discovery=https://discovery.etcd.io/a293d6cc552a66e68f4b5e52ef163d68

    At this point, Docker swarm mode will orchestrate the deployment of the container on one of the Docker hosts.

  3. Retrieve the etcd service virtual IP address. We are going to use that in the next step when deploying the cluster:

    [docker1]$ docker service inspect etcd -f "{{ .Endpoint.VirtualIPs }}"
    [{03wvlqw41e9go8li34z2u1t4p 10.255.0.5/16} {9iy6k0gqs35bn541pr31mly59 10.0.0.2/24}]

    At this point, our architecture looks like this:

Deploying Database Cluster

  1. Specify the virtual IP address for etcd in the following command to deploy Galera (Percona XtraDB Cluster) containers:

    [docker1]$ docker service create \
    --name mysql-galera \
    --replicas 3 \
    -p 3306:3306 \
    --network mynet \
    --env MYSQL_ROOT_PASSWORD=mypassword \
    --env DISCOVERY_SERVICE=10.0.0.2:2379 \
    --env XTRABACKUP_PASSWORD=mypassword \
    --env CLUSTER_NAME=galera \
    perconalab/percona-xtradb-cluster:5.6
  2. The deployment takes some time while the image is downloaded to the assigned worker/manager node. You can verify the status with the following command:

    [docker1]$ docker service ps mysql-galera
    ID                         NAME                IMAGE                                  NODE           DESIRED STATE  CURRENT STATE            ERROR
    8wbyzwr2x5buxrhslvrlp2uy7  mysql-galera.1      perconalab/percona-xtradb-cluster:5.6  docker1.local  Running        Running 3 minutes ago
    0xhddwx5jzgw8fxrpj2lhcqeq  mysql-galera.2      perconalab/percona-xtradb-cluster:5.6  docker3.local  Running        Running 2 minutes ago
    f2ma6enkb8xi26f9mo06oj2fh  mysql-galera.3      perconalab/percona-xtradb-cluster:5.6  docker2.local  Running        Running 2 minutes ago
  3. We can see that the mysql-galera service is now running. Let’s list out all services we have now:

    [docker1]$ docker service ls
    ID            NAME          REPLICAS  IMAGE                                  COMMAND
    1m9ygovv9zui  mysql-galera  3/3       perconalab/percona-xtradb-cluster:5.6
    au1w5qkez9d4  etcd          1/1       elcolio/etcd:latest                    -name etcd -discovery=https://discovery.etcd.io/a293d6cc552a66e68f4b5e52ef163d68
  4. Swarm mode has an internal DNS component that automatically assigns each service in the swarm a DNS entry. So you use the service name to resolve to the virtual IP address:

    [docker2]$ docker exec -it $(docker ps | grep etcd | awk {'print $1'}) ping mysql-galera
    PING mysql-galera (10.0.0.4): 56 data bytes
    64 bytes from 10.0.0.4: seq=0 ttl=64 time=0.078 ms
    64 bytes from 10.0.0.4: seq=1 ttl=64 time=0.179 ms

    Or, retrieve the virtual IP address through the “docker service inspect” command:

    [docker1]# docker service inspect mysql-galera -f "{{ .Endpoint.VirtualIPs }}"
    [{03wvlqw41e9go8li34z2u1t4p 10.255.0.7/16} {9iy6k0gqs35bn541pr31mly59 10.0.0.4/24}]

    Our architecture now can be illustrated as below:

Deploying Applications

Finally, you can create the application service and pass the MySQL service name (mysql-galera) as the database host value:

[docker1]$ docker service create \
--name wordpress \
--replicas 2 \
-p 80:80 \
--network mynet \
--env WORDPRESS_DB_HOST=mysql-galera \
--env WORDPRESS_DB_USER=root \
--env WORDPRESS_DB_PASSWORD=mypassword \
wordpress

Once deployed, we can then retrieve the virtual IP address for wordpress service through the “docker service inspect” command:

[docker1]# docker service inspect wordpress -f "{{ .Endpoint.VirtualIPs }}"
[{p3wvtyw12e9ro8jz34t9u1t4w 10.255.0.11/16} {kpv8e0fqs95by541pr31jly48 10.0.0.8/24}]

At this point, this is what we have:

Our distributed application and database setup is now deployed by Docker containers.

Connecting to the Services and Load Balancing

At this point, the following ports are published (based on the -p flag on each “docker service create” command) on all Docker nodes in the cluster, whether or not the node is currently running the task for the service:

  • etcd - 2380, 2379, 7001, 4001
  • MySQL - 3306
  • HTTP - 80

If we connect directly to the PublishedPort, with a simple loop, we can see that the MySQL service is load balanced among containers:

[docker1]$ while true; do mysql -uroot -pmypassword -h127.0.0.1 -P3306 -NBe 'select @@wsrep_node_address'; sleep 1; done
10.255.0.10
10.255.0.8
10.255.0.9
10.255.0.10
10.255.0.8
10.255.0.9
10.255.0.10
10.255.0.8
10.255.0.9
10.255.0.10
^C

At the moment, the Swarm manager handles the load balancing internally and there is no way to configure the load balancing algorithm. We can use external load balancers to route outside traffic to these Docker nodes. If any of the Docker nodes goes down, the service will be relocated to the other available nodes.
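
For example, a minimal HAProxy TCP section in front of the published MySQL port could look like the sketch below; only docker1’s address (192.168.55.111) appears earlier in this post, so the docker2 and docker3 addresses here are assumptions:

listen mysql_swarm
    bind *:3306
    mode tcp
    balance leastconn
    server docker1 192.168.55.111:3306 check
    server docker2 192.168.55.112:3306 check
    server docker3 192.168.55.113:3306 check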

That’s all for now. In the next blog post, we’ll take a deeper look at Docker overlay network drivers for MySQL containers.

How to set up read-write split in Galera Cluster using ProxySQL


Edited on Sep 12, 2016 to correct the description of how ProxySQL handles session variables. Many thanks to Francisco Miguel for pointing this out.


ProxySQL is becoming more and more popular as an SQL-aware load balancer for MySQL and MariaDB. In previous blog posts, we covered the installation of ProxySQL and its configuration in a MySQL replication environment. We’ve covered how to set up ProxySQL to perform failovers executed from ClusterControl. At that time, Galera support in ProxySQL was a bit limited - you could configure Galera Cluster and split traffic across all nodes, but there was no easy way to implement read-write split of your traffic. The only way to do that was to create a daemon which would monitor Galera state and update the weights of backend servers defined in ProxySQL - a much more complex task than writing a small bash script.

In one of the recent ProxySQL releases, a very important feature was added - a scheduler, which allows you to execute external scripts from within ProxySQL as often as every millisecond (as long as your script can complete within this time frame). This feature creates an opportunity to extend ProxySQL and implement setups which were not easy to build in the past due to the low granularity of the cron schedule. In this blog post, we will show you how to take advantage of this new feature and create a Galera Cluster with read-write split performed by ProxySQL.

First, we need to install and start ProxySQL:

[root@ip-172-30-4-215 ~]# wget https://github.com/sysown/proxysql/releases/download/v1.2.1/proxysql-1.2.1-1-centos7.x86_64.rpm

[root@ip-172-30-4-215 ~]# rpm -i proxysql-1.2.1-1-centos7.x86_64.rpm
[root@ip-172-30-4-215 ~]# service proxysql start
Starting ProxySQL: DONE!

Next, we need to download a script which we will use to monitor Galera status. Currently it has to be downloaded separately but in the next release of ProxySQL it should be included in the rpm. The script needs to be located in /var/lib/proxysql.

[root@ip-172-30-4-215 ~]# wget https://raw.githubusercontent.com/sysown/proxysql/master/tools/proxysql_galera_checker.sh

[root@ip-172-30-4-215 ~]# mv proxysql_galera_checker.sh /var/lib/proxysql/
[root@ip-172-30-4-215 ~]# chmod u+x /var/lib/proxysql/proxysql_galera_checker.sh

If you are not familiar with this script, you can check what arguments it accepts by running:

[root@ip-172-30-4-215 ~]# /var/lib/proxysql/proxysql_galera_checker.sh
Usage: /var/lib/proxysql/proxysql_galera_checker.sh <hostgroup_id write> [hostgroup_id read] [number writers] [writers are readers 0|1} [log_file]

As we can see, we need to pass a couple of arguments: the hostgroups for writers and readers, and the number of writers which should be active at the same time. We also need to pass information on whether writers can be used as readers and, finally, the path to a log file.

Next, we need to connect to ProxySQL’s admin interface. For that you need to know credentials - you can find them in a configuration file, typically located in /etc/proxysql.cnf:

admin_variables=
{
        admin_credentials="admin:admin"
        mysql_ifaces="127.0.0.1:6032;/tmp/proxysql_admin.sock"
#       refresh_interval=2000
#       debug=true
}

Knowing the credentials and interfaces on which ProxySQL listens, we can connect to the admin interface and begin configuration.

[root@ip-172-30-4-215 ~]# mysql -P6032 -uadmin -padmin -h 127.0.0.1

First, we need to fill mysql_servers table with information about our Galera nodes. We will add them twice, to two different hostgroups. One hostgroup (with hostgroup_id of 0) will handle writes while the second hostgroup (with hostgroup_id of 1) will handle reads.

MySQL [(none)]> INSERT INTO mysql_servers (hostgroup_id, hostname, port) VALUES (0, '172.30.4.238', 3306), (0, '172.30.4.184', 3306), (0, '172.30.4.67', 3306);
Query OK, 3 rows affected (0.00 sec)

MySQL [(none)]> INSERT INTO mysql_servers (hostgroup_id, hostname, port) VALUES (1, '172.30.4.238', 3306), (1, '172.30.4.184', 3306), (1, '172.30.4.67', 3306);
Query OK, 3 rows affected (0.00 sec)

Next, we need to add information about the users which will be used by the application. We used a plain text password here, but ProxySQL also accepts hashed passwords in MySQL format.

MySQL [(none)]> INSERT INTO mysql_users (username, password, active, default_hostgroup) VALUES ('sbtest', 'sbtest', 1, 0);
Query OK, 1 row affected (0.00 sec)

What’s important to keep in mind is the default_hostgroup setting - we set it to ‘0’ which means that, unless one of the query rules says otherwise, all queries will be sent to hostgroup 0 - our writers.

At this point we need to define query rules which will handle read/write split. First, we want to match all SELECT queries:

MySQL [(none)]> INSERT INTO mysql_query_rules (active, match_pattern, destination_hostgroup, apply) VALUES (1, '^SELECT.*', 1, 0);
Query OK, 1 row affected (0.00 sec)

It is important to make sure you get the regex right. It is also crucial to note that we set the ‘apply’ column to ‘0’. This means that our rule won’t be the final one - a query, even if it matches the regex, will be tested against the next rule in the chain. You can see why we’ve done that when you look at our second rule:

MySQL [(none)]> INSERT INTO mysql_query_rules (active, match_pattern, destination_hostgroup, apply) VALUES (1, '^SELECT.*FOR UPDATE', 0, 1);
Query OK, 1 row affected (0.00 sec)

We are looking for SELECT … FOR UPDATE queries, which is why we couldn’t just stop checking our SELECT queries at the first rule. SELECT … FOR UPDATE should be routed to our write hostgroup, where the UPDATE will happen.

Those settings will work fine if autocommit is enabled and no explicit transactions are used. If your application uses transactions, one of the methods to make them work safely in ProxySQL is to use the following set of queries:

SET autocommit=0;
BEGIN;
...

The transaction is created and it will stick to the host where it was opened. You also need to have a query rule for BEGIN, which would route it to the hostgroup for writers - in our case we leverage the fact that, by default, all queries executed as the ‘sbtest’ user are routed to the writers’ hostgroup (‘0’), so there’s no need to add anything.
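
If your default hostgroup were not the writer hostgroup, an explicit rule for BEGIN could be added; a sketch, following the same pattern as the rules above:

MySQL [(none)]> INSERT INTO mysql_query_rules (active, match_pattern, destination_hostgroup, apply) VALUES (1, '^BEGIN', 0, 1);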

The second method would be to enable persistent transactions for our user (the transaction_persistent column in the mysql_users table should be set to ‘1’).
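
A sketch of how that could be done for the ‘sbtest’ user from the admin interface:

MySQL [(none)]> UPDATE mysql_users SET transaction_persistent=1 WHERE username='sbtest';
MySQL [(none)]> LOAD MYSQL USERS TO RUNTIME; SAVE MYSQL USERS TO DISK;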

ProxySQL’s handling of other SET statements and user-defined variables is another thing we’d like to discuss a bit here. ProxySQL works on two levels of routing. First - query rules. You need to make sure all your queries are routed according to your needs. Then there is connection multiplexing - even when routed to the same host, every query you issue may in fact use a different connection to the backend. This makes things hard for session variables. Luckily, ProxySQL treats all queries containing the ‘@’ character in a special way - once it detects it, it disables connection multiplexing for the duration of that session. Thanks to that, we don’t have to worry that the next query won’t know a thing about our session variable.

The only thing we need to make sure of is that we end up in the correct hostgroup before disabling connection multiplexing. To cover all cases, the ideal hostgroup in our setup would be the one with the writers. This requires a slight change in the way we set our query rules (you may need to run ‘DELETE FROM mysql_query_rules’ if you already added the query rules we mentioned earlier).

MySQL [(none)]> INSERT INTO mysql_query_rules (active, match_pattern, destination_hostgroup, apply) VALUES (1, '.*@.*', 0, 1);
Query OK, 1 row affected (0.00 sec)

MySQL [(none)]> INSERT INTO mysql_query_rules (active, match_pattern, destination_hostgroup, apply) VALUES (1, '^SELECT.*', 1, 0);
Query OK, 1 row affected (0.00 sec)

MySQL [(none)]> INSERT INTO mysql_query_rules (active, match_pattern, destination_hostgroup, apply) VALUES (1, '^SELECT.*FOR UPDATE', 0, 1);
Query OK, 1 row affected (0.00 sec)

Those two cases could become a problem in our setup, but as long as you are not affected by them (or if you used the proposed workarounds), we can proceed with the configuration. We still need to set up our script to be executed from ProxySQL:

MySQL [(none)]> INSERT INTO scheduler (id, active, interval_ms, filename, arg1, arg2, arg3, arg4, arg5) VALUES (1, 1, 1000, '/var/lib/proxysql/proxysql_galera_checker.sh', 0, 1, 1, 1, '/var/lib/proxysql/proxysql_galera_checker.log');
Query OK, 1 row affected (0.01 sec)

Additionally, because of the way Galera handles dropped nodes, we want to increase the number of attempts that ProxySQL will make before it decides a host cannot be reached.

MySQL [(none)]> SET mysql-query_retries_on_failure=10;
Query OK, 1 row affected (0.00 sec)

Finally, we need to apply all changes we made to the runtime configuration and save them to disk.

MySQL [(none)]> LOAD MYSQL USERS TO RUNTIME; SAVE MYSQL USERS TO DISK; LOAD MYSQL QUERY RULES TO RUNTIME; SAVE MYSQL QUERY RULES TO DISK; LOAD MYSQL SERVERS TO RUNTIME; SAVE MYSQL SERVERS TO DISK; LOAD SCHEDULER TO RUNTIME; SAVE SCHEDULER TO DISK; LOAD MYSQL VARIABLES TO RUNTIME; SAVE MYSQL VARIABLES TO DISK;
Query OK, 0 rows affected (0.00 sec)

Query OK, 0 rows affected (0.02 sec)

Query OK, 0 rows affected (0.00 sec)

Query OK, 0 rows affected (0.02 sec)

Query OK, 0 rows affected (0.00 sec)

Query OK, 0 rows affected (0.02 sec)

Query OK, 0 rows affected (0.00 sec)

Query OK, 0 rows affected (0.01 sec)

Query OK, 0 rows affected (0.00 sec)

Query OK, 64 rows affected (0.05 sec)

Ok, let’s see how things work together. First, verify that our script works by looking at /var/lib/proxysql/proxysql_galera_checker.log:

Fri Sep  2 21:43:15 UTC 2016 Check server 0:172.30.4.184:3306 , status ONLINE , wsrep_local_state 4
Fri Sep  2 21:43:15 UTC 2016 Check server 0:172.30.4.238:3306 , status OFFLINE_SOFT , wsrep_local_state 4
Fri Sep  2 21:43:15 UTC 2016 Changing server 0:172.30.4.238:3306 to status ONLINE
Fri Sep  2 21:43:15 UTC 2016 Check server 0:172.30.4.67:3306 , status OFFLINE_SOFT , wsrep_local_state 4
Fri Sep  2 21:43:15 UTC 2016 Changing server 0:172.30.4.67:3306 to status ONLINE
Fri Sep  2 21:43:15 UTC 2016 Check server 1:172.30.4.184:3306 , status ONLINE , wsrep_local_state 4
Fri Sep  2 21:43:15 UTC 2016 Check server 1:172.30.4.238:3306 , status ONLINE , wsrep_local_state 4
Fri Sep  2 21:43:16 UTC 2016 Check server 1:172.30.4.67:3306 , status ONLINE , wsrep_local_state 4
Fri Sep  2 21:43:16 UTC 2016 Number of writers online: 3 : hostgroup: 0
Fri Sep  2 21:43:16 UTC 2016 Number of writers reached, disabling extra write server 0:172.30.4.238:3306 to status OFFLINE_SOFT
Fri Sep  2 21:43:16 UTC 2016 Number of writers reached, disabling extra write server 0:172.30.4.67:3306 to status OFFLINE_SOFT
Fri Sep  2 21:43:16 UTC 2016 Enabling config

Looks ok. Next we can check mysql_servers table:

MySQL [(none)]> select hostgroup_id, hostname, status from mysql_servers;
+--------------+--------------+--------------+
| hostgroup_id | hostname     | status       |
+--------------+--------------+--------------+
| 0            | 172.30.4.238 | OFFLINE_SOFT |
| 0            | 172.30.4.184 | ONLINE       |
| 0            | 172.30.4.67  | OFFLINE_SOFT |
| 1            | 172.30.4.238 | ONLINE       |
| 1            | 172.30.4.184 | ONLINE       |
| 1            | 172.30.4.67  | ONLINE       |
+--------------+--------------+--------------+
6 rows in set (0.00 sec)

Again, everything looks as expected - one host is taking writes (172.30.4.184), all three are handling reads. Let’s start sysbench to generate some traffic and then we can check how ProxySQL will handle failure of the writer host.

[root@ip-172-30-4-215 ~]# while true ; do sysbench --test=/root/sysbench/sysbench/tests/db/oltp.lua --num-threads=6 --max-requests=0 --max-time=0 --mysql-host=172.30.4.215 --mysql-user=sbtest --mysql-password=sbtest --mysql-port=6033 --oltp-tables-count=32 --report-interval=1 --oltp-skip-trx=on --oltp-read-only=off --oltp-table-size=100000  run ;done

We are going to simulate a crash by killing the mysqld process on host 172.30.4.184. This is what you’ll see on the application side:

[  45s] threads: 6, tps: 0.00, reads: 4891.00, writes: 1398.00, response time: 23.67ms (95%), errors: 0.00, reconnects:  0.00
[  46s] threads: 6, tps: 0.00, reads: 4973.00, writes: 1425.00, response time: 25.39ms (95%), errors: 0.00, reconnects:  0.00
[  47s] threads: 6, tps: 0.00, reads: 5057.99, writes: 1439.00, response time: 22.23ms (95%), errors: 0.00, reconnects:  0.00
[  48s] threads: 6, tps: 0.00, reads: 2743.96, writes: 774.99, response time: 23.26ms (95%), errors: 0.00, reconnects:  0.00
[  49s] threads: 6, tps: 0.00, reads: 0.00, writes: 1.00, response time: 0.00ms (95%), errors: 0.00, reconnects:  0.00
[  50s] threads: 6, tps: 0.00, reads: 0.00, writes: 0.00, response time: 0.00ms (95%), errors: 0.00, reconnects:  0.00
[  51s] threads: 6, tps: 0.00, reads: 0.00, writes: 0.00, response time: 0.00ms (95%), errors: 0.00, reconnects:  0.00
[  52s] threads: 6, tps: 0.00, reads: 0.00, writes: 0.00, response time: 0.00ms (95%), errors: 0.00, reconnects:  0.00
[  53s] threads: 6, tps: 0.00, reads: 0.00, writes: 0.00, response time: 0.00ms (95%), errors: 0.00, reconnects:  0.00
[  54s] threads: 6, tps: 0.00, reads: 1235.02, writes: 354.01, response time: 6134.76ms (95%), errors: 0.00, reconnects:  0.00
[  55s] threads: 6, tps: 0.00, reads: 5067.98, writes: 1459.00, response time: 24.95ms (95%), errors: 0.00, reconnects:  0.00
[  56s] threads: 6, tps: 0.00, reads: 5131.00, writes: 1458.00, response time: 22.07ms (95%), errors: 0.00, reconnects:  0.00
[  57s] threads: 6, tps: 0.00, reads: 4936.02, writes: 1414.00, response time: 22.37ms (95%), errors: 0.00, reconnects:  0.00
[  58s] threads: 6, tps: 0.00, reads: 4929.99, writes: 1404.00, response time: 24.79ms (95%), errors: 0.00, reconnects:  0.00

There’s a ~5 second break, but otherwise no error was reported. Of course, your mileage may vary - it all depends on your Galera settings and your application. Such a feat might not be possible if your application uses transactions.

To summarize, we showed you how to configure read-write split in Galera Cluster using ProxySQL. There are a couple of limitations due to the way the proxy works, but as long as none of them are a blocker, you can use it and benefit from other ProxySQL features like caching or query rewriting. Please also keep in mind that the script we used for setting up read-write split is just an example which comes from ProxySQL. If you’d like it to cover more complex cases, you can easily write one tailored to your needs.
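
For reference, the read-write split itself is driven by ProxySQL query rules. A minimal rule set in the same spirit as the one used here could look like the sketch below - the rule_id values and regular expressions are only examples, adjust them to your workload:

MySQL [(none)]> INSERT INTO mysql_query_rules (rule_id, active, match_digest, destination_hostgroup, apply) VALUES (100, 1, '^SELECT .* FOR UPDATE', 0, 1);
MySQL [(none)]> INSERT INTO mysql_query_rules (rule_id, active, match_digest, destination_hostgroup, apply) VALUES (200, 1, '^SELECT ', 1, 1);
MySQL [(none)]> LOAD MYSQL QUERY RULES TO RUNTIME; SAVE MYSQL QUERY RULES TO DISK;

Anything not matched by the rules falls back to the default_hostgroup of the ProxySQL user, which should point to the writer hostgroup (0 in our case).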

Sign up for our new webinar: 9 DevOps Tips for Going in Production with Galera Cluster for MySQL / MariaDB


Galera Cluster for MySQL / MariaDB is easy to deploy, but how does it behave under real workload, scale, and during long term operation? Proof of concepts and lab tests usually work great for Galera, until it’s time to go into production. Throw in a live migration from an existing database setup and devops life just got a bit more interesting...

If this scenario sounds familiar, then this webinar is for you!

Operations is not so much about specific technologies, but about the techniques and tools you use to deploy and manage them. Monitoring, managing schema changes and pushing them into production, performance optimizations, configurations, version upgrades, backups; these are all aspects to consider – preferably before going live.

In this webinar, we’d like to guide you through 9 key tips to consider before taking Galera Cluster for MySQL / MariaDB into production.

Date & Time

Europe/MEA/APAC

Tuesday, October 11th at 09:00 BST (UK) / 10:00 CEST (Germany, France, Sweden)
Register Now

North America/LatAm

Tuesday, October 11th at 9:00 Pacific Time (US) / 12:00 Eastern Time (US)
Register Now

Agenda

  • 101 Sanity Check
  • Operating System
  • Backup Strategies
  • Replication & Sync
  • Query Performance
  • Schema Changes
  • Security / Encryption
  • Reporting
  • Recovering from disaster

Speaker

Johan Andersson, CTO, Severalnines

Johan's technical background and interest are in high performance computing as demonstrated by the work he did on main-memory clustered databases at Ericsson as well as his research on parallel Java Virtual Machines at Trinity College Dublin in Ireland. Prior to co-founding Severalnines, Johan was Principal Consultant and lead of the MySQL Clustering & High Availability consulting group at MySQL / Sun Microsystems / Oracle, where he designed and implemented large-scale MySQL systems for key customers. Johan is a regular speaker at MySQL User Conferences as well as other high profile community gatherings with popular talks and tutorials around architecting and tuning MySQL Clusters.

Join us for this live webinar, where we’ll be discussing and demonstrating how to best proceed when planning to go into production with Galera Cluster.

We look forward to “seeing” you there and to insightful discussions!

If you have any questions or would like a personalised live demo, please do contact us.


9 DevOps Tips for going in production with MySQL / MariaDB Galera Cluster: webinar replay


Many thanks to everyone who participated in this week’s webinar on ‘9 DevOps Tips for going in production with MySQL / MariaDB Galera Cluster’.

The replay and slides are now available to watch and read online:

Watch the replay | Read the slides

Galera Cluster for MySQL / MariaDB is easy to deploy, but how does it behave under real workload, scale, and during long term operation?

This is where monitoring, managing schema changes and pushing them into production, performance optimizations, configurations, version upgrades and performing backups come in.

During this webinar, our CTO Johan Andersson walked us through his tips & tricks on important aspects to consider before going live with Galera Cluster.

Agenda

  • 101 Sanity Check
  • Operating System
  • Backup Strategies
  • Replication & Sync
  • Query Performance
  • Schema Changes
  • Security / Encryption
  • Reporting
  • Recovering from disaster

Speaker

Johan Andersson, CTO, Severalnines. Johan's technical background and interest are in high performance computing as demonstrated by the work he did on main-memory clustered databases at Ericsson as well as his research on parallel Java Virtual Machines at Trinity College Dublin in Ireland. Prior to co-founding Severalnines, Johan was Principal Consultant and lead of the MySQL Clustering & High Availability consulting group at MySQL / Sun Microsystems / Oracle, where he designed and implemented large-scale MySQL systems for key customers. Johan is a regular speaker at MySQL User Conferences as well as other high profile community gatherings with popular talks and tutorials around architecting and tuning MySQL Clusters.

If you have any questions or would like a personalised live demo, please do contact us.

Schema changes in Galera cluster for MySQL and MariaDB - how to avoid RSU locks


Working as MySQL DBA, you will often have to deal with schema changes. Changes to production databases are not popular among DBAs, but they are necessary when applications add new requirements on the databases. If you manage a Galera Cluster, this is even more challenging than usual - the default method of doing schema changes (Total Order Isolation) locks the whole cluster for the duration of the alter. There are two more ways to go, though - online schema change and Rolling Schema Upgrade.

A popular method of performing schema changes, using pt-online-schema-change, has its own limitations. It can be tricky if your workload consists of long-running transactions, or if the workload is so highly concurrent that the tool cannot acquire the metadata locks needed to create triggers. Triggers themselves can become a hard stop if the table you need to alter already has triggers (unless you use a Galera Cluster based on MySQL 5.7). Foreign keys may also become a serious issue to deal with. You can find more details on those limitations in this Become a MySQL DBA blog post. A new alternative to pt-online-schema-change arrived recently - gh-ost, created by GitHub - but it’s still a new tool, and unless you have already evaluated it, you may have to stick to pt-online-schema-change for the time being.

This leaves Rolling Schema Upgrade (RSU) as the only feasible method to execute schema changes where pt-online-schema-change fails or cannot be used. Theoretically speaking, it is a non-blocking operation - you run:

SET SESSION wsrep_OSU_method=RSU;

And the rest should happen automatically once you start the DDL - the node should be desynced and the alter should not impact the rest of the cluster.
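
For completeness, a full RSU cycle on a single node could look like the sketch below (the table and index names are just an example; note that wsrep_OSU_method is set per session here, so run all statements in the same connection):

mysql> SET SESSION wsrep_OSU_method=RSU;
mysql> ALTER TABLE sbtest.sbtest3 ADD KEY idx_pad (pad);
mysql> SET SESSION wsrep_OSU_method=TOI;

The ALTER is applied locally only, so you then repeat it on every node in the cluster, one node at a time.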

Let’s check how it behaves in real life, in two scenarios. First, we have a single connection to the Galera cluster. We don’t scale out reads, we just use Galera as a way to improve the availability of our application. We will simulate it by running a sysbench workload on one of the Galera cluster nodes. We are also going to execute the RSU on this node. A screenshot with the result of this operation can be found below.

In the bottom right window you can see the output of sysbench - our application. In the top window there’s the SHOW PROCESSLIST output from the time the alter was running. As you can see, our application stalled for a couple of seconds - for the duration of the alter command (visible in the bottom left window). Graphs in ClusterControl show the stalled queries in detail:

You may say, and rightly so, that this is expected - if you write to a node where a schema change is being performed, those writes have to wait.

What if we use some sort of round-robin routing of connections? This can be done in the application (just define a pool of hosts to connect to), at the connector level, or using a proxy. Results are in the screenshot below.

As you can see, here we also have locked threads - the ones which were routed to the host where the RSU was in progress. The rest of the threads worked fine, but some of the connections stalled for the duration of the alter. Take a closer look at the length of the alter (11.78s) and the maximum response time (12.63s): some of the users experienced significant performance degradation.

One question you may want to ask: starting an ALTER in RSU desyncs the Galera node, and proxies like ProxySQL, MaxScale or HAProxy (when used together with the clustercheck script) should detect this and redirect traffic away from the desynced host - so why are commits getting locked? Unfortunately, there’s a high probability that some transactions will already be in progress, and those will get locked once the ALTER starts.

How to avoid the problem? You need to use a proxy. On its own it’s not enough, as we have just shown. But as long as your proxy removes desynced hosts from rotation, you can easily add this step to the RSU process and make sure the node is desynced and not accepting any traffic before you actually start your DDL.

mysql> SET GLOBAL wsrep_desync=1; SELECT SLEEP(20); ALTER TABLE sbtest.sbtest3 DROP INDEX idx_pad; ALTER TABLE sbtest.sbtest3 ADD KEY idx_pad (pad); SET GLOBAL wsrep_desync=0;

This should work with all proxies deployed through ClusterControl - HAProxy and MaxScale alike. ProxySQL will also handle an RSU executed in this way correctly.

Another method, which can be used with HAProxy, is to disable the backend node by setting it to maintenance state. You can do this from ClusterControl:

Make sure you ticked the correct node, confirm that’s what you want to do, and in a couple of minutes you should be good to start RSU on that node. The host will be highlighted in brown:

It’s still better to be on the safe side and verify, using SHOW PROCESSLIST (also available under ClusterControl -> Query Monitor -> Running Queries), that no traffic is hitting this node. Once you are done running your DDLs, you can enable the backend node again in the HAProxy tab in ClusterControl, and traffic will once again be routed to this node.
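
If you manage HAProxy outside of ClusterControl, the same can be done from the command line through the HAProxy admin socket, provided it is configured with 'stats socket ... level admin'. The backend name and socket path below are assumptions - use whatever your haproxy.cfg defines:

$ echo "disable server haproxy_3307_ro/172.30.4.184" | socat stdio /var/run/haproxy.socket
# run the RSU on 172.30.4.184, then put the node back in rotation:
$ echo "enable server haproxy_3307_ro/172.30.4.184" | socat stdio /var/run/haproxy.socket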

As you can see, even with load balancers in place, running an RSU may seriously impact the performance and availability of your database. Most likely it’ll affect just a small subset of users (a few percent of connections), but it’s still not something we’d like to see. Using properly configured proxies (like those deployed by ClusterControl), and ensuring you first desync the node and only then execute the RSU, is enough to avoid this type of problem.

High Availability on a Shoestring Budget - Deploying a Minimal Two Node MySQL Galera Cluster


We regularly get questions about how to set up a Galera cluster with just 2 nodes. The documentation clearly states you should have at least 3 Galera nodes to avoid network partitioning. But there are some valid reasons for considering a 2 node deployment, e.g., if you want to achieve database high availability but have a limited budget to spend on a third database node. Or perhaps you are running Galera in a development/sandbox environment and prefer a minimal setup.

Galera implements a quorum-based algorithm to select a primary component through which it enforces consistency. The primary component needs to have a majority of votes, so in a 2 node system, a failure of either node would leave the survivor without a majority, resulting in a split brain. Fortunately, it is possible to add garbd (Galera Arbitrator Daemon), a lightweight stateless daemon that can act as the odd node. Arbitrator failure does not affect cluster operations, and a new instance can be reattached to the cluster at any time. There can be several arbitrators in the cluster.

ClusterControl has support for deploying garbd on non-database hosts.
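
ClusterControl handles the installation and configuration of garbd for you, but for reference, starting it manually boils down to something like the following - the node addresses and group name are placeholders, and the group name must match the wsrep_cluster_name of your cluster:

$ garbd --address "gcomm://192.168.55.171:4567,192.168.55.172:4567" --group my_galera_cluster --daemon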

Normally a Galera cluster needs at least three hosts to be fully functional; however, at deploy time, two nodes are enough to create a primary component. Here are the steps:

  1. Deploy a Galera cluster of two nodes,
  2. After the cluster has been deployed by ClusterControl, add garbd on the ClusterControl node.

You should end up with the below setup:

Deploy the Galera Cluster

Go to the ClusterControl deploy wizard to deploy the cluster.

Even though ClusterControl warns you a Galera cluster needs an odd number of nodes, only add two nodes to the cluster.

Deploying a Galera cluster will trigger a ClusterControl job which can be monitored at the Jobs page.

Install Garbd

Once deployment is complete, install garbd on the ClusterControl host. You will find it under Manage -> Load Balancer:

Installing garbd will trigger a ClusterControl job which can be monitored at the Jobs page. Once completed, you can verify garbd is running with a green tick icon at the top bar:
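
You can also confirm from either database node that the arbitrator has joined, since garbd counts as a regular member in the Galera status variables:

mysql> SHOW STATUS LIKE 'wsrep_cluster_size';

The value should be 3 - two database nodes plus the arbitrator.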

That’s it. Our minimal two-node Galera cluster is now ready!

Deploy an asynchronous slave to Galera Cluster for MySQL - The Easy Way


Due to its synchronous nature, Galera performance can be limited by the slowest node in the cluster. So running heavy reporting queries or making frequent backups on one node, or putting a node across a slow WAN link to a remote data center might indirectly affect cluster performance. Combining Galera and asynchronous MySQL replication in the same setup, aka Hybrid Replication, can help. A slave is loosely coupled to the cluster, and will be able to handle a heavy load without affecting the performance of the cluster. The slave can also be a live copy of the Galera cluster for disaster recovery purposes.

We had explained the different steps to set this up in a previous post. With ClusterControl 1.2.9 though, this can be automated via the web interface. A slave can be bootstrapped with a Percona XtraBackup stream from a chosen Galera master node. In case of master failure, the slave can be failed over to replicate from another Galera node. Note that master failover is available if you are using Percona XtraDB Cluster or the Codership build of Galera Cluster with GTID. If you are using MariaDB, ClusterControl supports adding a slave but not performing master failover. 

 

Preparing the Master (Galera Cluster)

A MySQL replication slave requires at least one master with GTID enabled among the Galera nodes. However, we would recommend configuring all Galera nodes as masters for better failover. GTID is required, as it is used for master failover. If you are running MySQL 5.5, you might need to upgrade to MySQL 5.6.

The following must be true for the masters:

  • At least one master among the Galera nodes
  • MySQL GTID must be enabled
  • log_slave_updates must be enabled
  • Master’s MySQL port is accessible by ClusterControl and slaves

To configure a Galera node as master, change the MySQL configuration file for that node as per below:

server_id=<must be unique across all mysql servers participating in replication>
binlog_format=ROW
log_slave_updates=1
log_bin=binlog
gtid_mode=ON
enforce_gtid_consistency=1

 

Preparing the Slave

For the slave, you need a separate host or VM, with or without MySQL installed. If MySQL is not installed and you choose to have ClusterControl install it on the slave, ClusterControl will perform the necessary actions to prepare the slave: for example, configure the root password (based on monitored_mysql_root_password), create the slave user (based on repl_user, repl_password), configure MySQL, start the server and start replication. The MySQL package used will be based on the Galera vendor, for example, if you are running Percona XtraDB Cluster, ClusterControl will prepare the slave using Percona Server.

In short, we must perform the following actions beforehand:

  • The slave node must be accessible using passwordless SSH from the ClusterControl server
  • MySQL port (default 3306) and netcat port 9999 on the slave are open for connections.
  • You must configure the following options in the ClusterControl configuration file for the respective cluster ID under /etc/cmon.cnf or /etc/cmon.d/cmon_<cluster ID>.cnf:
    • repl_user=<the replication user>
    • repl_password=<password for replication user>
    • monitored_mysql_root_password=<the mysql root password of all nodes including slave>
  • The slave configuration template file must be configured beforehand, and must have at least the following variables defined in the MySQL configuration template:
    • server_id
    • basedir
    • datadir

To prepare the MySQL configuration file for the slave, go to ClusterControl > Manage > Configurations > Template Configuration files > edit my.cnf.slave and add the following lines:

[mysqld]
bind-address=0.0.0.0
gtid_mode=ON
log_bin=binlog
log_slave_updates=1
enforce_gtid_consistency=ON
expire_logs_days=7
server_id=1001
binlog_format=ROW
slave_net_timeout=60
basedir=/usr
datadir=/var/lib/mysql

 

Attaching a Slave via ClusterControl

Let’s now add our slave using ClusterControl. The architecture looks like this:

Our example cluster is running MySQL Galera Cluster (Codership). The same steps apply for Percona XtraDB Cluster, although MariaDB 10 has minor differences in step #1 and #6.

1. Configure the Galera nodes as masters. Go to ClusterControl > Manage > Configurations, click Edit/View on each configuration file and append the following lines under the [mysqld] directive:
galera1:

server_id=101
binlog_format=ROW
log_slave_updates=1
log_bin=binlog
expire_logs_days=7
gtid_mode=ON
enforce_gtid_consistency=1

galera2:

server_id=102
binlog_format=ROW
log_slave_updates=1
log_bin=binlog
expire_logs_days=7
gtid_mode=ON
enforce_gtid_consistency=1

galera3:

server_id=103
binlog_format=ROW
log_slave_updates=1
log_bin=binlog
expire_logs_days=7
gtid_mode=ON
enforce_gtid_consistency=1

2. Perform a rolling restart from ClusterControl > Manage > Upgrades > Rolling Restart. Optionally, you can restart one node at a time under ClusterControl > Nodes > select the corresponding node > Shutdown > Execute, and then start it again.

3. You should see that ClusterControl detects the newly configured master nodes, as per the screenshot below:

4. On the ClusterControl node, setup passwordless SSH to the slave node:

$ ssh-copy-id -i ~/.ssh/id_rsa 192.168.50.111

5. Then, ensure the following lines exist in the corresponding cmon.cnf or cmon_<cluster ID>.cnf:

repl_user=slave
repl_password=slavepassword123
monitored_mysql_root_password=myr00tP4ssword

Restart CMON daemon to apply the changes:

$ service cmon restart

6. Go to ClusterControl > Manage > Configurations > Create New Template or Edit/View an existing template, and add the lines from the slave configuration template shown earlier (my.cnf.slave).

7. Now, we are ready to add the slave. Go to ClusterControl > Cluster Actions > Add Replication Slave. Choose a master and the configuration file as per the example below:

Click on Proceed. A job will be triggered and you can monitor the progress at ClusterControl > Logs > Jobs. Once the process is complete, the slave will show up in your Overview page as highlighted in the following screenshot:

You will notice there are 4 green tick icons for master. This is because we have configured our slave to produce a binlog, which is required for GTID. Thus, the node is capable of becoming a master for another slave.
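
It is also a good idea to double check replication health directly on the slave - a standard MySQL check, not specific to ClusterControl:

mysql> SHOW SLAVE STATUS\G

Slave_IO_Running and Slave_SQL_Running should both report Yes, and Retrieved_Gtid_Set / Executed_Gtid_Set should keep advancing as writes hit the cluster.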

 

Failover and Recovery

To perform failover in case the designated master goes down, just go to Nodes > select the slave node > Failover Replication Slave > Execute and choose a new master similar to the screenshot below:

You can also stage the slave with data from the master by going to Nodes > select the slave node > Stage Replication Slave > Execute to re-initialize the slave:

This process will stop the slave instance, remove the datadir content, stream a backup from the master using Percona Xtrabackup, and then restart the slave. This can take some time depending on your database size, network and IO.
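
Under the hood, this is roughly equivalent to streaming a backup over the network yourself. As a rough sketch, in case you ever need to do it manually (the slave IP and netcat port are assumptions, and MySQL on the slave must be stopped with its datadir emptied first):

# on the slave, listen for the stream and extract it into the datadir (nc syntax may differ per netcat variant)
$ nc -l 9999 | xbstream -x -C /var/lib/mysql
# on the Galera node acting as master, stream the backup out
$ innobackupex --stream=xbstream /tmp | nc 192.168.50.111 9999
# back on the slave, prepare the backup and fix ownership
$ innobackupex --apply-log /var/lib/mysql
$ chown -R mysql:mysql /var/lib/mysql

After that, you would still have to start MySQL, point the slave at the master with CHANGE MASTER TO ... MASTER_AUTO_POSITION=1 and run START SLAVE - all of which ClusterControl does for you as part of this job.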

That’s it folks!

Automation & Management of Galera Clusters for MySQL, MariaDB & Percona XtraDB: New 1-Day Online Training Course


Galera Cluster For System Administrators, DBAs And DevOps

Galera Cluster for MySQL, MariaDB and Percona XtraDB involves more effort and resources to administer than standalone systems. If you would like to learn how to best deploy, monitor, manage and scale your database cluster(s), then this new online training course is for you!

The course is designed for system administrators & database administrators looking to gain more in depth expertise in the automation and management of Galera Clusters.

What: A one-day, instructor-led, Galera Cluster management training course

When: The first training course will take place on June 12th 2015 - European time zone
Please register your interest also if you’re outside of that time zone, as we will be scheduling further dates/courses

Where: In a virtual classroom as well as a virtual lab for hands-on lab exercises

How: Reserve your seat online and we will get back to you with all the relevant details

Who: The training is delivered by Severalnines & BOS-it GmbH

You will learn about:

  • Galera Cluster, system architecture & multi-data centre setups
  • Automated deployment & node / cluster recovery
  • How to best migrate data into Galera Cluster
  • Monitoring & troubleshooting basics
  • Load balancing and cluster management techniques

This course is all about hands-on lab exercises! Learn from the experts without having to leave your home or office!

High availability cluster configurations tend to be complex, but once designed, they tend to be duplicated many times with minimal variation. Automation can be applied to provisioning, upgrading, patching and scaling. DBAs and sysadmins can then focus on more critical tasks, such as performance tuning, query design, data modeling or providing architectural advice to application developers. A well managed system mitigates operational risk, which can result in significant savings and reduced downtime.

To learn how to best deploy, monitor, manage and scale Galera Cluster, click here for more information and to sign up. 

The number of seats is limited, so make sure you register soon!


 

Or why not talk to us directly if you’re at Percona Live: MySQL Conference & Expo 2015 next week?

We’ll be at booth number 417, the one with the balloons and the S9s t-shirts, so come and grab a t-shirt as stocks last! And of course, we’ll be happy to talk about database clustering, ClusterControl and our new training course …

Note that we’ll be giving away one seat of our new training course (to the value of €750) at the conference as part of this year’s passport programme; so make sure to get your passport stamped at our booth!


 
