EDIT – It appears this configuration wasn't working with version 1.3.20 due to the "option nolinger" feature. I have removed it from the configuration and can confirm the setup now works well from v1.3.15 to v1.3.20. With that option removed, you'll also notice a significant increase in TIME_WAIT sessions, as well as ip_conntrack_count increasing from ~150 to ~925.

This post summarizes my reflections on failover, redundancy, and ultimately scaling MySQL databases using load-balancing software known as HAProxy.

[Diagram: haproxy-01]

At my current employer, we have been using HAProxy to build very simple server clusters to help clients scale their databases. It works for most people assuming their application:

  • Has an acceptable ratio of reads to writes (e.g. 100:1)
  • Can separate reads and writes at the application level

If your read/write ratio is lower, that’s when you need to look into different scaling solutions such as sharding.
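Splitting reads and writes at the application level usually comes down to one routing decision per query. The sketch below illustrates that decision in shell; the two VIP addresses are the ones used by the HAProxy frontends later in this post, and a real application would make this choice at the connection-pool level rather than by inspecting query strings.

```shell
#!/bin/bash
# Route SELECTs to the read VIP and everything else to the write VIP.
WRITE_VIP="172.16.0.50"   # db_write frontend
READ_VIP="172.16.0.51"    # db_read frontend

route_query() {
    case "$1" in
        SELECT*|select*) echo "$READ_VIP" ;;   # reads can go to any replica
        *)               echo "$WRITE_VIP" ;;  # writes must hit the master
    esac
}

route_query "SELECT id FROM users"        # -> 172.16.0.51
route_query "INSERT INTO users VALUES(1)" # -> 172.16.0.50
```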

I’ve designed a slightly more complex HAProxy configuration file which load-balances requests to databases. It detects failures such as broken replication and offline servers, and adjusts the availability of servers accordingly.

Each database server is running an xinetd daemon. Port 9201 is used to monitor replication and port 9200 is used to monitor mysql status. These ports are monitored by HAProxy as you will see in the configuration file below.

HAProxy backend to monitor replication

backend db01_replication
	mode tcp
	balance roundrobin
	option tcpka
	option httpchk
	server db01 172.16.0.60:3306 check port 9201 inter 1s rise 1 fall 1

HAProxy backend to monitor mysql status

backend db01_status
	mode tcp
	balance roundrobin
	option tcpka
	option httpchk
	server db01 172.16.0.60:3306 check port 9200 inter 1s rise 2 fall 2

I modified the mysqlchk_status.sh script found at SysBible to create my own version.

The mysqlchk_replication.sh script is similar to the one above, except that it checks a few other variables such as Slave_IO_Running, Slave_SQL_Running and Seconds_Behind_Master. A success will always return a '200 OK' and a failure will always return a '503 Service Unavailable'.
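The replication script itself isn't reproduced in this post, but its decision logic might look like the sketch below. The column names come from MySQL's SHOW SLAVE STATUS output; the 120-second lag threshold is an assumption rather than a value from the original script, and the logic is isolated into a function so it can be exercised with canned output instead of a live slave.

```shell
#!/bin/bash
# Decide replication health from SHOW SLAVE STATUS output. In production the
# input would come from: mysql -e 'SHOW SLAVE STATUS\G'
check_replication() {
    local status="$1" max_lag=120   # max_lag is an assumed threshold

    local io sql lag
    io=$(echo "$status"  | awk '/Slave_IO_Running:/  {print $2}')
    sql=$(echo "$status" | awk '/Slave_SQL_Running:/ {print $2}')
    lag=$(echo "$status" | awk '/Seconds_Behind_Master:/ {print $2}')

    # Both replication threads must be running, and the slave must not be
    # too far behind its master.
    if [ "$io" = "Yes" ] && [ "$sql" = "Yes" ] && \
       [ "$lag" != "NULL" ] && [ "$lag" -le "$max_lag" ] 2>/dev/null; then
        echo "HTTP/1.1 200 OK"
    else
        echo "HTTP/1.1 503 Service Unavailable"
    fi
}

check_replication "Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Seconds_Behind_Master: 3"       # -> HTTP/1.1 200 OK
```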

[Diagram: servers-01]

My test setup

  • 2 HAProxy load-balancers in Active-Passive mode (VRRP using Keepalived)
  • 2 MySQL database servers with Master-Master replication in Active-Passive mode
  • 3 MySQL database servers with slave replication (read-only)
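The Keepalived side of the setup isn't shown in this post; a minimal VRRP instance for the active load balancer might look like the fragment below. The interface name, virtual_router_id and priorities are assumptions — only the two VIPs come from the HAProxy configuration further down.

```
# /etc/keepalived/keepalived.conf (active LB)
vrrp_instance VI_DB {
    state MASTER
    interface eth0            # assumed interface name
    virtual_router_id 51      # assumed ID; must match on both load balancers
    priority 101              # the passive LB would use a lower value, e.g. 100
    advert_int 1
    virtual_ipaddress {
        172.16.0.50           # write VIP
        172.16.0.51           # read VIP
    }
}
```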

Failure scenarios

Based on a small set of failure scenarios, we can determine how the load balancers should direct traffic. We obviously don't want to send read requests to a database server that isn't replicating from its master, and we don't want to send writes to a server that is offline. The examples below describe how HAProxy will react in those scenarios.

1. Replication breaks, lags, or stops working on DB02

[Diagrams: haproxy-02, servers-02]

  • DB01 becomes the master database server.
  • HAProxy stops sending requests to DB02 and DB05 (its slave).
  • Despite this, DB01 and DB05 are still able to receive replicated data from DB02.

2. Replication breaks, lags, or stops working on DB01

[Diagrams: haproxy-03, servers-03]

  • DB02 becomes the master database server.
  • HAProxy stops sending requests to DB01, DB03 and DB04 (its slaves).
  • Despite this, DB02, DB03 and DB04 are still able to receive replicated data from DB01.

3. Replication breaks, lags, or stops working on DB01 & DB02

[Diagrams: haproxy-04, servers-04]

  • There is no writable master database server. Service is severely degraded and action should be taken to bring one master server back into replication.
  • This is a split-brain problem. Both servers are online, but they aren’t replicating each other.
  • HAProxy only sends read requests to DB01 and DB02.
  • HAProxy stops sending requests to DB03, DB04 and DB05 (the slaves).
  • Despite this, DB03 and DB04 are still able to receive replicated data from DB01.
  • Despite this, DB05 is still able to receive replicated data from DB02.

4. DB02 is offline, due to a server crash or something similar

[Diagrams: haproxy-05, servers-05]

  • DB01 becomes the master database server.
  • HAProxy stops sending requests to DB02 and DB05 (its slave).
  • DB05 can’t receive replicated data from DB02.
  • DB01 goes into backup mode which can have different settings to support more concurrency, send alerts, etc.

5. DB01 is offline, due to a server crash or something similar

[Diagrams: haproxy-06, servers-06]

  • DB02 becomes the master database server.
  • HAProxy stops sending requests to DB01, DB03 and DB04 (its slaves).
  • DB03 and DB04 can’t receive replicated data from DB01.
  • DB02 goes into backup mode which can have different settings to support more concurrency, send alerts, etc.

6. DB01 and DB02 are offline, due to a server crash or something similar

[Diagrams: haproxy-07, servers-07]

  • There is no master database server.
  • HAProxy stops sending requests to all DB servers.
  • Call your sysadmin because your website is probably down.

Download

WARNING / DISCLAIMER
This configuration has not been tested in a production environment and should be used at your own risk.

The scripts and config files are reproduced below:

The xinetd config file

# /etc/xinetd.d/mysqlchk
#
service mysqlchk_write
{
	flags           = REUSE
	socket_type     = stream
	port            = 9200
	wait            = no
	user            = nobody
	server          = /opt/mysqlchk_status.sh
	log_on_failure  += USERID
	disable         = no
	# recommended: restrict to the IPs that need to connect (security purposes)
	only_from       = 172.16.0.0/24
}

service mysqlchk_read
{
	flags           = REUSE
	socket_type     = stream
	port            = 9201
	wait            = no
	user            = nobody
	server          = /opt/mysqlchk_replication.sh
	log_on_failure  += USERID
	disable         = no
	# recommended: restrict to the IPs that need to connect (security purposes)
	only_from       = 172.16.0.0/24
}
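One detail worth noting: xinetd resolves service names through /etc/services, so the two names above need entries there (or the services need a `type = UNLISTED` attribute). The entries might look like this:

```
# /etc/services - add at the bottom:
mysqlchk_write    9200/tcp    # mysqlchk status check
mysqlchk_read     9201/tcp    # mysqlchk replication check
```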

The mysqlchk_status script

#!/bin/bash
#
# This script checks if a mysql server is healthy running on localhost. It will
# return:
# "HTTP/1.x 200 OK\r" (if mysql is running smoothly)
# - OR -
# "HTTP/1.x 503 Service Unavailable\r" (else)
#
# The purpose of this script is to make haproxy capable of monitoring mysql properly
#
# Author: Unai Rodriguez
#
# It is recommended that a low-privileged mysql user is created to be used by
# this script. Something like this:
#
# mysql> GRANT SELECT on mysql.* TO 'mysqlchkusr'@'localhost' \
#     -> IDENTIFIED BY '257retfg2uysg218' WITH GRANT OPTION;
# mysql> flush privileges;
#
# Script modified by Alex Williams - August 4, 2009
# - removed the need to write to a tmp file, instead store results in memory

MYSQL_HOST="172.16.0.60"
MYSQL_PORT="3306"
MYSQL_USERNAME="replication_user"
MYSQL_PASSWORD="replication_pass"

# We perform a simple query that should return a few results :-p
ERROR_MSG=`/usr/bin/mysql --host=$MYSQL_HOST --port=$MYSQL_PORT --user=$MYSQL_USERNAME --password=$MYSQL_PASSWORD -e "show databases;" 2>/dev/null`

# Check the output. If it is not empty then everything is fine and we return
# '200 OK'. Else, we return '503 Service Unavailable'.
if [ "$ERROR_MSG" != "" ]
then
	# mysql is fine, return http 200
	/bin/echo -e "HTTP/1.1 200 OK\r\n"
	/bin/echo -e "Content-Type: text/plain\r\n"
	/bin/echo -e "\r\n"
	/bin/echo -e "MySQL is running.\r\n"
	/bin/echo -e "\r\n"
else
	# mysql is down, return http 503
	/bin/echo -e "HTTP/1.1 503 Service Unavailable\r\n"
	/bin/echo -e "Content-Type: text/plain\r\n"
	/bin/echo -e "\r\n"
	/bin/echo -e "MySQL is *down*.\r\n"
	/bin/echo -e "\r\n"
fi
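The script's whole decision reduces to "did the query print anything?". Isolating that branch makes both responses visible without a live MySQL server; the canned inputs below stand in for real `show databases` output.

```shell
#!/bin/bash
# Mirror the if/else in mysqlchk_status.sh: non-empty query output -> 200.
status_line() {
    if [ -n "$1" ]; then
        echo "HTTP/1.1 200 OK"
    else
        echo "HTTP/1.1 503 Service Unavailable"
    fi
}

status_line "information_schema mysql"   # -> HTTP/1.1 200 OK
status_line ""                           # -> HTTP/1.1 503 Service Unavailable
```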

The HAProxy config file

WARNING / DISCLAIMER
This configuration has not been tested in a production environment and should be used at your own risk.

 


# HAProxy configuration - haproxy-db.cfg

##
## FRONTEND ##
##

# Load-balanced IPs for DB writes and reads
#
frontend db_write
	bind 172.16.0.50:3306
	default_backend cluster_db_write

frontend db_read
	bind 172.16.0.51:3306
	default_backend cluster_db_read

# Monitor DB server availability
frontend monitor_db01
	# set db01_backup to 'up' or 'down'
	bind 127.0.0.1:9301
	mode http
	#option nolinger
	acl no_repl_db01 nbsrv(db01_replication) eq 0
	acl no_repl_db02 nbsrv(db02_replication) eq 0
	acl no_db01 nbsrv(db01_status) eq 0
	acl no_db02 nbsrv(db02_status) eq 0
	monitor-uri /dbs
	monitor fail unless no_repl_db01 no_repl_db02 no_db02
	monitor fail if no_db01 no_db02

frontend monitor_db02
	# set db02_backup to 'up' or 'down'
	bind 127.0.0.1:9302
	mode http
	#option nolinger
	acl no_repl_db01 nbsrv(db01_replication) eq 0
	acl no_repl_db02 nbsrv(db02_replication) eq 0
	acl no_db01 nbsrv(db01_status) eq 0
	acl no_db02 nbsrv(db02_status) eq 0
	monitor-uri /dbs
	monitor fail unless no_repl_db01 no_repl_db02 no_db01
	monitor fail if no_db01 no_db02

frontend monitor_db03
	# set db03 read-only slave to 'down'
	bind 127.0.0.1:9303
	mode http
	#option nolinger
	acl no_repl_db03 nbsrv(db03_replication) eq 0
	acl no_repl_db01 nbsrv(db01_replication) eq 0
	acl db02 nbsrv(db02_status) eq 1
	monitor-uri /dbs
	monitor fail if no_repl_db03
	monitor fail if no_repl_db01 db02

frontend monitor_db04
	# set db04 read-only slave to 'down'
	bind 127.0.0.1:9304
	mode http
	#option nolinger
	acl no_repl_db04 nbsrv(db04_replication) eq 0
	acl no_repl_db01 nbsrv(db01_replication) eq 0
	acl db02 nbsrv(db02_status) eq 1
	monitor-uri /dbs
	monitor fail if no_repl_db04
	monitor fail if no_repl_db01 db02

frontend monitor_db05
	# set db05 read-only slave to 'down'
	bind 127.0.0.1:9305
	mode http
	#option nolinger
	acl no_repl_db05 nbsrv(db05_replication) eq 0
	acl no_repl_db02 nbsrv(db02_replication) eq 0
	acl db01 nbsrv(db01_status) eq 1
	monitor-uri /dbs
	monitor fail if no_repl_db05
	monitor fail if no_repl_db02 db01

# Monitor for split-brain syndrome
frontend monitor_splitbrain
	# set db01_splitbrain and db02_splitbrain to 'up'
	bind 127.0.0.1:9300
	mode http
	#option nolinger
	acl no_repl01 nbsrv(db01_replication) eq 0
	acl no_repl02 nbsrv(db02_replication) eq 0
	acl db01 nbsrv(db01_status) eq 1
	acl db02 nbsrv(db02_status) eq 1
	monitor-uri /dbs
	monitor fail unless no_repl01 no_repl02 db01 db02

##
## BACKEND ##
##

# Check every DB server's replication status
# - perform an http check on port 9201 (replication status)
# - set to 'down' if response is '503 Service Unavailable'
# - set to 'up' if response is '200 OK'
backend db01_replication
	mode tcp
	balance roundrobin
	option tcpka
	option httpchk
	server db01 172.16.0.60:3306 check port 9201 inter 1s rise 1 fall 1

backend db02_replication
	mode tcp
	balance roundrobin
	option tcpka
	option httpchk
	server db02 172.16.0.61:3306 check port 9201 inter 1s rise 1 fall 1

backend db03_replication
	mode tcp
	balance roundrobin
	option tcpka
	option httpchk
	server db03 172.16.0.63:3306 check port 9201 inter 1s rise 1 fall 1

backend db04_replication
	mode tcp
	balance roundrobin
	option tcpka
	option httpchk
	server db04 172.16.0.64:3306 check port 9201 inter 1s rise 1 fall 1

backend db05_replication
	mode tcp
	balance roundrobin
	option tcpka
	option httpchk
	server db05 172.16.0.65:3306 check port 9201 inter 1s rise 1 fall 1

# Check master DB servers' mysql status
# - perform an http check on port 9200 (mysql status)
# - set to 'down' if response is '503 Service Unavailable'
# - set to 'up' if response is '200 OK'
backend db01_status
	mode tcp
	balance roundrobin
	option tcpka
	option httpchk
	server db01 172.16.0.60:3306 check port 9200 inter 1s rise 2 fall 2

backend db02_status
	mode tcp
	balance roundrobin
	option tcpka
	option httpchk
	server db02 172.16.0.61:3306 check port 9200 inter 1s rise 2 fall 2

# DB write cluster
# Failure scenarios:
# - replication 'up' on db01 & db02   = writes to db01
# - replication 'down' on db02        = writes to db01
# - replication 'down' on db01        = writes to db02
# - replication 'down' on db01 & db02 = go nowhere, split-brain, cluster FAIL!
# - mysql 'down' on db02              = writes to db01_backup
# - mysql 'down' on db01              = writes to db02_backup
# - mysql 'down' on db01 & db02       = go nowhere, cluster FAIL!
backend cluster_db_write
	# - max 1 db server available at all times
	# - db01 is preferred (top of list)
	# - db_backups set their 'up' or 'down' based on results from monitor_dbs
	mode tcp
	option tcpka
	balance roundrobin
	option httpchk GET /dbs
	server db01 172.16.0.60:3306 weight 1 check port 9201 inter 1s rise 2 fall 1
	server db02 172.16.0.61:3306 weight 1 check port 9201 inter 1s rise 2 fall 1 backup
	server db01_backup 172.16.0.60:3306 weight 1 check port 9301 inter 1s rise 2 fall 2 addr 127.0.0.1 backup
	server db02_backup 172.16.0.61:3306 weight 1 check port 9302 inter 1s rise 2 fall 2 addr 127.0.0.1 backup

# DB read cluster
# Failure scenarios:
# - replication 'up' on db01 & db02   = reads on db01, db02, all db_slaves
# - replication 'down' on db02        = reads on db01, slaves of db01
# - replication 'down' on db01        = reads on db02, slaves of db02
# - replication 'down' on db01 & db02 = reads on db01_splitbrain and db02_splitbrain only
# - mysql 'down' on db02              = reads on db01_backup, slaves of db01
# - mysql 'down' on db01              = reads on db02_backup, slaves of db02
# - mysql 'down' on db01 & db02       = go nowhere, cluster FAIL!
backend cluster_db_read
	# - max 2 master db servers available at all times
	# - max N slave db servers available at all times except during split-brain
	# - dbs track 'up' and 'down' of dbs in the cluster_db_write
	# - db_backups track 'up' and 'down' of db_backups in the cluster_db_write
	# - db_splitbrains set their 'up' or 'down' based on results from monitor_splitbrain
	mode tcp
	option tcpka
	balance roundrobin
	option httpchk GET /dbs
	server db01 172.16.0.60:3306 weight 1 track cluster_db_write/db01
	server db02 172.16.0.61:3306 weight 1 track cluster_db_write/db02
	server db01_backup 172.16.0.60:3306 weight 1 track cluster_db_write/db01_backup
	server db02_backup 172.16.0.61:3306 weight 1 track cluster_db_write/db02_backup
	server db01_splitbrain 172.16.0.60:3306 weight 1 check port 9300 inter 1s rise 1 fall 2 addr 127.0.0.1
	server db02_splitbrain 172.16.0.61:3306 weight 1 check port 9300 inter 1s rise 1 fall 2 addr 127.0.0.1

	# Scaling & redundancy options
	# - db_slaves set their 'up' or 'down' based on results from monitor_dbs
	# - db_slaves should take longer to rise
	server db03_slave 172.16.0.63:3306 weight 1 check port 9303 inter 1s rise 5 fall 1 addr 127.0.0.1
	server db04_slave 172.16.0.64:3306 weight 1 check port 9304 inter 1s rise 5 fall 1 addr 127.0.0.1
	server db05_slave 172.16.0.65:3306 weight 1 check port 9305 inter 1s rise 5 fall 1 addr 127.0.0.1
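The config above exposes no visibility into which servers HAProxy currently considers 'up'. A stats listener is an easy addition for watching the failure scenarios play out; the bind address, URI and credentials below are assumptions, not part of the original configuration:

```
listen stats
	bind 172.16.0.50:8080
	mode http
	stats enable
	stats uri /haproxy
	stats auth admin:changeme	# assumed credentials - change them
```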
