A plain dump doesn’t scale. If you have huge tables with billions of rows, the backup process can become a nightmare, especially when you need to restore.
This method performs a full backup of a large MySQL database, such as a Zabbix database (any version), with a focus on fast disaster recovery. For this task I chose XtraBackup, a backup tool from Percona that takes hot (online) copies.
First, you need to download and install XtraBackup:
http://www.percona.com/downloads/XtraBackup/LATEST/
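If the Percona package repository is configured, XtraBackup can also be installed straight from the package manager; the package name below is the usual one, but check the download page for your distribution:

# RHEL/CentOS (assumes the Percona yum repository is enabled)
yum install percona-xtrabackup

# Debian/Ubuntu (assumes the Percona apt repository is enabled)
apt-get install percona-xtrabackup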
XtraBackup offers a lot of parameters, so this script is intended to be as simple as possible.
Make sure that at least your history tables use InnoDB. In my case, a Zabbix database with 300 GB of data takes about 3 hours for a full backup.
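If you are not sure which engine your tables use, you can check them first; a minimal sketch, assuming the database is called zabbix and converting one table as an example (an ALTER TABLE on a big history table can take hours):

# list the storage engine of every table in the zabbix database
mysql -u YOURUSER -p -e "SELECT table_name, engine FROM information_schema.tables WHERE table_schema='zabbix';"

# convert a single table to InnoDB if needed
mysql -u YOURUSER -p -e "ALTER TABLE zabbix.history ENGINE=InnoDB;"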
Create script /var/lib/xtrabackup/mysql-fullbackup.sh:
https://github.com/xsbr/zabbixzone/blob/master/mysql-fullbackup.sh
#!/bin/bash
#
# mysql-fullbackup.sh
# v0.1 - 20120921
#
# Full Backup for Zabbix w/MySQL
#
# Author: Ricardo Santos (rsantos at gmail.com)
# http://zabbixzone.com
#
MYSQLUSER="YOURUSER"
MYSQLPASS="YOURPASSWORD"
MYSQLCNF="/etc/my.cnf"
MYSQLDIR="/var/lib/mysql"
BASEDIR="/var/lib/xtrabackup"
BKPDIR="${BASEDIR}/lastbackup"
BKPTEMPDIR="${BASEDIR}/tempbackup"

# Memory used in stage 2
USEMEMORY="1GB"

# create basedir
mkdir -p ${BASEDIR}

# remove temporary dir
if [ -d "${BKPTEMPDIR}" ]; then
    rm -rf ${BKPTEMPDIR}
fi

# do backup - stage 1
innobackupex --defaults-file=${MYSQLCNF} --user=${MYSQLUSER} --no-timestamp --password=${MYSQLPASS} ${BKPTEMPDIR}

# do backup - stage 2 (prepare backup for restore)
innobackupex --apply-log --use-memory=${USEMEMORY} ${BKPTEMPDIR}

# backup my.cnf
cp -pf ${MYSQLCNF} ${BKPTEMPDIR}/my.cnf

# keep only the lastbackup
if [ -d "${BKPDIR}" ]; then
    if [ -d "${BKPDIR}.old" ]; then
        rm -rf ${BKPDIR}.old
    fi
    rm -rf ${BKPDIR}
fi

chown -R mysql: ${BKPTEMPDIR}
mv ${BKPTEMPDIR} ${BKPDIR}
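Before scheduling it, run the script once by hand; innobackupex ends each successful stage with “completed OK!”, so a quick sanity check could look like this (the log path is just an example):

# first manual run, capturing all output
/var/lib/xtrabackup/mysql-fullbackup.sh >/tmp/firstrun.log 2>&1

# both stages should report success
grep "completed OK!" /tmp/firstrun.log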
Adjust the permissions:
chmod +x /var/lib/xtrabackup/mysql-fullbackup.sh
Configure cron to run the backup every day at 04:15 AM; note that the line below includes a user field, so it belongs in /etc/crontab (or a file under /etc/cron.d):
15 04 * * * root /var/lib/xtrabackup/mysql-fullbackup.sh >/var/lib/xtrabackup/lastrun.log 2>&1
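Since the cron line redirects all output to lastrun.log, the nightly run can be verified the next morning:

# a successful run ends with "completed OK!"
tail -n 20 /var/lib/xtrabackup/lastrun.log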
If you ever need to restore, it’s very simple:
# stop MySQL
service mysql stop

# move the backed-up files into place
cd /var/lib
mv mysql mysqlcrashed
mv xtrabackup/lastbackup mysql

# start MySQL
service mysql start
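On a systemd-based distribution the same procedure applies with systemctl instead of the service command (the unit may be called mysql or mysqld depending on the distribution); for example:

# stop MySQL (unit name may differ)
systemctl stop mysql

cd /var/lib
mv mysql mysqlcrashed
mv xtrabackup/lastbackup mysql

# start MySQL again
systemctl start mysql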
Thank you Ricardo.
Actually, mysqldump works perfectly.
I worked on a Zabbix DB backup task a year ago and tried to solve its main problem, low performance. In the end I dropped all these dirty methods, hotcopy included.
On the test server I had a single SSD as storage (later that company bought a couple of them for RAID10, but by then I had left), a SATA3 HDD as backup storage, and *default* mysqldump with a couple of options (which didn’t matter much).
I didn’t stop the mysql process, and that caused no problems: every backup was usable, and monitoring didn’t stop during the backup. It all ran on an AMD Phenom CPU without issues.
It’s a pity I don’t remember the size of the DB.
(Sorry for my bad English)
Hello,
I’m trying to implement a full backup, one table at a time, based on mysqldump (mysqldump -R --opt --extended-insert=FALSE), and during the backups the Zabbix server stops collecting data and I get alarms. How can I work around this?
Thanks in advance
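One likely cause (not discussed in the original post): --opt is enabled by default and implies --lock-tables, which blocks Zabbix’s inserts for the whole duration of the dump. If all tables are InnoDB, mysqldump’s --single-transaction option takes a consistent snapshot without locking; a sketch:

# consistent, non-locking dump of an all-InnoDB database
mysqldump -R --single-transaction --extended-insert=FALSE zabbix >zabbix.sql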