This script is a simple way to back up all configuration tables (e.g. templates, host groups, hosts, triggers…) without the history data.
If you need a full backup (history included), I recommend this post: http://zabbixzone.com/zabbix/backuping-full-database/
As the result is very small (around 30 MB), it is possible to run this backup many times per day.
https://github.com/xsbr/zabbixzone/blob/master/zabbix-mysql-backupconf.sh
#!/bin/bash
#
# zabbix-mysql-backupconf.sh
# v0.2 - 20111105
#
# Configuration Backup for Zabbix 1.8 w/MySQL
#
# Author: Ricardo Santos (rsantos at gmail.com)
# http://zabbixzone.com
#
# Thanks for suggestions from:
# - Oleksiy Zagorskyi (zalex)
# - Petr Jendrejovsky
#

# mysql config
DBHOST="localhost"
DBNAME="zabbix"
DBUSER="zabbix"
DBPASS="YOURMYSQLPASSWORDHERE"

# some tools
MYSQLDUMP="`which mysqldump`"
GZIP="`which gzip`"
DATEBIN="`which date`"
MKDIRBIN="`which mkdir`"

# target path - one timestamped directory per run
MAINDIR="/var/lib/zabbix/backupconf"
DUMPDIR="${MAINDIR}/`${DATEBIN} +%Y%m%d%H%M`"
${MKDIRBIN} -p ${DUMPDIR}

# configuration tables: dumped with schema and data
CONFTABLES=( actions applications autoreg_host conditions config dchecks dhosts \
drules dservices escalations expressions functions globalmacro graph_theme \
graphs graphs_items groups help_items hostmacro hosts hosts_groups \
hosts_profiles hosts_profiles_ext hosts_templates housekeeper httpstep \
httpstepitem httptest httptestitem ids images items items_applications \
maintenances maintenances_groups maintenances_hosts maintenances_windows \
mappings media media_type node_cksum nodes opconditions operations \
opmediatypes profiles proxy_autoreg_host proxy_dhistory proxy_history regexps \
rights screens screens_items scripts service_alarms services services_links \
services_times sessions slides slideshows sysmaps sysmaps_elements \
sysmaps_link_triggers sysmaps_links timeperiods trigger_depends triggers \
user_history users users_groups usrgrp valuemaps )

# tables with large data: dumped schema-only
DATATABLES=( acknowledges alerts auditlog_details auditlog events \
history history_log history_str history_str_sync history_sync history_text \
history_uint history_uint_sync trends trends_uint )

# CONFTABLES
for table in ${CONFTABLES[*]}; do
    DUMPFILE="${DUMPDIR}/${table}.sql"
    echo "Backing up table ${table}"
    ${MYSQLDUMP} -R --opt --extended-insert=FALSE \
        -h ${DBHOST} -u ${DBUSER} -p${DBPASS} ${DBNAME} --tables ${table} >${DUMPFILE}
    ${GZIP} -f ${DUMPFILE}
done

# DATATABLES
for table in ${DATATABLES[*]}; do
    DUMPFILE="${DUMPDIR}/${table}.sql"
    echo "Backing up schema of table ${table}"
    ${MYSQLDUMP} -R --opt --no-data \
        -h ${DBHOST} -u ${DBUSER} -p${DBPASS} ${DBNAME} --tables ${table} >${DUMPFILE}
    ${GZIP} -f ${DUMPFILE}
done

echo
echo "Backup Completed - ${DUMPDIR}"
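Since each run is so small, you can schedule it frequently with cron. A minimal sketch (the install path and schedule here are just examples, adjust to your setup):

# /etc/cron.d/zabbix-backupconf - run the config backup every 6 hours
0 */6 * * * root /usr/local/bin/zabbix-mysql-backupconf.sh >/dev/null 2>&1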
Hi there, nice script!
Did you ever consider doing the backup with Percona XtraBackup? It can do incremental backups and is quite nice software (so is Percona Server, I think); see http://www.percona.com/software/percona-xtrabackup/
Hi 0siris,
Thanks for the feedback.
I’ll talk about XtraBackup in the next post. I’m using it and it’s really amazing.
Great! Thank you!
Just one note: it would be better to use /bin/bash or something similar, because with plain /bin/sh it doesn’t work on all systems (for example Ubuntu 10.04).
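For example, on Ubuntu /bin/sh is dash, which has no array syntax, so a line like the script’s CONFTABLES assignment aborts (a sketch of the failure mode):

#!/bin/sh
# under dash this line fails with:
#   Syntax error: "(" unexpected
CONFTABLES=( actions applications )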
I vote for this change.
Ricardo,
The table “actions” is not a data table, it’s a configuration table! I think you have to fix the script.
It would be better to try to find the binaries instead of “hardcoding” their location, IMO.
It would be better to use the option “--no-data” instead of “-d”, because it’s clearer and better known, IMO.
—
zalex
Zalex,
thanks for your suggestion.
I’ll put this script and others in a public repository like GitHub or Launchpad to make it easier for people to contribute.
Now in GitHub – https://github.com/xsbr/zabbixzone/blob/master/zabbix-mysql-backupconf.sh
I used this script and it works very well.
Thanks!
This approach has a problem:
mysqldump takes a read lock on the tables, so dumping the large tables is nearly impossible without stopping the Zabbix services.
I configured my MySQL server with master-slave replication, so I can stop the slave, dump the entire DB, and start the slave again without stopping the Zabbix services.
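In outline it looks something like this (a sketch; the details depend on your replication setup and credentials, which could also come from ~/.my.cnf):

# on the slave: pause replication, take a consistent full dump, resume
mysql -u root -p -e "STOP SLAVE;"
mysqldump -u root -p --opt zabbix | gzip >/backup/zabbix-full.sql.gz
mysql -u root -p -e "START SLAVE;"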
Anyways, thanks for sharing
Hi Ricardo!
This is a nice script. Do you know if someone has tried to use it with Oracle 11gR2?
Thanks for sharing.
Araceli
Hi Araceli,
No, this script is only for MySQL.
But the simplest way to do the equivalent is to back up the schema of all tables plus the data of the tables listed in the “CONFTABLES” variable.
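With Oracle Data Pump that could look roughly like this (an untested sketch; the schema name, password, directory object, and table list are assumptions to adapt):

# structure of everything, no rows
expdp zabbix/password DIRECTORY=DATA_PUMP_DIR DUMPFILE=zbx_schema.dmp \
    CONTENT=METADATA_ONLY SCHEMAS=zabbix
# rows of the configuration tables only
expdp zabbix/password DIRECTORY=DATA_PUMP_DIR DUMPFILE=zbx_conf.dmp \
    CONTENT=DATA_ONLY TABLES=hosts,items,triggers,actions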
Great script!
I think commands like

MYSQLDUMP="`which mysqldump`"

don’t make much sense, as the “which” command just searches the $PATH, which is also what happens when you call the command directly without any path.
So you can either just write

MYSQLDUMP="mysqldump"

or completely omit the variable and use the command directly (without a path).
Couple of comments:
First, because all of the Zabbix tables are InnoDB, you should use the following options to mysqldump:
--opt --single-transaction --skip-lock-tables
instead of just --opt.
Second, you dump each table to a different file, which makes it a pain to restore an entire system. Why not have an option to dump everything into a single file? The following code incorporates these two ideas:
SINGLEFILE=yes

if [ "$SINGLEFILE" = "yes" ]; then
    DUMPFILE="${DUMPDIR}/zbx-conf-bkup-`${DATEBIN} +%Y%m%d%H%M`.sql"
    # truncate/create the single dump file
    >$DUMPFILE
fi

# CONFTABLES
for table in ${CONFTABLES[*]}; do
    [ "$SINGLEFILE" != "yes" ] && DUMPFILE="${DUMPDIR}/${table}.sql"
    echo "Backing up table ${table}"
    ${MYSQLDUMP} -R --opt --single-transaction --skip-lock-tables --extended-insert=FALSE \
        -h ${DBHOST} -u ${DBUSER} -p${DBPASS} ${DBNAME} --tables ${table} >>${DUMPFILE}
    [ "$SINGLEFILE" != "yes" ] && ${GZIP} -f ${DUMPFILE}
done

# DATATABLES
for table in ${DATATABLES[*]}; do
    [ "$SINGLEFILE" != "yes" ] && DUMPFILE="${DUMPDIR}/${table}.sql"
    echo "Backing up schema of table ${table}"
    ${MYSQLDUMP} -R --opt --single-transaction --skip-lock-tables --no-data \
        -h ${DBHOST} -u ${DBUSER} -p${DBPASS} ${DBNAME} --tables ${table} >>${DUMPFILE}
    [ "$SINGLEFILE" != "yes" ] && ${GZIP} -f ${DUMPFILE}
done

[ "$SINGLEFILE" = "yes" ] && ${GZIP} -f ${DUMPFILE}
I created a new version for Zabbix 1.9.x / 2.0.0 including Jonathan Bayer’s suggestions:
https://github.com/maxhq/zabbixzone/blob/patch-1/zabbix-mysql-backupconf.sh
Very nice script.
Thanks
—
Anacondaweb.com
Hello,
I’m new to this and want to back up the complete Zabbix configuration; I think this script does that, but… after running it I have a file. How do I restore this file into MySQL?
Thanks
Hi!
After running this script I have many files (.sql.gz). How do I restore all of them into MySQL?
Thanks!
Hi there!
Your script worked perfectly, thanks for all the work and updates. But after the schema change that came with the migration to Zabbix 2.0, some changes are required: some tables are gone, some were added.
Are you already working on this, can I help?
Thanks!
Tobias
Ehh, never mind… just read the comments properly – disregard my comment 😉
Hy,
how is it possible to restore all these files again?
br
Daniel
Daniel,
you can restore all tables, but it will only restore the configuration, not history data.
Concatenate all files into one:
zcat *.sql.gz >alltables.sql
Restore it:
mysql -u root -p zabbix <alltables.sql
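Or, if you prefer, restore the per-table files one by one in a loop (a sketch; the timestamp directory is an example, and the inline password just avoids a prompt per file – ~/.my.cnf is cleaner):

cd /var/lib/zabbix/backupconf/201111051200   # example dump directory
for f in *.sql.gz; do
    zcat "$f" | mysql -u root -p"YOURPASSWORD" zabbix
done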
Hi, I tested this out and everything appears to restore OK, but then I get DB errors when trying to do anything – new host, delete, clone, etc. Any ideas please?
Errors e.g. Error in query [DELETE FROM hosts WHERE (hostid IN ('10131'))] [Lock wait timeout exceeded; try restarting transaction]
SQL statement execution has failed "DELETE FROM hosts WHERE (hostid IN ('10131'))"
There is some process locking that row in the hosts table. Try ‘SHOW FULL PROCESSLIST’.
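For example (the Id value here is made up; only kill a connection if you know it is safe to do so):

mysql -u root -p -e "SHOW FULL PROCESSLIST;"   # find the blocking connection
mysql -u root -p -e "KILL 12345;"              # 12345 = Id column from the output above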