Most of these tips are useful for many applications, but I’ll keep the focus on Zabbix.
The database is the main bottleneck in Zabbix. Try to use a dedicated server for MySQL and make sure it has plenty of resources (CPU, memory and fast disks).
These are the specs for an environment handling 3000 new values per second:
Dell PowerEdge R610
CPU: Intel Xeon L5520 2.27GHz (16 cores)
Memory: 24GB RAM
Disks: 6x SAS 10k in hardware RAID 10
innodb_file_per_table=1
By default, InnoDB stores all tables inside a single shared datafile. With this option, each new table gets its own datafile, so after the change you’ll need to recreate the existing tables (a sketch is shown after the references below).
This opens up possibilities such as placing tables on different filesystems and making backups more consistent.
Peter Zaitsev – http://www.mysqlperformanceblog.com/2007/11/01/innodb-performance-optimization-basics/
innodb_file_per_table – If you do not have too many tables use this option, so you will not have uncontrolled innodb main tablespace growth which you can’t reclaim.
Tristan – cPanel Staff – http://forums.cpanel.net/f43/innodb_file_per_table-converting-per-table-data-innodb-167942.html
Issue with shared InnoDB /var/lib/mysql/ibdata1 storage
InnoDB tables currently store data and indexes into a shared tablespace (/var/lib/mysql/ibdata1). Due to the shared tablespace, data corruption for one InnoDB table can result in MySQL failing to start up on the entire machine. Repairing InnoDB corruption can be extremely difficult to perform and can result in data loss for tables that were not corrupted originally during that repair process.
Some discussions about this:
http://dom.as/2009/05/21/innodb-tablespace/
http://code.openark.org/blog/mysql/reasons-to-use-innodb_file_per_table
Personally, I once lost all my data because the ibdata file was corrupted (all data inside one file).
Ref.: http://dev.mysql.com/doc/refman/5.5/en/innodb-multiple-tablespaces.html
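To visualize the change: here is a rough sketch, assuming a standard Zabbix schema, of how existing tables can be rebuilt so each one moves into its own datafile once innodb_file_per_table is enabled. The table list is just an example, and each rebuild locks the table and can take a long time on big tables:

-- Rebuilding a table moves it into its own .ibd datafile,
-- but only after innodb_file_per_table = 1 is active
ALTER TABLE history ENGINE=InnoDB;
ALTER TABLE history_uint ENGINE=InnoDB;
ALTER TABLE trends ENGINE=InnoDB;
ALTER TABLE trends_uint ENGINE=InnoDB;

Keep in mind that the shared ibdata1 file does not shrink by itself; reclaiming that space usually requires a full dump and restore.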
Percona Server is a modified version of MySQL Community Edition.
Some benchmarks show Percona’s performance advantages:
http://www.percona.com/software/percona-server/benchmarks/
The Housekeeper reduces MySQL performance (see History Tables – Housekeeper), so a simple alternative is to use MySQL’s native partitioning feature.
There is another article on this blog about it: Partitioning Tables.
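Just to illustrate the idea (the article above covers the full procedure), here is a sketch of range partitioning for the Zabbix history table, assuming it has no unique keys that exclude the clock column; the partition names and dates are examples only:

-- Split history into time-based partitions using the clock column (Unix timestamps)
ALTER TABLE history PARTITION BY RANGE (clock) (
    PARTITION p2013_06 VALUES LESS THAN (UNIX_TIMESTAMP('2013-07-01 00:00:00')),
    PARTITION p2013_07 VALUES LESS THAN (UNIX_TIMESTAMP('2013-08-01 00:00:00')),
    PARTITION pmax VALUES LESS THAN MAXVALUE
);

-- Removing old data becomes a fast metadata operation instead of huge DELETEs
ALTER TABLE history DROP PARTITION p2013_06;

With something like this in place, the Housekeeper no longer needs to delete history rows one by one.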
Using memory instead of local disks allows MySQL to create temporary tables much faster.
First, create the mountpoint:
mkdir /tmp/mysqltmp
Add this line to your /etc/fstab:
tmpfs /tmp/mysqltmp tmpfs rw,uid=mysql,gid=mysql,size=1G,nr_inodes=10k,mode=0700 0 0
Make sure to adjust the size parameter; as a reference, use about 8-10% of physical memory.
Finally, you need to define this path in /etc/my.cnf and restart MySQL:
tmpdir = /tmp/mysqltmp
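After the restart, a quick sanity check from the MySQL client confirms the new path and shows how many temporary tables are spilling out of memory into tmpdir (just a sketch, nothing Zabbix-specific):

-- Confirm the temporary directory in use
SHOW GLOBAL VARIABLES LIKE 'tmpdir';
-- Compare temporary tables created in tmpdir vs. in memory
SHOW GLOBAL STATUS LIKE 'Created_tmp_disk_tables';
SHOW GLOBAL STATUS LIKE 'Created_tmp_tables';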
This is one of the most important parameters in /etc/my.cnf. It defines how much memory InnoDB can use to cache data and indexes.
I recommend something like 70-80% of physical memory:
innodb_buffer_pool_size=14G
Ref.: http://www.mysqlperformanceblog.com/2007/11/03/choosing-innodb_buffer_pool_size/
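To validate the value on a running server, you can check how much of the buffer pool is actually in use and how often reads still hit the disk; a quick check from the MySQL client (just a sketch):

-- Total, free and data pages of the InnoDB buffer pool (16KB pages by default)
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_total';
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_free';
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_data';
-- Logical read requests vs. reads that had to go to disk
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read_requests';
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_reads';

If most pages remain free after days of uptime the pool is probably oversized; if Innodb_buffer_pool_reads keeps growing quickly relative to the read requests, it is too small.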
Here is a sample /etc/my.cnf for a server with 24GB of RAM:
[mysqld]

# paths
datadir = /var/lib/mysql/data
tmpdir = /tmp/mysqltmp

# network
connect_timeout = 60
wait_timeout = 28800
max_connections = 2048
max_allowed_packet = 64M
max_connect_errors = 1000

# limits
tmp_table_size = 512M
max_heap_table_size = 256M
table_cache = 512

# logs
log_error = /var/log/mysql/mysql-error.log
slow_query_log_file = /var/log/mysql/mysql-slow.log
slow_query_log = 1
long_query_time = 20

# innodb
innodb_data_home_dir = /var/lib/mysql/data
innodb_data_file_path = ibdata1:128M;ibdata2:128M:autoextend:max:4096M
innodb_file_per_table = 1
innodb_status_file = 1
innodb_additional_mem_pool_size = 128M
innodb_buffer_pool_size = 14G
innodb_flush_method = O_DIRECT
innodb_io_capacity = 2000
innodb_flush_log_at_trx_commit = 2
innodb_support_xa = 0
innodb_log_file_size = 512M
innodb_log_buffer_size = 128M

# experimental
innodb_stats_update_need_lock = 0

# other stuff
event_scheduler = 1
query_cache_type = 0
If you need them, here are some references about MySQL parameters:
http://dev.mysql.com/doc/refman/5.5/en/innodb-parameters.html
http://www.mysqlperformanceblog.com/
Thanks, this post helped me a lot.
IRC #zabbix <Richlv>: you suggest innodb_file_per_table=1 without explaining why. Technically, a single tablespace should be faster than one file per table…
You’re right, a shared tablespace could be faster, but it can also bring some problems, mainly if you have huge tables with a billion rows.
I added some explanation about my suggestion to the post.
Another thing you probably want to consider is enabling large-pages in my.cnf and configuring huge pages within the Linux OS.
Thank you very much for sharing this information and your knowledge. I took your my.cnf as a base for a new Zabbix installation.
Thanks.
Actually, we are facing an issue in our MySQL DB. The database size is growing enormously and the ibdata file is growing very fast.

a. Could the ibdata file be structured in a different way that makes it more efficient with available disk space, or at least makes it more predictable how its space requirements are going to behave during an upgrade process?

b. Is there any way to assess how much “available” space there is in an ibdata file, and to predict the space requirements of a given ibdata file during an upgrade?

c. Is it good to set the max value for the ibdata file along with autoextend?
d. Below is the partition type of the machine:

Filesystem                     Type   1K-blocks      Used  Available  Use%  Mounted on
/dev/mapper/Linux-root         ext3     6063688   1929100    3826572   34%  /
/dev/sda1                      ext3      194442     13548     170855    8%  /boot
/dev/mapper/Socrates-db        ext3    20158332  16000272    3134060   84%  /db
none                           tmpfs    1037072         0    1037072    0%  /dev/shm
/dev/mapper/Socrates-dips      ext3    66571944  15435132   47755156   25%  /export/dips
/dev/mapper/Socrates-home      ext3     9063180   4161576    4441216   49%  /export/home
/dev/mapper/Socrates-socrates  ext3    17126228   4905840   11350400   31%  /export/socrates
/dev/mapper/Linux-opt          ext3     6063688   1115964    4639708   20%  /opt
/dev/mapper/Linux-usr          ext3     6418360   3233416    2858904   54%  /usr
/dev/mapper/Linux-var          ext3     6063688   3880188    1875484   68%  /var

Thanks for the help.
Hi.
Partitioning is not a useful solution, since partitioned tables do not support the query cache:
http://dev.mysql.com/doc/refman/5.6/en/partitioning-limitations.html
(see the “Query cache not supported” item)
Please guide me to choose a proper way to offload the database server.
white_wire, I suggest you disable the query cache, because it isn’t useful for Zabbix: the cache hit ratio is very low (about 70%).
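If you want to measure the hit ratio on your own server before disabling it, a rough estimate (just a sketch) can be taken from the status counters:

-- Queries answered from the query cache vs. SELECTs actually executed
SHOW GLOBAL STATUS LIKE 'Qcache_hits';
SHOW GLOBAL STATUS LIKE 'Com_select';
-- hit ratio = Qcache_hits / (Qcache_hits + Com_select)

To disable it, keep query_cache_type = 0 in /etc/my.cnf, as in the sample configuration above.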