3

I have upgraded from MariaDB 10.1.36 to 10.4.8 and I can see mysteriously increasing RAM usage on the new version. I also edited innodb_buffer_pool_size, and it seems to have no effect whether it's set to 15M or 4G; RAM just keeps slowly increasing. After a while it eats all the RAM, the OOM killer kills MariaDB, and the cycle repeats.

My server has 8GB of RAM and usage increases by roughly 60-150MB per day. That's not terrible on its own, but I have around 150 database servers, so it's a huge problem.

I can temporarily work around the problem by restarting MariaDB, but then the growth starts again.

Info about the database server:

  • databases: 200+
  • tables: 28,200 (141 per database)
  • average active connections: 100-200
  • size of stored data: 100-350GB
  • CPU: 4
  • RAM: 8GB

Here is my config:

server-id=101
datadir=/opt/mysql/
socket=/var/lib/mysql/mysql.sock
tmpdir=/tmp/
gtid-ignore-duplicates=True
log_bin=mysql-bin
expire_logs_days=4
wait_timeout=360
thread_cache_size=16
sql_mode="ALLOW_INVALID_DATES"
long_query_time=0.8
slow_query_log=1
slow_query_log_file=/opt/log/slow.log
log_output=TABLE
userstat = 1
user=mysql
symbolic-links=0
binlog_format=STATEMENT
default_storage_engine=InnoDB
slave_skip_errors=1062,1396,1690
innodb_autoinc_lock_mode=2
innodb_buffer_pool_size=4G
innodb_buffer_pool_instances=5
innodb_log_file_size=1G
innodb_log_buffer_size=196M
innodb_flush_log_at_trx_commit=1
innodb_thread_concurrency=24
innodb_file_per_table
innodb_write_io_threads=24
innodb_read_io_threads=24
innodb_adaptive_flushing=1
innodb_purge_threads=5
innodb_adaptive_hash_index=64
innodb_flush_neighbors=0
innodb_flush_method=O_DIRECT
innodb_io_capacity=10000
innodb_io_capacity_max=16000
innodb_lru_scan_depth=1024
innodb_sort_buffer_size=32M
innodb_ft_cache_size=70M
innodb_ft_total_cache_size=1G
innodb_lock_wait_timeout=300
slave_parallel_threads=5
slave_parallel_mode=optimistic
slave_parallel_max_queued=10000000
log_slave_updates=on
performance_schema=on
skip-name-resolve
max_allowed_packet = 512M
query_cache_type=0
query_cache_size = 0
query_cache_limit = 1M
query_cache_min_res_unit=1K
max_connections = 1500
table_open_cache=64K
innodb_open_files=64K
table_definition_cache=64K
open_files_limit=1020000
collation-server = utf8_general_ci
character-set-server = utf8
log-error=/opt/log/error.log
pid-file=/var/run/mysqld/mysqld.pid
malloc-lib=/usr/lib64/libjemalloc.so.1

6 Answers

3

I solved it! The problem is the memory allocation library.

If you do this SQL query:

SHOW VARIABLES LIKE 'version_malloc_library';

You should get the value "jemalloc". If you get only "system", you may have problems.

To change that, edit any .conf file in this directory:

/etc/systemd/system/mariadb.service.d/

There, add this line:

Environment="LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.1"

(the library file may be in a different folder on your system)

Then restart mysqld:

service mysqld stop && systemctl daemon-reload && service mysqld start
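Put together, the drop-in could look like this (the file name is arbitrary, and the jemalloc path is an assumption that varies by distribution; find yours with `ldconfig -p | grep jemalloc`):

```ini
# /etc/systemd/system/mariadb.service.d/jemalloc.conf  (file name is arbitrary)
[Service]
# Preload jemalloc so the server uses it instead of the glibc allocator
Environment="LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.1"
```

After the daemon-reload and restart, SHOW VARIABLES LIKE 'version_malloc_library'; should report jemalloc.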

GABRIEL
  • Yes, I'm also testing this, but I'm not 100% sure it's fixed. I still see an upward trend in RAM usage, but maybe MariaDB just hasn't hit its configured limits yet. It's true that RAM usage with jemalloc grows more gently than before (when the system allocator was used). I need more time to prove it. – Jakub Barták Mar 03 '20 at 13:18
1

You got carried away in increasing values in my.cnf.

Many of the caches grow until hitting their limit, hence the memory growth you experienced.

What is the value from SHOW GLOBAL STATUS LIKE 'Max_used_connections';? Having a large max_connections accentuates several of the other values; lower it.
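As a rough back-of-envelope check (the 2 MB per-thread figure below is an illustrative assumption, not a measured value; actual per-connection cost depends on sort/join/read buffer settings), a high max_connections multiplies any per-thread allocation:

```shell
# Worst-case memory that per-thread buffers alone could claim,
# assuming (hypothetically) ~2 MB of buffers per connection.
max_connections=1500
per_thread_mb=2
worst_case_mb=$((max_connections * per_thread_mb))
echo "worst case: ${worst_case_mb} MB on top of the buffer pool"
```

With the 4G buffer pool from the question, that hypothetical worst case alone would not fit in 8GB of RAM.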

But perhaps the really bad one(s) involve table caches -- which have units of tables, not bytes. Crank these down a lot:

table_open_cache=64K
innodb_open_files=64K
table_definition_cache=64K
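For example (the values below are illustrative starting points, not tuned recommendations; note that 64K here means 65,536 tables, more than twice the 28,200 tables that actually exist):

```ini
# Sized closer to the actual workload (~28,200 tables, 100-200 active connections)
table_open_cache=10000
innodb_open_files=10000
table_definition_cache=30000
max_connections=400
```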
Rick James
0

I have exactly the same problem. Is it due to a bad configuration? Or is it a bug of the new version?

MariaDB 10.1 was updated to 10.3 just when I upgraded Debian 9 to Debian 10. I tried to solve the problem with MariaDB 10.4, but nothing changed.

I want to downgrade, but I think it's necessary to dump all databases and restore them, and that means hours without service.

I don't think Debian 10 has anything to do with the issue.

  • I've been investigating this issue for a very long time and I think it's somehow related to the table cache or open table cache. I found that if I run mysqlcheck, RAM usage increases dramatically and MariaDB never releases it. I have also had a situation where I ran mysqlcheck and MariaDB was killed by the OOM killer due to high RAM usage. – Jakub Barták Mar 02 '20 at 10:06
  • I'm also testing alternative memory allocators (jemalloc) and it's true that RAM usage behaves differently: with jemalloc, RAM is released from time to time, but I can still see a slowly increasing trend. I will post an update when I complete this test. – Jakub Barták Mar 02 '20 at 10:10
  • Some of MySQL's caches grow to their limits, but don't shrink. Hitting OOM says that you raised the limits too high. – Rick James Dec 17 '20 at 16:38
0

Please read my previous comments about alternative memory allocators...

When jemalloc is used: [graph: memory usage with jemalloc]

When the default memory allocator is used: [graph: memory usage with the system allocator]

0

Try tcmalloc:

Environment="LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libtcmalloc_minimal.so.4.5.3.1"

GABRIEL
0

TL;DR: use the MariaDB package from mariadb.org.

I had a similar issue with the default 10.5.18-MariaDB-0+deb11u1 on Debian 11. It consumed ~5 MB of RAM every hour.

This is really mysterious for two reasons:

  • We have exactly the same OS/MariaDB on many other servers and it works well there.
  • This particular machine runs a very small database (300 KB when dumped to SQL), so even if fully cached with all indexes, how much space could it need? 500 KB? 1 MB? 100 MB? But memory consumption slowly grew higher and higher every day.

I tried the advice to use jemalloc; this helped only partially (memory consumption grows more slowly, but still grows).

None of the other advice helped either. It reduces memory consumption in a "normal" MariaDB, but cannot help against memory leaks (which we probably have). It's a cure for a different kind of problem.

What really helped was upgrading MariaDB to the version from the official mariadb.org site (there are precompiled packages for all popular distributions). My MariaDB uptime is now over 2 months, and I log memory usage every hour.

My log now is:

Sun Apr 23 14:17:01 UTC 2023
     Active: active (running) since Sun 2023-04-23 13:42:29 UTC; 34min ago
     Memory: 184.4M
....
Fri May 26 10:17:01 UTC 2023
     Active: active (running) since Sun 2023-04-23 13:42:29 UTC; 1 months 2 days ago
     Memory: 235.1M
....
Fri Jun 30 10:17:01 UTC 2023
     Active: active (running) since Sun 2023-04-23 13:42:29 UTC; 2 months 6 days ago
     Memory: 236.4M

It still "eats" some memory over time, but more and more slowly (50 MB in the 1st month and 1 MB in the 2nd month, vs ~5 MB/hour with the original Debian MariaDB).
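The hourly log above can be produced with a small cron job like this (the service name and cron schedule are assumptions; the filter just keeps the Active:/Memory: lines of `systemctl status` output):

```shell
# Keep only the Active:/Memory: lines of `systemctl status` output,
# prefixed with a UTC timestamp (mirrors the log format shown above).
memlog_filter() {
  date -u
  grep -E '^[[:space:]]*(Active|Memory):'
}

# Assumed cron entry:
#   17 * * * * systemctl status mariadb | /usr/local/bin/memlog_filter >> /opt/log/mem.log

# Demonstration on captured status output:
printf '   Active: active (running)\n   Memory: 184.4M\n' | memlog_filter
```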

yaroslaff