We are running an Aurora Global Cluster in two regions, and have a set of ProxySQL instances configured as a cluster in each region.
Over the last 30 days or so, the SQLite3 database has grown to consume a large amount of the available memory on each of the instances in the secondary region of the Aurora Global Cluster. The ProxySQL instances in the primary region are not affected.
I have narrowed it down to the contents of a single table that is growing rapidly.
mysql> select hostname, count(*) from mysql_server_aws_aurora_failovers group by hostname;
+--------------------------+----------+
| hostname                 | count(*) |
+--------------------------+----------+
| aurora_cluster_1_primary |   839764 |
| aurora_cluster_2_primary |   839754 |
| aurora_cluster_3_primary |   839777 |
| aurora_cluster_4_primary |   839797 |
+--------------------------+----------+
4 rows in set (1.53 sec)
Every time a ProxySQL instance in the secondary region checks the status of an Aurora cluster's primary instance, the check fails because that instance is in another region, and an entry is recorded in this table.
To free the memory being used by ProxySQL, the contents of this table need to be cleared and the space reclaimed.
A regular DELETE statement clears the contents of the mysql_server_aws_aurora_failovers table, but even that does not release the memory on its own. The command VACUUM monitor; must then be run against the ProxySQL database to reclaim the space that has been consumed.
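The SQLite behaviour behind this can be seen with a small stand-alone sketch (the table and file names below are illustrative, not ProxySQL's actual schema): a plain DELETE only marks pages as free inside the database file, and only VACUUM shrinks the file itself.

```python
# Demonstrates that in SQLite a DELETE leaves the database file the same
# size, while VACUUM rebuilds the file and returns the space to the OS.
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "monitor_demo.db")
conn = sqlite3.connect(path)
conn.execute("CREATE TABLE failovers (hostname TEXT, detail TEXT)")
conn.executemany(
    "INSERT INTO failovers VALUES (?, ?)",
    [("aurora_cluster_1_primary", "x" * 100) for _ in range(10_000)],
)
conn.commit()
size_full = os.path.getsize(path)

conn.execute("DELETE FROM failovers")       # rows are gone...
conn.commit()
size_after_delete = os.path.getsize(path)   # ...but the file is just as big

conn.execute("VACUUM")                      # rebuild the file, freeing the space
size_after_vacuum = os.path.getsize(path)

print(size_full, size_after_delete, size_after_vacuum)
```

Running this shows the file size unchanged after the DELETE and sharply reduced only after the VACUUM, which matches what we observe with ProxySQL's monitor database.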
There is no ProxySQL parameter to automatically vacuum the free space in the monitor schema, but there is a variable to automatically clean the stats schema.
It would be really useful to have an automated way to maintain this log table via a global variable, or at least a global variable that automatically vacuums the monitor schema in the same way as the stats schema; the table contents could then be maintained by a script according to our retention requirements.
Or is there another way to maintain this table, or to reduce the amount of data that is logged?
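In the meantime, a scheduled job can run the cleanup externally. Below is a minimal sketch, not a documented ProxySQL feature: it just generates the two admin statements described above so they can be piped into a mysql client pointed at the ProxySQL admin interface (default 127.0.0.1:6032).

```python
# Hypothetical helper for a cron-driven cleanup of the monitor schema.
# It only builds the SQL text; how it is delivered to ProxySQL's admin
# interface (host, port, credentials) is deployment-specific.

def cleanup_script(schema="monitor", table="mysql_server_aws_aurora_failovers"):
    """Return the SQL that clears the failover log and reclaims its space."""
    return "\n".join([
        f"DELETE FROM {schema}.{table};",
        f"VACUUM {schema};",
    ])

if __name__ == "__main__":
    # e.g. pipe the output into: mysql -h 127.0.0.1 -P 6032 -u admin -p
    print(cleanup_script())
```

A variant could add a WHERE clause to keep recent rows, but since the column layout of this table isn't shown above, the sketch simply clears it entirely.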
Regards,
Dave