Preventing hard failure on memory exceeded

Hi,

We have some queries that occasionally explode to use a lot of memory. This will be an ongoing risk with the app: statistics might get out of date, or we might get a surprise in our data. The problem is that when the memory limit is exceeded, rather than degrading performance, SingleStore fails the query outright, which means our processes are unstable and at risk of not working at all.

I would be much happier if the query could just take a bit longer rather than fail completely, as is the case with other database systems I’ve used. I accept that this doesn’t make sense for storage, but for query processing is there any setting for slightly less aggressive memory management?

The DBMS is using a lot of RAM, and we can’t really justify scaling up our resources to cope with what appears to be greedy memory usage.

Thanks,
Will

Spilling is implemented for hash group-by operations as of SingleStore 7.5. To enable it, set the global variable enable_spilling to ON. Most out-of-memory spilling errors come from hash group-by (roughly 90% in our experience). Spilling for the other memory-heavy operations, such as hash join and sort, is planned for the future, as is on-by-default spilling.

The spilling variables will be documented shortly in the list of engine variables.

singlestore> show variables like '%spill%';
+------------------------------------------+-----------+
| Variable_name                            | Value     |
+------------------------------------------+-----------+
| enable_spilling                          | OFF       |
| spilling_node_memory_threshold_ratio     | 0.750000  |
| spilling_query_operator_memory_threshold | 104857600 |
+------------------------------------------+-----------+
3 rows in set (0.00 sec)
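As a sketch of how you might try it out: the statements below enable spilling and loosen the thresholds. The exact semantics of the two threshold variables here are my reading of their names and defaults, not confirmed behavior — the ratio appears to be the fraction of node memory at which spilling kicks in, and the operator threshold the minimum per-operator memory (in bytes) before an operator is eligible to spill.

```sql
-- Enable hash group-by spilling cluster-wide (SingleStore 7.5+).
SET GLOBAL enable_spilling = ON;

-- Assumed meaning: start spilling once the node reaches 60% of its
-- memory limit (default 0.75 per the listing above).
SET GLOBAL spilling_node_memory_threshold_ratio = 0.6;

-- Assumed meaning: make operators using more than 50 MB eligible to
-- spill (default 104857600 bytes = 100 MB).
SET GLOBAL spilling_query_operator_memory_threshold = 52428800;
```

Re-run `SHOW VARIABLES LIKE '%spill%'` afterwards to confirm the new values took effect on the cluster.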

singlestore> select @@memsql_version;
+------------------+
| @@memsql_version |
+------------------+
| 7.5.11           |
+------------------+

Hi,

Thanks for the response. I tried moving to 7.5 and setting enable_spilling to ON, and it still failed with memory exceeded. I assume we’re just using too much memory in one of the other operations. It’s good to know spilling is planned for those in the future; I will try to keep track of this feature.

For now we’re going to try to simplify the query and do more of the work before the data reaches SingleStore.

Thanks for your help,
Will

Hi William,

If you’re willing to share an EXPLAIN or PROFILE of the kinds of queries you’re hitting high memory use on, it could help direct the feature work we do in this area in the future.

-Adam