If I have 1TB of data, how much memory do I need?
Is the data loaded into memory after startup?
If you have 1TB of data you likely want to put most of that data into columnstore tables (SingleStoreDB Cloud · SingleStore Documentation). Columnstores are disk-backed, so you don't need to worry about memory capacity for data storage, only memory capacity for running queries. This depends on your use case, of course, but I would start there.
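As a minimal sketch of what that could look like (the table and column names here are hypothetical, and newer SingleStore versions may already default to columnstore):

```sql
-- Hypothetical example: a disk-backed columnstore table keyed on the timestamp column.
CREATE TABLE events (
  id BIGINT NOT NULL,
  ts DATETIME NOT NULL,
  payload JSON,
  KEY (ts) USING CLUSTERED COLUMNSTORE
);
```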
-Adam
Thanks, but I want to use rowstore.
- So how much memory should I prepare?
- After booting up, will all the data be loaded into memory in advance?
- It depends on the number of indexes you're going to need and whether you plan to run with redundancy 1 or 2 (with redundancy 2 we keep a second copy of the table in memory). Creating a lot of indexes on your tables can cause their in-memory usage to be larger than their on-disk usage. To give you an idea, each index takes up roughly 40 bytes of memory per row (no matter the number of columns in the table). One thing you can do to get an estimate is load part of your data into a small cluster and check the memory use (information_schema.table_statistics can show you how much memory each table is using; see the query sketch below this list).
- On startup, all data for rowstore tables is loaded into memory
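For a rough back-of-envelope figure using that 40-bytes-per-row estimate: 1 billion rows with 3 indexes would need about 1e9 × 3 × 40 bytes ≈ 120 GB for index overhead alone, on top of the row data itself, and roughly double that with redundancy 2. Once some sample data is loaded, a query along these lines can report per-table memory use (the memory_use column is from information_schema.table_statistics; verify the exact column names against your SingleStore version):

```sql
-- Sketch: sum in-memory usage per table across all partitions.
SELECT
  database_name,
  table_name,
  SUM(memory_use) / 1024 / 1024 AS memory_mb
FROM information_schema.table_statistics
GROUP BY database_name, table_name
ORDER BY memory_mb DESC;
```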
-Adam
Thanks for your reply, I got it.