Hi, the MemSQL 7 documentation has an option to deploy a 2-host, 4-node cluster, but there are a couple of things I'm missing:
I see there is one leaf that is also the Master. Can I make the other leaf the child? I don't see that role in the config.
Assuming #1 is possible, why is there no “high_availability: true”? If there are 2 leaves with a replication factor of 2 and 2 aggregators, then this should be a high-availability setup.
Thank you for using MemSQL. Could you please tell me more about your configuration? I would like to see the output of the command “memsql-admin list-nodes”.
Yes, a configuration of 2 hosts with an MA, a CA, and two leaves is supported.
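For reference, a 2-host, 4-node deployment like that can be described in a single cluster file for memsql-deploy setup-cluster. The sketch below is only from memory: the host names, password, and version are placeholders, and the exact key names should be checked against the MemSQL 7 deployment documentation.

    license: <license key>
    memsql_server_version: 7.0.16
    root_password: <secret>
    high_availability: true
    hosts:
      - hostname: host-1.example.local
        nodes:
          - role: Master
            config:
              port: 3306
          - role: Leaf
            config:
              port: 3307
      - hostname: host-2.example.local
        nodes:
          - role: Aggregator
            config:
              port: 3306
          - role: Leaf
            config:
              port: 3307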
I tried exactly the same configuration as yours, with 2 aggregators and 2 leaves. I created one additional node on each of the MA and CA hosts, added the new nodes as leaf nodes, ran REBALANCE PARTITIONS, and then removed the old leaf nodes. Now I have two hosts: one with the MA and a leaf, the other with the CA and another leaf. This is my test environment, so I didn't care much about my data. Please take good backups if you want to keep your existing data.
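In command form, the steps look roughly like the sketch below. This is only an outline: the host names, node IDs, password, and the database name mydb are placeholders, and each command should be checked against the Toolbox reference for your version before touching real data.

    # create one new node on the MA host and one on the CA host, on a free port
    memsql-admin create-node --host <ma-host> --port 3307 --password <secret>
    memsql-admin create-node --host <ca-host> --port 3307 --password <secret>

    # register the new nodes as leaves (the IDs come from memsql-admin list-nodes)
    memsql-admin add-leaf --memsql-id <new-node-id-1>
    memsql-admin add-leaf --memsql-id <new-node-id-2>

    # on the master aggregator (SQL), move data onto the new leaves
    REBALANCE PARTITIONS ON mydb;

    # once the rebalance finishes, retire and delete the old leaf nodes
    memsql-admin remove-leaf --memsql-id <old-leaf-id>
    memsql-admin delete-node --memsql-id <old-leaf-id>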
To test that HA is still there, I removed the leaf on my MA host. I can still query all the data in my database.
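If you would rather not delete anything while testing, a similar check is to stop one leaf temporarily and query through an aggregator. The database and table names below are made up for illustration:

    memsql-admin stop-node --memsql-id <leaf-id>
    mysql -h <aggregator-host> -P 3306 -u root -p -e "SELECT COUNT(*) FROM mydb.events"
    memsql-admin start-node --memsql-id <leaf-id>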
In the MemSQL Studio window, you can see the topology of your nodes.
Hi @ywang, I am trying to do this configuration and it's not working on my end. Right now I have a 4-node cluster running on 4 hosts; here is the list-nodes output:
+------------+------------+-----------------------------+------+---------------+--------------+---------+----------------+--------------------+--------------+
| MemSQL ID  | Role       | Host                        | Port | Process State | Connectable? | Version | Recovery State | Availability Group | Bind Address |
+------------+------------+-----------------------------+------+---------------+--------------+---------+----------------+--------------------+--------------+
| 3EFC99AACD | Master     | 10.1.2.5                    | 3306 | Running       | True         | 7.0.16  | Online         |                    | 0.0.0.0      |
| B79F944FDD | Aggregator | memsql-agg-01.server.local  | 3306 | Running       | True         | 7.0.16  | Online         |                    | 0.0.0.0      |
| A337CB882E | Leaf       | memsql-leaf-01.server.local | 3306 | Running       | True         | 7.0.16  | Online         | 1                  | 0.0.0.0      |
| A805572DF1 | Leaf       | memsql-leaf-02.server.local | 3306 | Running       | True         | 7.0.16  | Online         | 2                  | 0.0.0.0      |
+------------+------------+-----------------------------+------+---------------+--------------+---------+----------------+--------------------+--------------+
I want to take one of the leaves and change its role to aggregator. When I run the add-aggregator command I get this error:
stderr: Node already has role Leaf
Glad to see you here again. Have you tried these steps yet? As an example, A337CB882E on memsql-leaf-01.server.local:3306 is the one you want to turn into an aggregator.
You do need to use a different port number for the aggregator node on that host, since the original port 3306 is already used by the leaf node there. I suspect that is the reason. If it is not, can you send me the exact command you used to add the aggregator?
@ywang Yes, that might be the reason. I am running this command: memsql-admin add-aggregator --memsql-id A805572DF1
But based on what you say, do I need to create another node, or do I just run add-aggregator with a different port? The error says that this node already has a role; it doesn't say anything about the port.
Creating a node and setting its role are two different things. So yes, you do need to create a new node on a different port, then add it as a leaf or an aggregator.
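In your case that would look roughly like this. The port, password, and new node ID are placeholders, so please verify the flags against the Toolbox documentation for your version:

    # create a second node on the leaf host, on a port that is not already in use
    memsql-admin create-node --host memsql-leaf-01.server.local --port 3307 --password <secret>

    # register the new node as an aggregator (use the ID shown by memsql-admin list-nodes)
    memsql-admin add-aggregator --memsql-id <new-node-id>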
This is what I did when I tested your scenario. Let’s say I have MA, CA, LEAF1 and LEAF2 already on 4 hosts.
1. Create a new node, LEAF-on-MA, on the MA host on a different port.
2. Add the new node as a leaf.
3. Remove LEAF1 and rebalance. Now I have MA, LEAF-on-MA, CA, and LEAF2.
4. Repeat the same steps to add LEAF-on-CA. At the end I have MA and LEAF-on-MA on one host, and CA and LEAF-on-CA on the other host, with all the partitions of my database.
I am sure there are other combinations of steps that can get you to the same result.
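Whichever order you choose, it is worth confirming the final layout afterwards, for example with list-nodes and SHOW PARTITIONS. The database name mydb below is just an example:

    memsql-admin list-nodes
    # from the master aggregator, check that partitions and their replicas
    # ended up spread across the remaining leaves
    mysql -h <master-host> -P 3306 -u root -p -e "SHOW PARTITIONS ON mydb"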
OK, the server has a mount with the data directory in a different location. When I installed the node I used the --datadir argument. Do I need to specify it here as well? Wouldn't it overwrite the data directory?