Can SingleStore work with 100 columns and 20 million rows?

The challenge I have is this: my dataset is the result of joining 35+ tables into one SQL view, which is run overnight and dumps its result into a table. The resulting table is in MS SQL with 100 columns and approaching 20 million rows (16 GB in size), and it grows by 1.5 million rows each month. All 100 columns are needed for reporting purposes. All of the data is pulled into Tibco Spotfire, which takes around 45 minutes to load. Currently everything is running on-prem.

Can the free version of SingleStore help me with faster load times? Can it cut that 45-minute load time down?

I am planning to spin up the free version of SingleStore on-prem. Will 4 credits be enough to achieve fast load times for data analytics? I did try the Docker version, but the performance with Spotfire wasn't impressive. Then again, that was the Docker version, so it may not reflect true performance.

I plan on using a setup like this:
1 master node, 32 GB
3 leaf nodes, 32 GB each

Any help is appreciated. The main question is: will data load faster in SingleStore than in the current MS SQL setup?

Yes, SingleStore should be able to load that dataset easily, and faster than MS SQL, because it's a distributed system with a parallel loader (PIPELINES).

If the dataset is broken into several files and you use a pipeline to load them, you should get pretty fast results. Aim for at least one file per leaf core in a source folder, then create and start a pipeline pointed at that folder; a sketch follows.
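For example, here is a minimal sketch of a filesystem pipeline. The pipeline name, folder path, file pattern, table name, and CSV delimiters below are all placeholders; match them to your actual export and schema. Note that with an FS pipeline the files generally need to be readable from the leaf nodes, e.g. on a shared mount.

```sql
-- Hypothetical pipeline and table names; adjust the path, pattern,
-- and delimiters to match your nightly CSV export.
CREATE PIPELINE load_report_data
AS LOAD DATA FS '/data/export/report_*.csv'
INTO TABLE report_data
FIELDS TERMINATED BY ',' ENCLOSED BY '"'
LINES TERMINATED BY '\n';

-- Start the pipeline; files are assigned to partitions and loaded in parallel.
START PIPELINE load_report_data;

-- Check per-file progress while the load runs.
SELECT file_name, file_state
FROM information_schema.PIPELINES_FILES
WHERE pipeline_name = 'load_report_data';
```

With the data split into many files, each leaf core can ingest a file concurrently, which is where the speedup over a single-threaded bulk load comes from.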
