Hi Team.
What is the best way to extract data from a MemSQL data source to a CSV file? I see there are mysqldump, SELECT * INTO OUTFILE, etc., but the data is so large that these options take a long time to complete.
mysqldump is not going to be suitable for very large exports. I'd suggest using SELECT INTO OUTFILE, but run multiple concurrent queries that partition the data by some appropriate criterion, such as a date range; see the sketch below. That way you can parallelize reading the data and writing it to the output files. Make sure the output folder is on a device (like a fast SSD or RAID array) that is fast enough to handle the write load.
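For example, a rough sketch of that pattern as a shell script. The database, table, date column, and output path below are placeholders, not anything specific to your setup, and the date arithmetic assumes MemSQL's MySQL-compatible INTERVAL syntax:

#!/bin/bash
# Export one month per concurrent SELECT ... INTO OUTFILE query.
# Each backgrounded mysql client writes its own CSV on the aggregator host.
for month in 2021-01 2021-02 2021-03 2021-04; do
  mysql -h memsqlmasterAg --database=mydb -e "
    SELECT * FROM mytable
    WHERE created_at >= '${month}-01'
      AND created_at < '${month}-01' + INTERVAL 1 MONTH
    INTO OUTFILE '/fast_ssd/export/mytable_${month}.csv'
    FIELDS TERMINATED BY ',' ENCLOSED BY '\"' LINES TERMINATED BY '\n';" &
done
wait  # block until every concurrent export has finished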
Apart from SELECT * INTO OUTFILE, is there any utility available on the MemSQL side to ease the job?
There's SELECT ... INTO S3 if you want the data to go to S3 (sketch below); we'll have more options coming for this. If you want a single CSV file, SELECT INTO OUTFILE is the only option and does exactly what you want.
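For reference, a minimal sketch of the INTO S3 form. The bucket, region, and credentials are placeholders; check the MemSQL docs for the exact options your version supports:

# Results land in the S3 bucket rather than on a local disk.
mysql -h memsqlmasterAg --database=mydb <<'SQL'
SELECT * FROM mytable
INTO S3 'my-bucket/exports/mytable'
CONFIG '{"region": "us-east-1"}'
CREDENTIALS '{"aws_access_key_id": "YOUR_KEY", "aws_secret_access_key": "YOUR_SECRET"}'
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n';
SQL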
Looks like our team has given you a good number of suggestions for this!
Regarding the SELECT ... INTO OUTFILE statement, this is specifically how you do it:
SELECT * FROM table INTO OUTFILE '/path/to/your/output.csv' FIELDS TERMINATED BY '|' ENCLOSED BY '"' LINES TERMINATED BY '\n';
Hi Sushma,
If you are connected to the master aggregator, you can use SELECT ... INTO OUTFILE and dump the file locally.
If you are on a remote server, I usually use
mysql -h memsqlmasterAg --batch --database=xxx -Ne"select * from ${tablename} where id between ${i} and ${j};" > /data_export/memsql/${tablename}/${a}${tablename}${i}_${j}.tsv
I usually use a batch script. If it's a large table, I'll do a count and iterate through the records using a batch size; usually 1M records per batch is fine. When using a batch script I also zip each file and then send it to S3 if necessary. A rough sketch of that approach is below.
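(The table name and bucket here are placeholders; the id-range batching follows the command above.)

#!/bin/bash
# Iterate over the table in 1M-row id ranges, as described above.
tablename=mytable
batch=1000000
max=$(mysql -h memsqlmasterAg --database=xxx -Ne "SELECT MAX(id) FROM ${tablename};")
i=0
while [ "$i" -le "$max" ]; do
  j=$((i + batch - 1))
  out=/data_export/memsql/${tablename}/${tablename}_${i}_${j}.tsv
  mysql -h memsqlmasterAg --batch --database=xxx \
    -Ne "select * from ${tablename} where id between ${i} and ${j};" > "$out"
  gzip "$out"                                                    # compress before shipping
  aws s3 cp "${out}.gz" "s3://my-bucket/exports/${tablename}/"   # optional upload
  i=$((j + 1))
done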