I use Spark Streaming to write data into MemSQL, and as the data grows, the performance keeps getting slower (see the picture below).
By the time the processing time for one batch has grown from 4 seconds to more than 50 seconds, the table only holds about 20,000,000 rows, each with 13 fields, and total memory usage is 12 GB.
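For reference, the write path looks roughly like the sketch below. This is a minimal, simplified version: the source, the JDBC URL, the table name `events`, and the comma-delimited row format are assumptions for illustration; the real job may use the MemSQL Spark connector instead of plain JDBC.

```scala
import java.sql.DriverManager
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

// Rough sketch of the streaming write path into MemSQL (assumed setup).
object StreamToMemSQL {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("stream-to-memsql")
    val ssc  = new StreamingContext(conf, Seconds(4))      // 4-second batch interval

    val lines = ssc.socketTextStream("localhost", 9999)    // placeholder source

    lines.foreachRDD { rdd =>
      rdd.foreachPartition { rows =>
        // One connection per partition; rows are written in JDBC batches.
        val conn = DriverManager.getConnection(
          "jdbc:mysql://memsql-host:3306/db", "user", "password")
        val stmt = conn.prepareStatement(
          "INSERT INTO events VALUES (" + List.fill(13)("?").mkString(",") + ")")
        try {
          rows.foreach { line =>
            val fields = line.split(",")                   // 13 fields per row
            fields.indices.foreach(i => stmt.setString(i + 1, fields(i)))
            stmt.addBatch()
          }
          stmt.executeBatch()
        } finally {
          stmt.close()
          conn.close()
        }
      }
    }

    ssc.start()
    ssc.awaitTermination()
  }
}
```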
Any ideas on how to improve the performance?