How to get column names while pulling data from Singlestore into a CSV file?

While pulling data from SingleStore to S3 using SELECT INTO S3, column names are not included in the CSV output files. What is the best way to include column names?

There’s no direct way to do this, so you’d have to handle it with application logic. If you absolutely must get column names into the output file and you want to use SELECT INTO S3, you can use a UNION ALL in your query to combine a one-row result set containing the headers with the rest of your data. Like this:

create table t(a int, b varchar(30));

insert t values(1, "red"),(2, "blue");

select * from (
  select "a", "b"
  union all
  select a, b
  from t
) x;

Output:

+------+------+
| a    | b    |
+------+------+
| a    | b    |
| 2    | blue |
| 1    | red  |
+------+------+

You can then add the SELECT ... INTO S3 syntax to a query like this.
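For example, here is a sketch of the combined query. The bucket path, region, and credential values are placeholders, and the exact CONFIG/CREDENTIALS JSON depends on your setup:

```sql
-- Header row UNION ALL'd with the data, exported to S3 as CSV.
-- 'my-bucket/t_export' and the credential values are placeholders.
SELECT * FROM (
  SELECT "a", "b"
  UNION ALL
  SELECT a, b FROM t
) x
INTO S3 'my-bucket/t_export'
CONFIG '{"region": "us-east-1"}'
CREDENTIALS '{"aws_access_key_id": "...", "aws_secret_access_key": "..."}'
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n';
```

One caveat: without an ORDER BY, SQL does not guarantee that the header row is written first.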

When I run the above query from Python using the singlestoredb connector, the column names appear in only one of the output CSV files, not all of them. Multiple files are created, named .csv, .csv_0, .csv_1, and so on.

To output to a single CSV file, you need to use aggregations, ORDER BY, or GROUP BY in your SELECT statement. When these are used, the query will run on each leaf but the result will be collected on the aggregator and output as a single file.

For example, if you add ORDER BY to your query, it will output to a single file named <bucket/target> instead of separate per-partition files named <bucket/target>_<partition ID>.
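A sketch combining both ideas, using a helper sort key so the header row is guaranteed to come first in the single output file (the bucket path and credential values are again placeholders):

```sql
-- The extra ord column forces the header row to sort first.
-- The ORDER BY also makes the result collect on the aggregator,
-- so a single file <bucket/target> is written instead of one per partition.
SELECT a, b FROM (
  SELECT 0 AS ord, "a" AS a, "b" AS b
  UNION ALL
  SELECT 1, a, b FROM t
) x
ORDER BY ord
INTO S3 'my-bucket/t_export'
CONFIG '{"region": "us-east-1"}'
CREDENTIALS '{"aws_access_key_id": "...", "aws_secret_access_key": "..."}'
FIELDS TERMINATED BY ',';
```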