I get the following error:
11:04:58: ERROR 1933 ER_EXTRACTOR_EXTRACTOR_GET_LATEST_OFFSETS: Cannot get source metadata for pipeline kafka_confluent_cloud. Stderr:
2019-07-02 08:04:58.401 Waiting for next batch, 255 batches left until expiration.
2019-07-02 08:04:58.402 Batch starting with new consumer.
Are there more detailed logs that would help me understand where the problem is happening?
I have created a simple Kafka client on my laptop with the same parameters, and everything works fine.
At this time, pipelines do not accept Java-style/JAAS configs for Kafka clients, but you can still connect to Confluent Cloud using configuration meant for C clients.
Confluent Cloud certificates are issued by large CAs (DigiCert, etc.) that are installed by default in most modern operating systems. As such, ssl.ca.location is typically not required.
I have tried running the same configuration without ssl.ca.location, but I still get the same error.
Typically, Kafka clients scan the PEM files in /usr/local/etc/openssl/ to find the correct CA for each connection. However, for MemSQL pipelines this is not automatic: you still need to provide the correct ssl.ca.location config.
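As a sketch, a pipeline definition for Confluent Cloud might look like the following. The broker address, topic, table, API key/secret, and CA path are all placeholders, not values from this thread:

```sql
CREATE PIPELINE kafka_confluent_cloud AS
LOAD DATA KAFKA 'pkc-XXXXX.us-east-1.aws.confluent.cloud:9092/my_topic'
CONFIG '{
  "security.protocol": "SASL_SSL",
  "sasl.mechanism": "PLAIN",
  "sasl.username": "<CLUSTER_API_KEY>",
  "ssl.ca.location": "/var/lib/memsql/ca-bundle.pem"
}'
CREDENTIALS '{
  "sasl.password": "<CLUSTER_API_SECRET>"
}'
INTO TABLE my_table;
```

Note that the secret goes in CREDENTIALS rather than CONFIG so it is not exposed in plain-text metadata, and that the keys are librdkafka-style (C client) properties, not JAAS entries.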
If you are having trouble finding the correct CA in that folder, you can concatenate the pre-installed PEM files into a single file. Keep in mind that the file must be present at the same location on every host in your MemSQL cluster.
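For example, a minimal sketch of building such a bundle (the source directory varies by OS: /etc/ssl/certs is common on Linux, while /usr/local/etc/openssl/ is typical for Homebrew on macOS; the destination path is illustrative):

```shell
# Concatenate the system's individual CA certificates into one bundle.
cat /etc/ssl/certs/*.pem > /tmp/memsql-ca-bundle.pem

# Copy the bundle to the same path on every host in the cluster,
# then reference that path in the pipeline CONFIG as:
#   "ssl.ca.location": "/tmp/memsql-ca-bundle.pem"
```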