Hello,
Thank you for reaching out and welcome to our community!
We don’t yet expose pipeline errors and batches (i.e., anything from the information_schema.pipelines_* tables) in the exporter, but it’s on our roadmap and we plan to release it with the next major version of SingleStore.
The samples endpoint that you found exposes our mv_activities tables, which are essentially a sample of all activities running in the system along with their respective resource usage (e.g., memory, disk, and network). This includes pipeline activities.
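If you want a quick look at that data before wiring anything up, you can query the endpoint directly. A minimal sketch, assuming the exporter is listening on port 9104 and serving the samples data at a /samples path (adjust to whatever path you found):

# Fetch the raw samples output from the exporter (host, port, and path are assumptions; adjust to your setup)
curl 'http://<exporter_host>:9104/samples'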
All in all, if you are looking for pipeline batch information and other errors, you will have to wait until the next release. But if you are looking for resource data for pipeline activities, you can expose that to Prometheus as long as you are running version 7.3:
Starting in 7.3, we allow you to read samples in SingleStore’s monitoring solution as Prometheus-compatible metrics. This setting is not enabled by default because SingleStore’s monitoring solution (which uses a SingleStore DB as the source for the monitoring data) expects this data in a non-metric format.
Documentation for this is underway, but I can show you a sample prometheus.yml that demonstrates how to do it (see the example below):
The high_cardinality_metrics setting dictates whether data around query and resource usage (i.e., samples) is collected as well. As the name suggests, for clusters with large query workloads the data volume with high_cardinality_metrics=true may be large, so you should benchmark your disk usage and set a retention policy if necessary.
prometheus.yml
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: 'memsql'
    metrics_path: '/cluster-metrics'
    params:
      high_cardinality_metrics: ['true']
    static_configs:
      - targets: ['<exporter_host>:9104']
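Once Prometheus is pointed at the exporter, a quick sanity check is to request the metrics endpoint yourself with the same parameters the scrape job uses. A rough sketch, reusing the host, port, path, and parameter from the config above:

# Confirm the exporter returns Prometheus-format metrics, including the high-cardinality samples
curl 'http://<exporter_host>:9104/cluster-metrics?high_cardinality_metrics=true'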
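And if disk usage does become a concern once high-cardinality metrics are enabled, you can cap how long Prometheus keeps data when you start it. A sketch for Prometheus 2.x, where the retention window is set via a command-line flag:

# Start Prometheus with the above config and a 15-day retention window
prometheus --config.file=prometheus.yml --storage.tsdb.retention.time=15d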