Startup probe failed: Error 1045: Access denied for user 'root'@'localhost' (using password: YES)

I am deploying a SingleStore cluster on Kubernetes but am facing an issue.
Error:

test@cloudlyte:/mnt/home/test/singlestore-operator$ kubectl run -it --rm --image=mysql:5.7 --restart=Never mysql-client -- mysql -u admin -h 100.121.44.231 -P 3306 -p ssoperator
If you don't see a command prompt, try pressing enter.

ERROR 1045 (28000): Access denied for user 'admin'@'10.42.0.1' (using password: NO)
pod "mysql-client" deleted
pod default/mysql-client terminated (Error)
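
As a side note on the client command above: with the mysql client, a space after -p makes the next argument the database name and triggers a password prompt, which is why the error reports "using password: NO" (an empty password was sent). Passing the password immediately after -p, with no space, should send it; a sketch with <admin-password> as a placeholder:

kubectl run -it --rm --image=mysql:5.7 --restart=Never mysql-client -- \
  mysql -u admin -h 100.121.44.231 -P 3306 -p'<admin-password>' ssoperator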

Describe:

test@cloudlyte:/mnt/home/test/singlestore-operator$ kubectl describe pod/node-sdb-cluster-master-0 -n singlestore-op
Name:             node-sdb-cluster-master-0
Namespace:        singlestore-op
Priority:         0
Service Account:  default
Node:             cloudlyte/100.121.44.231
Start Time:       Wed, 17 Jul 2024 09:34:53 +0000
Labels:           app.kubernetes.io/component=master
                  app.kubernetes.io/instance=sdb-cluster
                  app.kubernetes.io/name=memsql-cluster
                  apps.kubernetes.io/pod-index=0
                  controller-revision-hash=node-sdb-cluster-master-65475b698f
                  memsql.com/role-tier=aggregator
                  memsql.com/workspace=singlestore-central
                  optional=label
                  statefulset.kubernetes.io/pod-name=node-sdb-cluster-master-0
Annotations:      hash.configmap.memsql.com/node-sdb-cluster-master: b9261c9bdea35691d3ec857493bdbc229297a815c37a546f70d27a4de4c4ded2
                  hash.configmap.memsql.com/node-sdb-cluster-master-gv: dac4f32e5d27220f88279f633118d966b28e2ae18674d60c5a8c6f62a91eaaf7
                  optional: annotation
                  prometheus.io/port: 9104
                  prometheus.io/scrape: true
Status:           Running
IP:               10.42.0.177
IPs:
  IP:           10.42.0.177
Controlled By:  StatefulSet/node-sdb-cluster-master
Containers:
  node:
    Container ID:  docker://213b9c09518512c4b4c16746b2e19a4dfd5a8e5c0122689730d422857f597c53
    Image:         singlestore/node:alma-8.5.27-6ef11d2e11
    Image ID:      docker-pullable://singlestore/node@sha256:00782f72554701d9cd0ff4d976a235074c80aae8041d99032bfa61b2f92d6494
    Port:          <none>
    Host Port:     <none>
    Command:
      /etc/memsql/scripts/startup
    State:          Running
      Started:      Wed, 17 Jul 2024 09:35:02 +0000
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     4
      memory:  16Gi
    Requests:
      cpu:      4
      memory:   16Gi
    Readiness:  exec [/etc/memsql/scripts/readiness-probe] delay=10s timeout=10s period=10s #success=1 #failure=3
    Startup:    exec [/etc/memsql/scripts/startup-probe] delay=3s timeout=300s period=3s #success=1 #failure=2147483647
    Environment:
      RELEASE_ID:
      ROOT_PASSWORD:     <set to the key 'ROOT_PASSWORD' in secret 'sdb-cluster'>  Optional: false
      PRE_START_SCRIPT:  /etc/memsql/scripts/update-config-script
      BASH_ENV:          /home/memsql/.memsqlbashenv
      MALLOC_ARENA_MAX:  4
    Mounts:
      /etc/memsql/extra from additional-files (rw)
      /etc/memsql/extra-secret from additional-secrets (rw)
      /etc/memsql/scripts from scripts (rw)
      /etc/memsql/scripts/credentials from credentials (rw)
      /etc/memsql/share from global-additional-files (rw)
      /var/lib/memsql from pv-storage (rw)
  exporter:
    Container ID:  docker://f63e21240f71d9e688a52602c685408c3217ac024becc9a9325a61478359a4ab
    Image:         singlestore/node:alma-8.5.27-6ef11d2e11
    Image ID:      docker-pullable://singlestore/node@sha256:00782f72554701d9cd0ff4d976a235074c80aae8041d99032bfa61b2f92d6494
    Port:          9104/TCP
    Host Port:     0/TCP
    Command:
      /etc/memsql/scripts/exporter-startup-script
    State:          Running
      Started:      Wed, 17 Jul 2024 09:35:02 +0000
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     100m
      memory:  200Mi
    Requests:
      cpu:     100m
      memory:  180Mi
    Environment:
      RELEASE_ID:
      DATA_SOURCE_NAME:  <set to the key 'DATA_SOURCE_NAME' in secret 'sdb-cluster'>  Optional: false
    Mounts:
      /etc/memsql/extra from additional-files (rw)
      /etc/memsql/extra-secret from additional-secrets (rw)
      /etc/memsql/scripts from scripts (rw)
      /etc/memsql/scripts/credentials from credentials (rw)
      /etc/memsql/share from global-additional-files (rw)
      /var/lib/memsql from pv-storage (rw)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       True
  ContainersReady             True
  PodScheduled                True
Volumes:
  pv-storage:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  pv-storage-node-sdb-cluster-master-0
    ReadOnly:   false
  scripts:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      node-sdb-cluster-master
    Optional:  false
  additional-files:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      sdb-cluster-additional-files
    Optional:  true
  additional-secrets:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  sdb-cluster-additional-secrets
    Optional:    true
  global-additional-files:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      global-additional-files
    Optional:  true
  credentials:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  sdb-cluster
    Optional:    true
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason                  Age   From                     Message
  ----     ------                  ----  ----                     -------
  Normal   Scheduled               11m   default-scheduler        Successfully assigned singlestore-op/node-sdb-cluster-master-0 to cloudlyte
  Normal   SuccessfulAttachVolume  11m   attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-945d8c60-c46e-4681-95f8-efa3d6b43a7e"
  Normal   Pulled                  11m   kubelet                  Container image "singlestore/node:alma-8.5.27-6ef11d2e11" already present on machine
  Normal   Created                 11m   kubelet                  Created container node
  Normal   Started                 11m   kubelet                  Started container node
  Normal   Pulled                  11m   kubelet                  Container image "singlestore/node:alma-8.5.27-6ef11d2e11" already present on machine
  Normal   Created                 11m   kubelet                  Created container exporter
  Normal   Started                 11m   kubelet                  Started container exporter
  Warning  Unhealthy               10m   kubelet                  Startup probe failed: No valid nodes to choose from
[2024-07-17 09:35:05 startup-probe] Aborting due to query failure: 'SHOW DATABASES EXTENDED'
  Warning  Unhealthy  10m  kubelet  Startup probe failed: No valid nodes to choose from
[2024-07-17 09:35:08 startup-probe] Aborting due to query failure: 'SHOW DATABASES EXTENDED'
  Warning  Unhealthy  10m  kubelet  Startup probe failed: Error 1045: Access denied for user 'root'@'localhost' (using password: YES)
[2024-07-17 09:35:14 startup-probe] Aborting due to query failure: 'SHOW DATABASES EXTENDED'

Logs:

test@cloudlyte:/mnt/home/test/singlestore-operator$ kubectl logs node-sdb-cluster-master-0 -n singlestore-op
Defaulted container "node" out of: node, exporter
WARNING: define MAXIMUM_MEMORY to set the maximum_memory setting in the SingleStore DB node
memsqlctl will perform the following actions:
  · Update configuration setting on node with node ID 655CF6610116BB1D77BEEF85039D89DA30B8EA9A on port 3306
    - Update node config file with setting java_pipelines_java11_path=/usr/lib/jvm/java-11-openjdk-11.0.23.0.9-3.el8.x86_64/bin/java

Would you like to continue? [Y/n]:
Automatically selected yes, non-interactive mode enabled

Updating node config file for node with node ID 655CF6610116BB1D77BEEF85039D89DA30B8EA9A
✓ Updated node config file for node with node ID 655CF6610116BB1D77BEEF85039D89DA30B8EA9A
Running pre-start script: /etc/memsql/scripts/update-config-script
memsqlctl will perform the following actions:
  · Update configuration setting on node with node ID 655CF6610116BB1D77BEEF85039D89DA30B8EA9A on port 3306
    - Update node config file with setting unmanaged_cluster=1

Would you like to continue? [Y/n]:
Automatically selected yes, non-interactive mode enabled

Updating node config file for node with node ID 655CF6610116BB1D77BEEF85039D89DA30B8EA9A
✓ Updated node config file for node with node ID 655CF6610116BB1D77BEEF85039D89DA30B8EA9A
2024-07-17 09:35:04.969   INFO: Thread -1 (ntid 82, conn id -1): memsqld_main: ./memsqld: initializing
2024-07-17 09:35:05.078   INFO: Thread -1 (ntid 82, conn id -1): memsqld_main: ./memsqld: initializing
No valid nodes to choose from
[2024-07-17 09:35:05 startup-probe] Aborting due to query failure: 'SHOW DATABASES EXTENDED'
memsqlctl will perform the following actions:
  · Update configuration setting on node with node ID 655CF6610116BB1D77BEEF85039D89DA30B8EA9A on port 3306
    - Update node config file with setting maximum_memory=14745

Would you like to continue? [Y/n]:
Automatically selected yes, non-interactive mode enabled

Updating node config file for node with node ID 655CF6610116BB1D77BEEF85039D89DA30B8EA9A
✓ Updated node config file for node with node ID 655CF6610116BB1D77BEEF85039D89DA30B8EA9A
memsqlctl will perform the following actions:
  · Update configuration setting on node with node ID 655CF6610116BB1D77BEEF85039D89DA30B8EA9A on port 3306
    - Update node config file with setting minimal_disk_space=5120

Would you like to continue? [Y/n]:
Automatically selected yes, non-interactive mode enabled

Updating node config file for node with node ID 655CF6610116BB1D77BEEF85039D89DA30B8EA9A
✓ Updated node config file for node with node ID 655CF6610116BB1D77BEEF85039D89DA30B8EA9A
memsqlctl will perform the following actions:
  · Update configuration setting on node with node ID 655CF6610116BB1D77BEEF85039D89DA30B8EA9A on port 3306
    - Update node config file with setting tls_version=TLSv1.2

Would you like to continue? [Y/n]:
Automatically selected yes, non-interactive mode enabled

Updating node config file for node with node ID 655CF6610116BB1D77BEEF85039D89DA30B8EA9A
No valid nodes to choose from
[2024-07-17 09:35:08 startup-probe] Aborting due to query failure: 'SHOW DATABASES EXTENDED'
✓ Updated node config file for node with node ID 655CF6610116BB1D77BEEF85039D89DA30B8EA9A
[memsqld_safe] 2024/07/17 09:35:08 Running command #1 `/opt/memsql-server-8.5.27-6ef11d2e11/memsqld --defaults-file /var/lib/memsql/instance/memsql.cnf --user 999`
2024-07-17 09:35:08.683   INFO: Thread -1 (ntid 249, conn id -1): memsqld_main: ./memsqld: initializing
2024-07-17 09:35:08.794   INFO: Thread -1 (ntid 249, conn id -1): memsqld_main: ./memsqld: initializing
2024-07-17 09:35:09.045   INFO: Thread 115121 (ntid 249, conn id -1): SetEffectiveUser: Skipping setuid because we are already user '999' (uid 999, gid 998)
2024-07-17 09:35:09.049   WARN: Thread 115121 (ntid 249, conn id -1): SetupDefaultBlobCacheSize: Low total disk space (100220 MB)! Setting @@maximum_blob_cache_size_mb to 40960 MB
2024-07-17 09:35:09.377   INFO: Thread 115121 (ntid 284, conn id -1): CommandLoop: Entering command loop
2024-07-17 09:35:10.632   INFO: Thread 115121 (ntid 249, conn id -1): InitializeOpenSSL: Initializing OpenSSL 3.0.7 1 Nov 2022
2024-07-17 09:35:10.637   INFO: Thread 115121 (ntid 249, conn id -1): SetSSLCiphers: Supported SSL ciphers: ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:DHE-RSA-AES256-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES256-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES128-SHA:RSA-PSK-AES256-GCM-SHA384:DHE-PSK-AES256-GCM-SHA384:RSA-PSK-CHACHA20-POLY1305:DHE-PSK-CHACHA20-POLY1305:ECDHE-PSK-CHACHA20-POLY1305:AES256-GCM-SHA384:PSK-AES256-GCM-SHA384:PSK-CHACHA20-POLY1305:RSA-PSK-AES128-GCM-SHA256:DHE-PSK-AES128-GCM-SHA256:AES128-GCM-SHA256:PSK-AES128-GCM-SHA256:AES256-SHA256:AES128-SHA256:ECDHE-PSK-AES256-CBC-SHA384:ECDHE-PSK-AES256-CBC-SHA:RSA-PSK-AES256-CBC-SHA384:DHE-PSK-AES256-CBC-SHA384:RSA-PSK-AES256-CBC-SHA:DHE-PSK-AES256-CBC-SHA:AES256-SHA:PSK-AES256-CBC-SHA384:PSK-AES256-CBC-SHA:ECDHE-PSK-AES128-CBC-SHA256:ECDHE-PSK-AES128-CBC-SHA:RSA-PSK-AES128-CBC-SHA256:DHE-PSK-AES128-CBC-SHA256:RSA-PSK-AES128-CBC-SHA:DHE-PSK-AES128-CBC-SHA:AES128-SHA:PSK-AES128-CBC-SHA256:PSK-AES128-CBC-SHA
2024-07-17 09:35:10.637   INFO: Thread 115121 (ntid 249, conn id -1): InitMemSqlEngine: SingleStoreDB version hash: 6ef11d2e11f91fc8678a63a15e8f8798f807a7f3 (Fri Jun 28 18:10:15 2024 -0700)
2024-07-17 09:35:10.637   INFO: Thread 115121 (ntid 249, conn id -1): InitMemSqlEngine: SingleStoreDB build flavor: production
Initializing OpenSSL 1.0.2zj-fips  30 Jan 2024
Initializing OpenSSL 1.0.2zj-fips  30 Jan 2024
Initializing OpenSSL 1.0.2zj-fips  30 Jan 2024
2024-07-17 09:35:10.724   WARN: Thread 115121 (ntid 249, conn id -1): InitMemSqlEngine: use_memfd_create is set to true
2024-07-17 09:35:10.747   INFO: Thread 115119 (ntid 399, conn id -1): TryIncreaseThreadPriority: Couldn't increase GC thread scheduling priority. Continuing at normal priority
2024-07-17 09:35:10.747   INFO: Thread 115118 (ntid 400, conn id -1): TryIncreaseThreadPriority: Couldn't increase GC thread scheduling priority. Continuing at normal priority
2024-07-17 09:35:10.766   WARN: Thread 115107 (ntid 411, conn id -1): ValidateAndParseAuthConfig: 2512 JWT config file path is not set
2024-07-17 09:35:10.840   INFO: Thread 115121 (ntid 249, conn id -1): memsqld_main: ./memsqld: ready for connections.
2024-07-17 09:35:10.841   INFO: Thread 115121 (ntid 249, conn id -1): memsqld_main: Version:  '8.5.27'  Socket:  '/var/lib/memsql/instance/data/memsql.sock'  Port:  '3306'
2024-07-17 09:35:10.841   INFO: Thread 115121 (ntid 249, conn id -1): memsqld_main: Flavor: 'production'
2024-07-17 09:35:10.869   INFO: Thread 115105 (ntid 429, conn id -1): TrackReport: Report 'cluster-ping' with period(21600)
2024-07-17 09:35:10.870   INFO: Thread 115105 (ntid 429, conn id -1): TrackReport: Report 'usage-telemetry' with period(86400)
2024-07-17 09:35:10.871   INFO: Thread 115121 (ntid 249, conn id -1): CreateDatabase: CREATE DATABASE `memsql` with sync durability / sync input durability, 0 partitions, 0 sub partitions, 0 logical partitions, log file size 16777216.
2024-07-17 09:35:11.495   INFO: Thread 115096 (ntid 471, conn id -1): RecoverRootFile: `memsql` log: Root file recovered with tail term 0x6, tail LSN 0x3c6.
2024-07-17 09:35:11.495   INFO: Thread 115096 (ntid 471, conn id -1): PopulateFileLists: `memsql` log: Populating files.
2024-07-17 09:35:11.499   INFO: Thread 115096 (ntid 471, conn id -1): ComputeInitialLSNs: `memsql` log: The value of 'replay past first torn page' bit is: 1.
2024-07-17 09:35:11.508   INFO: Thread 115096 (ntid 471, conn id -1): TruncateLog: `memsql` log: Log truncation to LSN 0x364 requested.
2024-07-17 09:35:11.508   INFO: Thread 115096 (ntid 471, conn id -1): ComputeInitialLSNs: `memsql` log: Initialized with tail LSN 0x364, hardened LSN 0x364, committed LSN 0x363, fsynced LSN 0x0.
2024-07-17 09:35:11.508   INFO: Replaying snapshot snapshots/memsql_snapshot_v1_0_0: Thread 115096 (ntid 471, conn id -1): ReplaySnapshotFile: Starting snapshot replay snapshots/memsql_snapshot_v1_0_0.
2024-07-17 09:35:11.508   INFO: Replaying snapshot snapshots/memsql_snapshot_v1_0_0: Thread 115096 (ntid 471, conn id -1): ReplaySnapshotFile: Completed snapshot replay.
2024-07-17 09:35:11.508   INFO: Replaying logs for db `memsql`: Thread 115096 (ntid 471, conn id -1): ReplayLogFiles: Beginning replay at LSN 0x0.
2024-07-17 09:35:11.558   INFO: Replaying logs for db `memsql`: filename `logs/memsql_log_v1_0`: offset 0x363030: Thread 115096 (ntid 471, conn id -1): FinishRecovery: Finished recovery for database `memsql`.
2024-07-17 09:35:11.768   INFO: Thread 115121 (ntid 249, conn id -1): PreTransitionToOffline: `memsql` log: Transition started at term 0x7.

Hi Shreyash,

To help you better with this, can you kindly let us know whether you are deploying SingleStore on a VM or on a managed Kubernetes service such as EKS/GKE/AKS?

Can you also send us the manifest files you used to deploy the cluster?

Additionally, please provide the YAML files below so that we can take a look.
Which version of the operator are you using?

kubectl get sc -o yaml > sc.yaml
kubectl get pv -o yaml > pv.yaml
kubectl get pods -o yaml > pods.yaml
kubectl get pvc -o yaml > pvc.yaml
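
If helpful, the operator version in use can also be read from the operator deployment's image tag, for example (deployment name taken from this thread, add -n <namespace> if needed):

kubectl get deployment sdb-operator -o jsonpath='{.spec.template.spec.containers[0].image}'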

Hi @pgaddigopula
I am using a Cloudlyte VM, and I am using a Helm chart to deploy the SingleStore operator.
Here is the Helm chart structure:
singlestore-operator/
  Chart.yaml
  values.yaml
  templates/
    sdb-operator.yaml
    sdb-rbac.yaml
    sdb-cluster-crd.yaml
singlestore-instance/
  Chart.yaml
  values.yaml
  templates/
    sdb-cluster.yaml
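
For reference, with this layout the two charts would typically be installed separately, along these lines (the release names and namespace here are assumptions, not the exact commands used):

helm install sdb-operator ./singlestore-operator -n singlestore-op --create-namespace
helm install sdb-cluster ./singlestore-instance -n singlestore-op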

values.yaml #singlestore-operator

operator:
  image:
    repository: singlestore/operator
    tag: 3.246.0-da805c0b
  replicaCount: 1

sdb-operator.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sdb-operator
  labels:
    app.kubernetes.io/component: operator
spec:
  replicas: {{ .Values.operator.replicaCount }}
  selector:
    matchLabels:
      name: sdb-operator
  template:
    metadata:
      labels:
        name: sdb-operator
    spec:
      serviceAccountName: sdb-operator
      containers:
        - name: sdb-operator
          image: {{ .Values.operator.image.repository }}:{{ .Values.operator.image.tag }}
          imagePullPolicy: Always
          args: [
            "--merge-service-annotations",
            "--fs-group-id", "5555",
            "--cluster-id", "sdb-cluster"
          ]
          env:
            - name: WATCH_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: OPERATOR_NAME
              value: "sdb-operator"

sdb-rbac.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: sdb-operator
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: sdb-operator
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - services
  - endpoints
  - persistentvolumeclaims
  - events
  - configmaps
  - secrets
  verbs:
  - '*'
- apiGroups:
  - policy
  resources:
  - poddisruptionbudgets
  verbs:
  - '*'
- apiGroups:
  - batch
  resources:
  - cronjobs
  verbs:
  - '*'
- apiGroups:
  - ""
  resources:
  - namespaces
  verbs:
  - get
- apiGroups:
  - apps
  - extensions
  resources:
  - deployments
  - daemonsets
  - replicasets
  - statefulsets
  - statefulsets/status
  verbs:
  - '*'
- apiGroups:
  - memsql.com
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - networking.k8s.io
  resources:
  - networkpolicies
  verbs:
  - '*'
- apiGroups:
  - ""
  resources:
  - serviceaccounts
  verbs:
  - get
  - watch
  - list
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: sdb-operator
subjects:
- kind: ServiceAccount
  name: sdb-operator
roleRef:
  kind: Role
  name: sdb-operator
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: backup-cluster-reader
rules:
- apiGroups: ["migrations.kubevirt.io", "storage.k8s.io"]
  resources: ["migrationpolicies", "storageclasses"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: backup-cluster-reader-binding
subjects:
- kind: ServiceAccount
  name: backup
  namespace: singlestore
roleRef:
  kind: ClusterRole
  name: backup-cluster-reader
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: backup
  namespace: singlestore
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: backup-reader
  namespace: singlestore
rules:
- apiGroups: ["", "apps", "batch", "autoscaling", "cdi.kubevirt.io", "clone.kubevirt.io", "export.kubevirt.io", "instancetype.kubevirt.io", "kubevirt.io", "migrations.kubevirt.io", "pool.kubevirt.io", "snapshot.kubevirt.io", "vertica.com", "memsql.com", "storage.k8s.io"]
  resources: ["*"]
  verbs: ["get","list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: backup-reader-binding
  namespace: singlestore
subjects:
- kind: ServiceAccount
  name: backup
  namespace: singlestore
roleRef:
  kind: Role
  name: backup-reader
  apiGroup: rbac.authorization.k8s.io

sdb-cluster-crd.yaml
[sdb-cluster-crd.yaml · SingleStore Documentation]

values.yaml #singlestore-instance

cluster:
  license: <Standard-License>
  adminHashedPassword: "*4ACFE3202A5FF5CF467898FC58AAB1D615029441"
  nodeImage:
    repository: singlestore/node
    tag: alma-8.5.27-6ef11d2e11
  redundancyLevel: 1
  aggregatorSpec:
    count: 2
    height: 0.5
    storageGB: 100
    storageClass: rook-ceph-block
  leafSpec:
    count: 1
    height: 0.5
    storageGB: 200
    storageClass: rook-ceph-block
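
A note on adminHashedPassword: it appears to be the MySQL-style password hash, i.e. '*' followed by the uppercase hex of SHA1(SHA1(password)). Assuming that format, a value can be generated from a plain-text password like this (<admin-password> is a placeholder):

# assumes the MySQL-style double-SHA1 format; <admin-password> is a placeholder
echo -n '<admin-password>' | openssl sha1 -binary | openssl sha1 | awk '{print "*" toupper($NF)}'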

sdb-cluster.yaml

apiVersion: memsql.com/v1alpha1
kind: MemsqlCluster
metadata:
  name: sdb-cluster
spec:
  license: {{ .Values.cluster.license }}
  adminHashedPassword: "{{ .Values.cluster.adminHashedPassword }}"
  nodeImage:
    repository: {{ .Values.cluster.nodeImage.repository }}
    tag: {{ .Values.cluster.nodeImage.tag }}
  redundancyLevel: {{ .Values.cluster.redundancyLevel }}
  serviceSpec:
    type: NodePort
    objectMetaOverrides:
      labels:
        custom: label
      annotations:
        custom: annotations
  aggregatorSpec:
    count: {{ .Values.cluster.aggregatorSpec.count }}
    height: {{ .Values.cluster.aggregatorSpec.height }}
    storageGB: {{ .Values.cluster.aggregatorSpec.storageGB }}
    storageClass: {{ .Values.cluster.aggregatorSpec.storageClass }}
    objectMetaOverrides:
      annotations:
        optional: annotation
      labels:
        optional: label
  leafSpec:
    count: {{ .Values.cluster.leafSpec.count }}
    height: {{ .Values.cluster.leafSpec.height }}
    storageGB: {{ .Values.cluster.leafSpec.storageGB }}
    storageClass: {{ .Values.cluster.leafSpec.storageClass }}
    objectMetaOverrides:
      annotations:
        optional: annotation
      labels:
        optional: label

After deploying it with the helm command, the master, leaf, and aggregator pods are not fully ready:

NAME                                READY   STATUS      RESTARTS   AGE
pod/node-sdb-cluster-aggregator-0   0/1     Running     0          3h17m
pod/node-sdb-cluster-leaf-ag1-0     0/1     Running     0          3h17m
pod/node-sdb-cluster-master-0       1/2     Running     0          3h17m
pod/sdb-operator-7796c49b97-trvkr   1/1     Running     0          3h17m

NAME                          TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)             AGE
service/sdb-operator          ClusterIP   10.43.251.74   <none>        9090/TCP,6060/TCP   3h17m
service/svc-sdb-cluster       ClusterIP   None           <none>        3306/TCP            3h17m
service/svc-sdb-cluster-ddl   NodePort    10.43.4.100    <none>        3306:31611/TCP      3h17m
service/svc-sdb-cluster-dml   NodePort    10.43.163.72   <none>        3306:30778/TCP      3h17m

NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/sdb-operator   1/1     1            1           3h17m

NAME                                      DESIRED   CURRENT   READY   AGE
replicaset.apps/sdb-operator-7796c49b97   1         1         1       3h17m

NAME                                           READY   AGE
statefulset.apps/node-sdb-cluster-aggregator   0/1     3h17m
statefulset.apps/node-sdb-cluster-leaf-ag1     0/1     3h17m
statefulset.apps/node-sdb-cluster-master       0/1     3h17m

The storage class you are using here is rook-ceph-block. Since this is a VM, can you try creating a standard storage class like the one below, update your YAML files accordingly, and let us know if it works?

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: kubernetes.io/no-provisioner
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer
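
Note that kubernetes.io/no-provisioner does not provision volumes dynamically, so each PVC would also need a statically created PersistentVolume bound to the standard class. A minimal local-volume sketch (the path is hypothetical; the node name is taken from this thread):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: sdb-master-pv
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: standard
  local:
    path: /mnt/disks/sdb-master   # hypothetical directory on the node
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - cloudlyte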

Also, we don't recommend deploying SingleStore using Helm.

What is the output of kubectl get pv and kubectl get pvc here?

Please also send the kubectl describe pod output for all the pods, along with the output of kubectl describe pv and kubectl describe pvc.

@pgaddigopula
I don't have permission to use or create another storage class.

test@cloudlyte:/mnt/home/test/singlestore-operator-helm$ kubectl get pv -n singlestore
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS        CLAIM                                                            STORAGECLASS      VOLUMEATTRIBUTESCLASS   REASON   AGE
pvc-08d9d08e-d34a-469c-9c25-13b49262d88b   256Gi      RWO            Delete           Terminating   vm-registry/prime-c0329d46-2e5e-47b1-87af-9f83df3cffc9           rook-ceph-block   <unset>                          19d
pvc-32600609-f1db-4215-87e8-3a2df6866140   100Gi      RWO            Delete           Terminating   vm-registry/vm1                                                  rook-ceph-block   <unset>                          24d
pvc-7031ee4e-a820-434e-8c61-835e02e66909   100Gi      RWO            Delete           Bound         singlestore/pv-storage-node-sdb-cluster-aggregator-0             rook-ceph-block   <unset>                          4h26m
pvc-b896bb32-49e6-4241-9d10-d254d86d7a66   256Gi      RWO            Delete           Terminating   vm-registry/prime-c0329d46-2e5e-47b1-87af-9f83df3cffc9-scratch   rook-ceph-block   <unset>                          19d
pvc-c2df9ee3-7e35-451b-8a8a-a785eed81675   100Gi      RWO            Delete           Bound         singlestore/pv-storage-node-sdb-cluster-master-0                 rook-ceph-block   <unset>                          4h26m
pvc-dcd2f215-7c42-41ce-bd11-f8b52f3ec531   1Gi        RWO            Delete           Terminating   default/rbd-pvc                                                  rook-ceph-block   <unset>                          35d
pvc-df6bb25d-3ef0-42e9-8650-1c5c9d198abf   200Gi      RWO            Delete           Bound         singlestore/pv-storage-node-sdb-cluster-leaf-ag1-0               rook-ceph-block   <unset>                          4h26m
pvc-f1937fbe-fea3-46f0-9ff0-81c2795931ec   100Gi      RWO            Delete           Terminating   vm-registry/demo                                                 rook-ceph-block   <unset>                          35d
test@cloudlyte:/mnt/home/test/singlestore-operator-helm$ kubectl get pvc -n singlestore
NAME                                       STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      VOLUMEATTRIBUTESCLASS   AGE
pv-storage-node-sdb-cluster-aggregator-0   Bound    pvc-7031ee4e-a820-434e-8c61-835e02e66909   100Gi      RWO            rook-ceph-block   <unset>                 4h29m
pv-storage-node-sdb-cluster-leaf-ag1-0     Bound    pvc-df6bb25d-3ef0-42e9-8650-1c5c9d198abf   200Gi      RWO            rook-ceph-block   <unset>                 4h29m
pv-storage-node-sdb-cluster-master-0       Bound    pvc-c2df9ee3-7e35-451b-8a8a-a785eed81675   100Gi      RWO            rook-ceph-block   <unset>                 4h29m

kubectl describe pod sdb-operator-7796c49b97-trvkr -n singlestore

Name:             sdb-operator-7796c49b97-trvkr
Namespace:        singlestore
Priority:         0
Service Account:  sdb-operator
Node:             cloudlyte/100.121.44.231
Start Time:       Thu, 08 Aug 2024 07:27:23 +0000
Labels:           name=sdb-operator
                  pod-template-hash=7796c49b97
Annotations:      <none>
Status:           Running
IP:               10.42.0.9
IPs:
  IP:           10.42.0.9
Controlled By:  ReplicaSet/sdb-operator-7796c49b97
Containers:
  sdb-operator:
    Container ID:  docker://3e520878435d5e5bb9f6e995713e83743906351012a83b19178c51cf9edf4eda
    Image:         singlestore/operator:3.246.0-da805c0b
    Image ID:      docker-pullable://singlestore/operator@sha256:d8643ed8bcdba692d791fe8fae1d21d95f1c9f65dc331213fc0c192a27aafc21
    Port:          <none>
    Host Port:     <none>
    Args:
      --merge-service-annotations
      --fs-group-id
      5555
      --cluster-id
      sdb-cluster
    State:          Running
      Started:      Thu, 08 Aug 2024 07:27:26 +0000
    Ready:          True
    Restart Count:  0
    Environment:
      WATCH_NAMESPACE:  singlestore (v1:metadata.namespace)
      POD_NAME:         sdb-operator-7796c49b97-trvkr (v1:metadata.name)
      OPERATOR_NAME:    sdb-operator
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-cbg6m (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       True
  ContainersReady             True
  PodScheduled                True
Volumes:
  kube-api-access-cbg6m:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:                      <none>

kubectl describe pod node-sdb-cluster-master-0 -n singlestore

Name:             node-sdb-cluster-master-0
Namespace:        singlestore
Priority:         0
Service Account:  default
Node:             cloudlyte/100.121.44.231
Start Time:       Thu, 08 Aug 2024 07:27:34 +0000
Labels:           app.kubernetes.io/component=master
                  app.kubernetes.io/instance=sdb-cluster
                  app.kubernetes.io/name=memsql-cluster
                  apps.kubernetes.io/pod-index=0
                  controller-revision-hash=node-sdb-cluster-master-65475b698f
                  memsql.com/role-tier=aggregator
                  memsql.com/workspace=singlestore-central
                  optional=label
                  statefulset.kubernetes.io/pod-name=node-sdb-cluster-master-0
Annotations:      hash.configmap.memsql.com/node-sdb-cluster-master: b9261c9bdea35691d3ec857493bdbc229297a815c37a546f70d27a4de4c4ded2
                  hash.configmap.memsql.com/node-sdb-cluster-master-gv: dac4f32e5d27220f88279f633118d966b28e2ae18674d60c5a8c6f62a91eaaf7
                  optional: annotation
                  prometheus.io/port: 9104
                  prometheus.io/scrape: true
Status:           Running
IP:               10.42.0.11
IPs:
  IP:           10.42.0.11
Controlled By:  StatefulSet/node-sdb-cluster-master
Containers:
  node:
    Container ID:  docker://b7e119b37205f6649ad469aaed4f01c6f0568f7abd0e734e39680daff58636bf
    Image:         singlestore/node:alma-8.5.27-6ef11d2e11
    Image ID:      docker-pullable://singlestore/node@sha256:00782f72554701d9cd0ff4d976a235074c80aae8041d99032bfa61b2f92d6494
    Port:          <none>
    Host Port:     <none>
    Command:
      /etc/memsql/scripts/startup
    State:          Running
      Started:      Thu, 08 Aug 2024 07:27:43 +0000
    Ready:          False
    Restart Count:  0
    Limits:
      cpu:     4
      memory:  16Gi
    Requests:
      cpu:      4
      memory:   16Gi
    Readiness:  exec [/etc/memsql/scripts/readiness-probe] delay=10s timeout=10s period=10s #success=1 #failure=3
    Startup:    exec [/etc/memsql/scripts/startup-probe] delay=3s timeout=300s period=3s #success=1 #failure=2147483647
    Environment:
      RELEASE_ID:
      ROOT_PASSWORD:     <set to the key 'ROOT_PASSWORD' in secret 'sdb-cluster'>  Optional: false
      PRE_START_SCRIPT:  /etc/memsql/scripts/update-config-script
      BASH_ENV:          /home/memsql/.memsqlbashenv
      MALLOC_ARENA_MAX:  4
    Mounts:
      /etc/memsql/extra from additional-files (rw)
      /etc/memsql/extra-secret from additional-secrets (rw)
      /etc/memsql/scripts from scripts (rw)
      /etc/memsql/scripts/credentials from credentials (rw)
      /etc/memsql/share from global-additional-files (rw)
      /var/lib/memsql from pv-storage (rw)
  exporter:
    Container ID:  docker://950bc0c000c6dd77e72a5a315883f8cc6517bf5b47a6f21acf77ab17028d4adc
    Image:         singlestore/node:alma-8.5.27-6ef11d2e11
    Image ID:      docker-pullable://singlestore/node@sha256:00782f72554701d9cd0ff4d976a235074c80aae8041d99032bfa61b2f92d6494
    Port:          9104/TCP
    Host Port:     0/TCP
    Command:
      /etc/memsql/scripts/exporter-startup-script
    State:          Running
      Started:      Thu, 08 Aug 2024 07:27:44 +0000
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     100m
      memory:  200Mi
    Requests:
      cpu:     100m
      memory:  180Mi
    Environment:
      RELEASE_ID:
      DATA_SOURCE_NAME:  <set to the key 'DATA_SOURCE_NAME' in secret 'sdb-cluster'>  Optional: false
    Mounts:
      /etc/memsql/extra from additional-files (rw)
      /etc/memsql/extra-secret from additional-secrets (rw)
      /etc/memsql/scripts from scripts (rw)
      /etc/memsql/scripts/credentials from credentials (rw)
      /etc/memsql/share from global-additional-files (rw)
      /var/lib/memsql from pv-storage (rw)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  pv-storage:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  pv-storage-node-sdb-cluster-master-0
    ReadOnly:   false
  scripts:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      node-sdb-cluster-master
    Optional:  false
  additional-files:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      sdb-cluster-additional-files
    Optional:  true
  additional-secrets:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  sdb-cluster-additional-secrets
    Optional:    true
  global-additional-files:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      global-additional-files
    Optional:  true
  credentials:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  sdb-cluster
    Optional:    true
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                       From     Message
  ----     ------     ----                      ----     -------
  Warning  Unhealthy  2m11s (x1810 over 4h31m)  kubelet  (combined from similar events): Readiness probe failed: ERROR 2277 (HY000) at line 1: This node is not part of the cluster.
[2024-08-08 11:57:53 readiness-probe] Aborting due to query failure: 'SELECT STATE FROM information_schema.aggregators WHERE NODE_ID IN (SELECT NODE_ID FROM information_schema.lmv_nodes)'

kubectl describe pod node-sdb-cluster-leaf-ag1-0 -n singlestore

Name:             node-sdb-cluster-leaf-ag1-0
Namespace:        singlestore
Priority:         0
Service Account:  default
Node:             cloudlyte/100.121.44.231
Start Time:       Thu, 08 Aug 2024 07:27:38 +0000
Labels:           app.kubernetes.io/component=leaf
                  app.kubernetes.io/instance=sdb-cluster
                  app.kubernetes.io/name=memsql-cluster
                  apps.kubernetes.io/pod-index=0
                  controller-revision-hash=node-sdb-cluster-leaf-ag1-6dcdc659d
                  memsql.com/availability-group=1
                  memsql.com/role-tier=leaf
                  memsql.com/workspace=singlestore-central
                  optional=label
                  statefulset.kubernetes.io/pod-name=node-sdb-cluster-leaf-ag1-0
Annotations:      hash.configmap.memsql.com/node-sdb-cluster-leaf-ag1: 2029d99434dc0ce40fdc963db0a15f07565b98d255b824ed519fb245d3b45baf
                  hash.configmap.memsql.com/node-sdb-cluster-leaf-ag1-gv: e9618892d27a49795a9a99e10cb928a54a3f40588a8e342298c0efc1fc104f67
                  optional: annotation
Status:           Running
IP:               10.42.0.14
IPs:
  IP:           10.42.0.14
Controlled By:  StatefulSet/node-sdb-cluster-leaf-ag1
Containers:
  node:
    Container ID:  docker://807576a9bac13a28a6ca561c779c4841588100c393b3cb43313be8f36844c950
    Image:         singlestore/node:alma-8.5.27-6ef11d2e11
    Image ID:      docker-pullable://singlestore/node@sha256:00782f72554701d9cd0ff4d976a235074c80aae8041d99032bfa61b2f92d6494
    Port:          <none>
    Host Port:     <none>
    Command:
      /etc/memsql/scripts/startup
    State:          Running
      Started:      Thu, 08 Aug 2024 07:27:44 +0000
    Ready:          False
    Restart Count:  0
    Limits:
      cpu:     4
      memory:  16Gi
    Requests:
      cpu:      4
      memory:   16Gi
    Readiness:  exec [/etc/memsql/scripts/readiness-probe] delay=10s timeout=10s period=10s #success=1 #failure=3
    Startup:    exec [/etc/memsql/scripts/startup-probe] delay=3s timeout=300s period=3s #success=1 #failure=2147483647
    Environment:
      RELEASE_ID:
      ROOT_PASSWORD:     <set to the key 'ROOT_PASSWORD' in secret 'sdb-cluster'>  Optional: false
      PRE_START_SCRIPT:  /etc/memsql/scripts/update-config-script
      BASH_ENV:          /home/memsql/.memsqlbashenv
      MALLOC_ARENA_MAX:  4
    Mounts:
      /etc/memsql/extra from additional-files (rw)
      /etc/memsql/extra-secret from additional-secrets (rw)
      /etc/memsql/scripts from scripts (rw)
      /etc/memsql/scripts/credentials from credentials (rw)
      /etc/memsql/share from global-additional-files (rw)
      /var/lib/memsql from pv-storage (rw)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  pv-storage:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  pv-storage-node-sdb-cluster-leaf-ag1-0
    ReadOnly:   false
  scripts:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      node-sdb-cluster-leaf-ag1
    Optional:  false
  additional-files:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      sdb-cluster-additional-files
    Optional:  true
  additional-secrets:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  sdb-cluster-additional-secrets
    Optional:    true
  global-additional-files:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      global-additional-files
    Optional:  true
  credentials:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  sdb-cluster
    Optional:    true
QoS Class:       Guaranteed
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                       From     Message
  ----     ------     ----                      ----     -------
  Warning  Unhealthy  4m18s (x5391 over 4h33m)  kubelet  (combined from similar events): Startup probe failed: ERROR 2277 (HY000) at line 1: This node is not part of the cluster.
[2024-08-08 11:57:47 startup-probe] Aborting due to query failure: 'SELECT HOST FROM INFORMATION_SCHEMA.AGGREGATORS WHERE ROLE = 'Leader''

kubectl describe pod node-sdb-cluster-aggregator-0 -n singlestore

Name:             node-sdb-cluster-aggregator-0
Namespace:        singlestore
Priority:         0
Service Account:  default
Node:             cloudlyte/100.121.44.231
Start Time:       Thu, 08 Aug 2024 07:27:38 +0000
Labels:           app.kubernetes.io/component=aggregator
                  app.kubernetes.io/instance=sdb-cluster
                  app.kubernetes.io/name=memsql-cluster
                  apps.kubernetes.io/pod-index=0
                  controller-revision-hash=node-sdb-cluster-aggregator-5c945fb469
                  memsql.com/role-tier=aggregator
                  memsql.com/workspace=singlestore-central
                  optional=label
                  statefulset.kubernetes.io/pod-name=node-sdb-cluster-aggregator-0
Annotations:      hash.configmap.memsql.com/node-sdb-cluster-aggregator: 2029d99434dc0ce40fdc963db0a15f07565b98d255b824ed519fb245d3b45baf
                  hash.configmap.memsql.com/node-sdb-cluster-aggregator-gv: dac4f32e5d27220f88279f633118d966b28e2ae18674d60c5a8c6f62a91eaaf7
                  optional: annotation
Status:           Running
IP:               10.42.0.15
IPs:
  IP:           10.42.0.15
Controlled By:  StatefulSet/node-sdb-cluster-aggregator
Containers:
  node:
    Container ID:  docker://2f8048d6cdaa841fdb549bd5f85d474cc52c820706d8e96b5d2284354fe13908
    Image:         singlestore/node:alma-8.5.27-6ef11d2e11
    Image ID:      docker-pullable://singlestore/node@sha256:00782f72554701d9cd0ff4d976a235074c80aae8041d99032bfa61b2f92d6494
    Port:          <none>
    Host Port:     <none>
    Command:
      /etc/memsql/scripts/startup
    State:          Running
      Started:      Thu, 08 Aug 2024 07:27:44 +0000
    Ready:          False
    Restart Count:  0
    Limits:
      cpu:     4
      memory:  16Gi
    Requests:
      cpu:      4
      memory:   16Gi
    Readiness:  exec [/etc/memsql/scripts/readiness-probe] delay=10s timeout=10s period=10s #success=1 #failure=3
    Startup:    exec [/etc/memsql/scripts/startup-probe] delay=3s timeout=300s period=3s #success=1 #failure=2147483647
    Environment:
      RELEASE_ID:
      ROOT_PASSWORD:     <set to the key 'ROOT_PASSWORD' in secret 'sdb-cluster'>  Optional: false
      PRE_START_SCRIPT:  /etc/memsql/scripts/update-config-script
      BASH_ENV:          /home/memsql/.memsqlbashenv
      MALLOC_ARENA_MAX:  4
    Mounts:
      /etc/memsql/extra from additional-files (rw)
      /etc/memsql/extra-secret from additional-secrets (rw)
      /etc/memsql/scripts from scripts (rw)
      /etc/memsql/scripts/credentials from credentials (rw)
      /etc/memsql/share from global-additional-files (rw)
      /var/lib/memsql from pv-storage (rw)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  pv-storage:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  pv-storage-node-sdb-cluster-aggregator-0
    ReadOnly:   false
  scripts:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      node-sdb-cluster-aggregator
    Optional:  false
  additional-files:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      sdb-cluster-additional-files
    Optional:  true
  additional-secrets:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  sdb-cluster-additional-secrets
    Optional:    true
  global-additional-files:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      global-additional-files
    Optional:  true
  credentials:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  sdb-cluster
    Optional:    true
QoS Class:       Guaranteed
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                       From     Message
  ----     ------     ----                      ----     -------
  Warning  Unhealthy  4m57s (x5391 over 4h34m)  kubelet  (combined from similar events): Startup probe failed: [2024-08-08 11:57:47 startup-probe] Metadata is not ready

Describe PV:

test@cloudlyte:/mnt/home/test/singlestore-operator-helm$ kubectl describe pv pvc-c2df9ee3-7e35-451b-8a8a-a785eed81675 -n singlestore
Name:            pvc-c2df9ee3-7e35-451b-8a8a-a785eed81675
Labels:          <none>
Annotations:     pv.kubernetes.io/provisioned-by: rook-ceph.rbd.csi.ceph.com
                 volume.kubernetes.io/provisioner-deletion-secret-name: rook-csi-rbd-provisioner
                 volume.kubernetes.io/provisioner-deletion-secret-namespace: rook-ceph
Finalizers:      [external-provisioner.volume.kubernetes.io/finalizer kubernetes.io/pv-protection]
StorageClass:    rook-ceph-block
Status:          Bound
Claim:           singlestore/pv-storage-node-sdb-cluster-master-0
Reclaim Policy:  Delete
Access Modes:    RWO
VolumeMode:      Filesystem
Capacity:        100Gi
Node Affinity:   <none>
Message:
Source:
    Type:              CSI (a Container Storage Interface (CSI) volume source)
    Driver:            rook-ceph.rbd.csi.ceph.com
    FSType:            ext4
    VolumeHandle:      0001-0009-rook-ceph-0000000000000002-ed334fc4-8413-484a-8230-eec0ca54fef3
    ReadOnly:          false
    VolumeAttributes:      clusterID=rook-ceph
                           imageFeatures=layering
                           imageFormat=2
                           imageName=csi-vol-ed334fc4-8413-484a-8230-eec0ca54fef3
                           journalPool=replicapool
                           pool=replicapool
                           storage.kubernetes.io/csiProvisionerIdentity=1720009986939-6828-rook-ceph.rbd.csi.ceph.com
Events:                <none>
test@cloudlyte:/mnt/home/test/singlestore-operator-helm$ kubectl describe pv pvc-df6bb25d-3ef0-42e9-8650-1c5c9d198abf -n singlestore
Name:            pvc-df6bb25d-3ef0-42e9-8650-1c5c9d198abf
Labels:          <none>
Annotations:     pv.kubernetes.io/provisioned-by: rook-ceph.rbd.csi.ceph.com
                 volume.kubernetes.io/provisioner-deletion-secret-name: rook-csi-rbd-provisioner
                 volume.kubernetes.io/provisioner-deletion-secret-namespace: rook-ceph
Finalizers:      [external-provisioner.volume.kubernetes.io/finalizer kubernetes.io/pv-protection]
StorageClass:    rook-ceph-block
Status:          Bound
Claim:           singlestore/pv-storage-node-sdb-cluster-leaf-ag1-0
Reclaim Policy:  Delete
Access Modes:    RWO
VolumeMode:      Filesystem
Capacity:        200Gi
Node Affinity:   <none>
Message:
Source:
    Type:              CSI (a Container Storage Interface (CSI) volume source)
    Driver:            rook-ceph.rbd.csi.ceph.com
    FSType:            ext4
    VolumeHandle:      0001-0009-rook-ceph-0000000000000002-772308b0-dfe0-4bcf-be4a-29c8e15b119c
    ReadOnly:          false
    VolumeAttributes:      clusterID=rook-ceph
                           imageFeatures=layering
                           imageFormat=2
                           imageName=csi-vol-772308b0-dfe0-4bcf-be4a-29c8e15b119c
                           journalPool=replicapool
                           pool=replicapool
                           storage.kubernetes.io/csiProvisionerIdentity=1720009986939-6828-rook-ceph.rbd.csi.ceph.com
Events:                <none>
test@cloudlyte:/mnt/home/test/singlestore-operator-helm$ kubectl describe pv pvc-7031ee4e-a820-434e-8c61-835e02e66909 -n singlestore
Name:            pvc-7031ee4e-a820-434e-8c61-835e02e66909
Labels:          <none>
Annotations:     pv.kubernetes.io/provisioned-by: rook-ceph.rbd.csi.ceph.com
                 volume.kubernetes.io/provisioner-deletion-secret-name: rook-csi-rbd-provisioner
                 volume.kubernetes.io/provisioner-deletion-secret-namespace: rook-ceph
Finalizers:      [external-provisioner.volume.kubernetes.io/finalizer kubernetes.io/pv-protection]
StorageClass:    rook-ceph-block
Status:          Bound
Claim:           singlestore/pv-storage-node-sdb-cluster-aggregator-0
Reclaim Policy:  Delete
Access Modes:    RWO
VolumeMode:      Filesystem
Capacity:        100Gi
Node Affinity:   <none>
Message:
Source:
    Type:              CSI (a Container Storage Interface (CSI) volume source)
    Driver:            rook-ceph.rbd.csi.ceph.com
    FSType:            ext4
    VolumeHandle:      0001-0009-rook-ceph-0000000000000002-c45169de-4e75-4755-b7bc-7bf217a76b51
    ReadOnly:          false
    VolumeAttributes:      clusterID=rook-ceph
                           imageFeatures=layering
                           imageFormat=2
                           imageName=csi-vol-c45169de-4e75-4755-b7bc-7bf217a76b51
                           journalPool=replicapool
                           pool=replicapool
                           storage.kubernetes.io/csiProvisionerIdentity=1720009986939-6828-rook-ceph.rbd.csi.ceph.com
Events:                <none>

Describe PVC:

test@cloudlyte:/mnt/home/test/singlestore-operator-helm$ kubectl describe pvc pv-storage-node-sdb-cluster-master-0 -n singlestore
Name:          pv-storage-node-sdb-cluster-master-0
Namespace:     singlestore
StorageClass:  rook-ceph-block
Status:        Bound
Volume:        pvc-c2df9ee3-7e35-451b-8a8a-a785eed81675
Labels:        app.kubernetes.io/component=master
               app.kubernetes.io/instance=sdb-cluster
               app.kubernetes.io/name=memsql-cluster
               memsql.com/role-tier=aggregator
               memsql.com/workspace=singlestore-central
Annotations:   pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
               volume.beta.kubernetes.io/storage-provisioner: rook-ceph.rbd.csi.ceph.com
               volume.kubernetes.io/storage-provisioner: rook-ceph.rbd.csi.ceph.com
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      100Gi
Access Modes:  RWO
VolumeMode:    Filesystem
Used By:       node-sdb-cluster-master-0
Events:        <none>
test@cloudlyte:/mnt/home/test/singlestore-operator-helm$ kubectl describe pvc pv-storage-node-sdb-cluster-leaf-ag1-0 -n singlestore
Name:          pv-storage-node-sdb-cluster-leaf-ag1-0
Namespace:     singlestore
StorageClass:  rook-ceph-block
Status:        Bound
Volume:        pvc-df6bb25d-3ef0-42e9-8650-1c5c9d198abf
Labels:        app.kubernetes.io/component=leaf
               app.kubernetes.io/instance=sdb-cluster
               app.kubernetes.io/name=memsql-cluster
               memsql.com/availability-group=1
               memsql.com/role-tier=leaf
               memsql.com/workspace=singlestore-central
Annotations:   pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
               volume.beta.kubernetes.io/storage-provisioner: rook-ceph.rbd.csi.ceph.com
               volume.kubernetes.io/storage-provisioner: rook-ceph.rbd.csi.ceph.com
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      200Gi
Access Modes:  RWO
VolumeMode:    Filesystem
Used By:       node-sdb-cluster-leaf-ag1-0
Events:        <none>
test@cloudlyte:/mnt/home/test/singlestore-operator-helm$ kubectl describe pvc pv-storage-node-sdb-cluster-aggregator-0 -n singlestore
Name:          pv-storage-node-sdb-cluster-aggregator-0
Namespace:     singlestore
StorageClass:  rook-ceph-block
Status:        Bound
Volume:        pvc-7031ee4e-a820-434e-8c61-835e02e66909
Labels:        app.kubernetes.io/component=aggregator
               app.kubernetes.io/instance=sdb-cluster
               app.kubernetes.io/name=memsql-cluster
               memsql.com/role-tier=aggregator
               memsql.com/workspace=singlestore-central
Annotations:   pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
               volume.beta.kubernetes.io/storage-provisioner: rook-ceph.rbd.csi.ceph.com
               volume.kubernetes.io/storage-provisioner: rook-ceph.rbd.csi.ceph.com
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      100Gi
Access Modes:  RWO
VolumeMode:    Filesystem
Used By:       node-sdb-cluster-aggregator-0
Events:        <none>

Have you tried changing the storage class to standard and running the same deployment?

@pgaddigopula
Sorry, but I don't have permission to change or create the storage class; there is only one SC.

test@cloudlyte:/mnt/home/test$ kubectl get sc
NAME                        PROVISIONER                  RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
rook-ceph-block (default)   rook-ceph.rbd.csi.ceph.com   Delete          Immediate              true                   36d

Here are the operator logs with the error I am getting:
kubectl logs sdb-operator-6c5f67f5b6-9rf5d -n singlestore

2024-08-09T05:54:42.099Z        INFO    cmd     operator/info.go:9      Go Version: go1.21.7
2024-08-09T05:54:42.099Z        INFO    cmd     operator/info.go:10     Go OS/Arch: linux/amd64
2024-08-09T05:54:42.099Z        INFO    cmd     operator/info.go:11     Operator Version: 3.258.0
2024-08-09T05:54:42.099Z        INFO    cmd     operator/info.go:12     Commit Hash: f5ba0d6a
2024-08-09T05:54:42.099Z        INFO    cmd     operator/info.go:13     Build Time: 2024-04-03T22:13:32Z
2024-08-09T05:54:42.099Z        INFO    cmd     operator/args.go:158    Options:
2024-08-09T05:54:42.099Z        INFO    cmd     operator/args.go:159    --cores-per-unit: 8.000000
2024-08-09T05:54:42.099Z        INFO    cmd     operator/args.go:160    --memory-per-unit: 32.000000
2024-08-09T05:54:42.099Z        INFO    cmd     operator/args.go:161    --overpack-factor: 0.000000
2024-08-09T05:54:42.099Z        INFO    cmd     operator/args.go:162    --extra-cidrs: []
2024-08-09T05:54:42.099Z        INFO    cmd     operator/args.go:163    --external-dns-domain-name: {false }
2024-08-09T05:54:42.099Z        INFO    cmd     operator/args.go:164    --external-dns-additional-domain-name: {false }
2024-08-09T05:54:42.099Z        INFO    cmd     operator/args.go:165    --external-dns-ttl: {false 0}
2024-08-09T05:54:42.099Z        INFO    cmd     operator/args.go:166    --ssl-secret-name:
2024-08-09T05:54:42.099Z        INFO    cmd     operator/args.go:167    --merge-service-annotations: true
2024-08-09T05:54:42.099Z        INFO    cmd     operator/args.go:214    --backup-internal-ssl: true
2024-08-09T05:54:42.099Z        INFO    cmd     operator/args.go:218    --cluster-id: sdb-cluster
2024-08-09T05:54:42.099Z        INFO    cmd     operator/args.go:222    --fs-group-id: 5555
2024-08-09T05:54:42.099Z        INFO    cmd     operator/args.go:242    --master-exporter-parameters: --no-cluster-collect.info_schema.tables --no-cluster-collect.info_schema.tablestats --no-collect.info_schema.tables --no-collect.info_schema.tablestats
2024-08-09T05:54:42.100Z        INFO    leader  leader/leader.go:89     Trying to become the leader.
2024-08-09T05:54:42.100Z        INFO    cmd     operator/util.go:56     Starting pprof server on port 6060.
2024-08-09T05:54:42.115Z        INFO    leader  leader/leader.go:129    No pre-existing lock was found.
2024-08-09T05:54:42.119Z        INFO    leader  leader/leader.go:149    Became the leader.
2024-08-09T05:54:42.141Z        INFO    cloud/awsclient.go:66   eks.amazonaws.com/role-arn annotation not found on operator service account    {"service account name": "sdb-operator"}
2024-08-09T05:54:42.141Z        INFO    cloud/awsclient.go:74   can not initialize AWS client
2024-08-09T05:54:42.141Z        INFO    operator/main.go:259    starting manager
2024-08-09T05:54:42.141Z        INFO    controller-runtime.metrics      server/server.go:185    Starting metrics server
2024-08-09T05:54:42.141Z        INFO    manager/server.go:50    starting server {"kind": "health probe", "addr": "[::]:8080"}
2024-08-09T05:54:42.141Z        INFO    controller-runtime.metrics      server/server.go:224    Serving metrics server  {"bindAddress": "0.0.0.0:9090", "secure": false}
2024-08-09T05:54:42.142Z        INFO    controller/controller.go:178    Starting EventSource    {"controller": "memsql", "source": "kind source: *v1alpha1.MemsqlCluster"}
2024-08-09T05:54:42.142Z        INFO    controller/controller.go:178    Starting EventSource    {"controller": "memsql", "source": "kind source: *v1.StatefulSet"}
2024-08-09T05:54:42.142Z        INFO    controller/controller.go:178    Starting EventSource    {"controller": "memsql", "source": "kind source: *v1.Service"}
2024-08-09T05:54:42.142Z        INFO    controller/controller.go:178    Starting EventSource    {"controller": "memsql", "source": "kind source: *v1.Secret"}
2024-08-09T05:54:42.142Z        INFO    controller/controller.go:186    Starting Controller     {"controller": "memsql"}
2024-08-09T05:54:42.248Z        INFO    controller/controller.go:220    Starting workers        {"controller": "memsql", "worker count": 1}
2024-08-09T05:54:49.769Z        INFO    controller/configmaps_secrets.go:59     reconciliation cause: memsqlcluster     {"name": "sdb-cluster", "namespace": "singlestore"}
2024-08-09T05:54:49.769Z        INFO    controller/controller.go:180    Reconciling MemSQL Cluster.     {"Request.Namespace": "singlestore", "Request.Name": "sdb-cluster"}
2024-08-09T05:54:49.870Z        INFO    controller/controller.go:499    Spec versioning config map doesn't exist. Creating a new one
2024-08-09T05:54:49.877Z        INFO    client/sync_cache_client.go:40  creating object {"type": "*v1.ConfigMap", "name": "sdb-cluster-spec-tracker"}
2024-08-09T05:54:49.985Z        INFO    controller/controller.go:275    Kubernetes version      {"value": "1.29.6+k3s1"}
2024-08-09T05:54:50.099Z        INFO    client/sync_cache_client.go:138 updating object {"type": "*v1.Deployment", "name": "sdb-operator"}
2024-08-09T05:54:50.207Z        INFO    client/sync_cache_client.go:40  creating object {"type": "*v1.Service", "name": "sdb-operator"}
2024-08-09T05:54:50.217Z        INFO    controller/configmaps_secrets.go:94     reconciliation cause: serviceName       {"namespace": "singlestore", "clusterName": "sdb-cluster", "name": "sdb-operator", "namespace": "singlestore"}
2024-08-09T05:54:50.318Z        INFO    client/sync_cache_client.go:40  creating object {"type": "*v1.Secret", "name": "sdb-cluster"}
2024-08-09T05:54:50.424Z        INFO    client/sync_cache_client.go:40  creating object {"type": "*v1.Service", "name": "svc-sdb-cluster"}
2024-08-09T05:54:50.431Z        INFO    controller/configmaps_secrets.go:94     reconciliation cause: serviceName       {"namespace": "singlestore", "clusterName": "sdb-cluster", "name": "svc-sdb-cluster", "namespace": "singlestore"}
2024-08-09T05:54:50.531Z        INFO    client/sync_cache_client.go:40  creating object {"type": "*v1.Service", "name": "svc-sdb-cluster-ddl"}
2024-08-09T05:54:50.545Z        INFO    controller/configmaps_secrets.go:94     reconciliation cause: serviceName       {"namespace": "singlestore", "clusterName": "sdb-cluster", "name": "svc-sdb-cluster-ddl", "namespace": "singlestore"}
2024-08-09T05:54:50.646Z        INFO    client/sync_cache_client.go:40  creating object {"type": "*v1.Service", "name": "svc-sdb-cluster-dml"}
2024-08-09T05:54:50.660Z        INFO    controller/configmaps_secrets.go:94     reconciliation cause: serviceName       {"namespace": "singlestore", "clusterName": "sdb-cluster", "name": "svc-sdb-cluster-dml", "namespace": "singlestore"}
2024-08-09T05:54:50.966Z        INFO    client/sync_cache_client.go:40  creating object {"type": "*v1.PodDisruptionBudget", "name": "agg-sdb-cluster"}
2024-08-09T05:54:51.074Z        INFO    client/sync_cache_client.go:40  creating object {"type": "*v1.ConfigMap", "name": "node-sdb-cluster-leaf-ag1"}
2024-08-09T05:54:51.181Z        INFO    client/sync_cache_client.go:40  creating object {"type": "*v1.ConfigMap", "name": "node-sdb-cluster-aggregator"}
2024-08-09T05:54:51.288Z        INFO    client/sync_cache_client.go:40  creating object {"type": "*v1.ConfigMap", "name": "node-sdb-cluster-master"}
2024-08-09T05:54:51.394Z        INFO    memsql/nodes.go:154     Creating a New STS      {"name": "node-sdb-cluster-master"}
2024-08-09T05:54:51.394Z        INFO    client/sync_cache_client.go:40  creating object {"type": "*v1.StatefulSet", "name": "node-sdb-cluster-master"}
2024-08-09T05:54:51.403Z        INFO    controller/configmaps_secrets.go:94     reconciliation cause: statefulsetName   {"namespace": "singlestore", "clusterName": "sdb-cluster", "name": "node-sdb-cluster-master", "namespace": "singlestore"}
2024-08-09T05:54:51.449Z        INFO    controller/configmaps_secrets.go:94     reconciliation cause: statefulsetName   {"namespace": "singlestore", "clusterName": "sdb-cluster", "name": "node-sdb-cluster-master", "namespace": "singlestore"}
2024-08-09T05:54:51.449Z        INFO    controller/configmaps_secrets.go:94     reconciliation cause: statefulsetName   {"namespace": "singlestore", "clusterName": "sdb-cluster", "name": "node-sdb-cluster-master", "namespace": "singlestore"}
2024-08-09T05:54:51.482Z        INFO    controller/configmaps_secrets.go:94     reconciliation cause: statefulsetName   {"namespace": "singlestore", "clusterName": "sdb-cluster", "name": "node-sdb-cluster-master", "namespace": "singlestore"}
2024-08-09T05:54:51.482Z        INFO    controller/configmaps_secrets.go:94     reconciliation cause: statefulsetName   {"namespace": "singlestore", "clusterName": "sdb-cluster", "name": "node-sdb-cluster-master", "namespace": "singlestore"}
2024-08-09T05:54:51.603Z        INFO    memsql/env.go:181       Transition phase (target phase: Pending, current phase: ) on missing phase value
2024-08-09T05:54:51.603Z        INFO    controller/controller.go:424    Updating operator version       {"previous version": "", "new version": "3.258.0"}
2024-08-09T05:54:51.603Z        INFO    controller/controller.go:431    Updating observed generation    {"previous value": 0, "new value": 1}
2024-08-09T05:54:51.716Z        INFO    controller/errors.go:82 RetryError: will retry after 3s: RetryError: will retry after 3s: found (1) pods for statefulset (node-sdb-cluster-master) that have not been scheduled
2024-08-09T05:54:51.717Z        INFO    controller/configmaps_secrets.go:55     skipping reconcile request because cluster spec has not changed
2024-08-09T05:54:51.717Z        INFO    controller/configmaps_secrets.go:55     skipping reconcile request because cluster spec has not changed
2024-08-09T05:54:51.717Z        INFO    controller/controller.go:180    Reconciling MemSQL Cluster.     {"Request.Namespace": "singlestore", "Request.Name": "sdb-cluster"}
2024-08-09T05:54:51.738Z        INFO    controller/errors.go:82 RetryError: will retry after 3s: RetryError: will retry after 3s: found (1) pods for statefulset (node-sdb-cluster-master) that have not been scheduled
2024-08-09T05:54:54.717Z        INFO    controller/controller.go:180    Reconciling MemSQL Cluster.     {"Request.Namespace": "singlestore", "Request.Name": "sdb-cluster"}
2024-08-09T05:54:54.728Z        INFO    memsql/clustering.go:2790       statefulset node-sdb-cluster-master is not stable       {"replicas": 1, "ready replicas": 0}
2024-08-09T05:54:54.728Z        INFO    memsql/retry.go:39      WARN: Error     {"Cause": "Wait for MA connection", "Retry after": "5s"}
2024-08-09T05:54:54.729Z        INFO    memsql/nodes.go:154     Creating a New STS      {"name": "node-sdb-cluster-aggregator"}
2024-08-09T05:54:54.729Z        INFO    client/sync_cache_client.go:40  creating object {"type": "*v1.StatefulSet", "name": "node-sdb-cluster-aggregator"}
2024-08-09T05:54:54.737Z        INFO    controller/configmaps_secrets.go:94     reconciliation cause: statefulsetName   {"namespace": "singlestore", "clusterName": "sdb-cluster", "name": "node-sdb-cluster-aggregator", "namespace": "singlestore"}
2024-08-09T05:54:54.769Z        INFO    controller/configmaps_secrets.go:94     reconciliation cause: statefulsetName   {"namespace": "singlestore", "clusterName": "sdb-cluster", "name": "node-sdb-cluster-aggregator", "namespace": "singlestore"}
2024-08-09T05:54:54.769Z        INFO    controller/configmaps_secrets.go:94     reconciliation cause: statefulsetName   {"namespace": "singlestore", "clusterName": "sdb-cluster", "name": "node-sdb-cluster-aggregator", "namespace": "singlestore"}
2024-08-09T05:54:54.795Z        INFO    controller/configmaps_secrets.go:94     reconciliation cause: statefulsetName   {"namespace": "singlestore", "clusterName": "sdb-cluster", "name": "node-sdb-cluster-aggregator", "namespace": "singlestore"}
2024-08-09T05:54:54.795Z        INFO    controller/configmaps_secrets.go:94     reconciliation cause: statefulsetName   {"namespace": "singlestore", "clusterName": "sdb-cluster", "name": "node-sdb-cluster-aggregator", "namespace": "singlestore"}
2024-08-09T05:54:54.837Z        INFO    memsql/nodes.go:154     Creating a New STS      {"name": "node-sdb-cluster-leaf-ag1"}
2024-08-09T05:54:54.837Z        INFO    client/sync_cache_client.go:40  creating object {"type": "*v1.StatefulSet", "name": "node-sdb-cluster-leaf-ag1"}
2024-08-09T05:54:54.844Z        INFO    controller/configmaps_secrets.go:94     reconciliation cause: statefulsetName   {"namespace": "singlestore", "clusterName": "sdb-cluster", "name": "node-sdb-cluster-leaf-ag1", "namespace": "singlestore"}
2024-08-09T05:54:54.876Z        INFO    controller/configmaps_secrets.go:94     reconciliation cause: statefulsetName   {"namespace": "singlestore", "clusterName": "sdb-cluster", "name": "node-sdb-cluster-leaf-ag1", "namespace": "singlestore"}
2024-08-09T05:54:54.876Z        INFO    controller/configmaps_secrets.go:94     reconciliation cause: statefulsetName   {"namespace": "singlestore", "clusterName": "sdb-cluster", "name": "node-sdb-cluster-leaf-ag1", "namespace": "singlestore"}
2024-08-09T05:54:54.902Z        INFO    controller/configmaps_secrets.go:94     reconciliation cause: statefulsetName   {"namespace": "singlestore", "clusterName": "sdb-cluster", "name": "node-sdb-cluster-leaf-ag1", "namespace": "singlestore"}
2024-08-09T05:54:54.902Z        INFO    controller/configmaps_secrets.go:94     reconciliation cause: statefulsetName   {"namespace": "singlestore", "clusterName": "sdb-cluster", "name": "node-sdb-cluster-leaf-ag1", "namespace": "singlestore"}
2024-08-09T05:54:54.944Z        INFO    memsql/clustering.go:2790       statefulset node-sdb-cluster-leaf-ag1 is not stable     {"replicas": 1, "ready replicas": 0}
2024-08-09T05:54:54.964Z        ERROR   controller/errors.go:113        Reconciler error        {"will retry after": "1s", "error": ": failed to get information about the storage class rook-ceph-block: failed to get storageclass \"rook-ceph-block\": storageclasses.storage.k8s.io \"rook-ceph-block\" is forbidden: User \"system:serviceaccount:singlestore:sdb-operator\" cannot get resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope"}

github.com/memsql/errors.Wrapf
        github.com/memsql/errors@v0.0.0-20231018164637-775e723e46ab/errors.go:207
singlestore.com/helios/kube/memsql.LiveObjects.GetStorageClass
        singlestore.com/helios/kube/memsql/live_objects.go:143
singlestore.com/helios/kube/memsql.(*Env).GetStatefulSetDelta
        singlestore.com/helios/kube/memsql/clustering.go:3032
singlestore.com/helios/kube/memsql.(*Env).StatefulSetDeltasFromConfig
        singlestore.com/helios/kube/memsql/clustering.go:187
singlestore.com/helios/kube/memsql.(*Env).NewClusterAction.NewUpdateStsDeltasAction.func25
        singlestore.com/helios/kube/memsql/clustering.go:675
singlestore.com/helios/kube/memsql.(*Env).NewClusterAction.ComposeActions.func45
        singlestore.com/helios/kube/memsql/action.go:23
singlestore.com/helios/kube/controller.(*Reconciler).Reconcile
        singlestore.com/helios/kube/controller/controller.go:336
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile
        sigs.k8s.io/controller-runtime@v0.17.0/pkg/internal/controller/controller.go:119
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler
        sigs.k8s.io/controller-runtime@v0.17.0/pkg/internal/controller/controller.go:316
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem
        sigs.k8s.io/controller-runtime@v0.17.0/pkg/internal/controller/controller.go:266
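
The Reconciler error above is an RBAC denial: the operator's service account (system:serviceaccount:singlestore:sdb-operator) is not allowed to read StorageClass objects at cluster scope. If you do intend to keep using the rook-ceph-block storage class, read access could be granted with something like the following sketch (the ClusterRole and ClusterRoleBinding names here are arbitrary; the subject is taken from the error message):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: storageclass-reader                   # arbitrary name
rules:
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: sdb-operator-storageclass-reader      # arbitrary name
subjects:
  - kind: ServiceAccount
    name: sdb-operator                        # from the error message
    namespace: singlestore                    # from the error message
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: storageclass-reader

After applying it, you can verify access with: kubectl auth can-i get storageclasses --as=system:serviceaccount:singlestore:sdb-operator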

Alternatively, if it's a VM (or VMs), we can create a directory, something like this:

mkdir /mnt/data

Inside the data directory, create pv1, pv2, and pv3.
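
For example, assuming the same /mnt/data layout referenced in the PV spec below:

mkdir -p /mnt/data/pv1 /mnt/data/pv2 /mnt/data/pv3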

Create a storage class as mentioned above (a minimal sketch follows), then create the PVs, one for each node type (MA, CA, and leaf), with the notation shown below.
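
For reference, a local (no-provisioner) StorageClass might look like the sketch below; the name "standard" is only assumed here so that it matches the storageClassName used in the PV spec, adjust it to whatever your cluster spec references:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer

Each PV can then be defined as follows: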

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv1
spec:
  capacity:
    storage: 50Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: standard
  local:
    path: /mnt/data/pv1
  nodeAffinity:
    required:
      nodeSelectorTerms: