Deployment error on k8s

Hello Team,

I'm hitting an error when deploying on k8s and could use some help.
I followed the guide in the docs, but after the last step (kubectl create -f memsql-cluster.yaml) the node pods fail to start.

I tried the following to work around it, but none of them helped:

  • changing the Kubernetes version (note: k8s v1.23.x (latest) rejects "memsql-cluster-crd.yaml" outright, since it still uses the apiextensions.k8s.io/v1beta1 API that was removed in v1.22; see the sketch right after this list)
  • upgrading the kernel
  • changing resources
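
For the k8s v1.23 attempt above: the CRD file still uses apiextensions.k8s.io/v1beta1 (see the deprecation warning in the outputs below), and that API is gone in k8s 1.22+, so 1.23 refuses the file. A minimal sketch of what a v1 conversion might look like; the group/kind/version are taken from the outputs below, but the permissive schema is my own placeholder, not the real MemSQL schema:

    apiVersion: apiextensions.k8s.io/v1   # v1beta1 CRDs were removed in k8s 1.22+
    kind: CustomResourceDefinition
    metadata:
      name: memsqlclusters.memsql.com
    spec:
      group: memsql.com
      names:
        kind: MemsqlCluster
        plural: memsqlclusters
        singular: memsqlcluster
      scope: Namespaced
      versions:
        - name: v1alpha1
          served: true
          storage: true
          schema:
            openAPIV3Schema:              # v1 requires a schema per version
              type: object
              x-kubernetes-preserve-unknown-fields: true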

It looks like a cgroup-related issue that occurs when the nodes are deployed via memsql-operator.

For reference, the system and software information and the full command output logs are pasted at the end (apologies for the length).

== To summarize ==

  • 40 vCPU, 192 GB RAM, 170 GB free disk
  • CentOS 7.9 / kernel 3.10
  • S/W information:
    minikube 1.25.1
    kubectl 1.20.15
    docker 20.10.11
    docker images: memsql/operator:1.2.5-centos-83e8133a, memsql/node:centos-7.6.9-7d7e13942a

== Main Error log ==
See the output of the last two "kubectl describe …" commands below:

    Last State: Terminated
      Reason: ContainerCannotRun
      Message: OCI runtime create failed: container_linux.go:380: starting container process caused: process_linux.go:545: container init caused: process_linux.go:508: setting cgroup config for procHooks process caused: failed to write "400000": write /sys/fs/cgroup/cpu,cpuacct/kubepods/burstable/pod2209b9ac-00b2-4e1d-a816-c258101d24c2/856bd2b8af6b3d548b8e8cca5ccbbf6d2f8315cc361d0a5d223c1c3929ea4c2c/cpu.cfs_quota_us: invalid argument: unknown
      Exit Code: 128
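
For what it's worth, the "400000" being written is a CFS quota in microseconds: with the standard 100000 µs period it corresponds to the cpu: 4 limit on the node container, and the kernel returns EINVAL when a child cgroup's quota would exceed its parent's. A quick check of the parent kubepods quota (a sketch, assuming the cgroup v1 paths shown in the error; run inside the minikube node, e.g. via minikube ssh -p aged):

    $ cat /sys/fs/cgroup/cpu,cpuacct/kubepods/cpu.cfs_period_us   # typically 100000
    $ cat /sys/fs/cgroup/cpu,cpuacct/kubepods/cpu.cfs_quota_us    # roughly 200000 if the node only has 2 CPUs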

== Full outputs ==

##################################################
$ cat /etc/centos-release
##################################################
CentOS Linux release 7.9.2009 (Core)


##################################################
$ uname -msr
##################################################
Linux 3.10.0-1160.25.1.el7.x86_64 x86_64


##################################################
$ lscpu
##################################################
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                40
On-line CPU(s) list:   0-39
Thread(s) per core:    2
Core(s) per socket:    10
Socket(s):             2
NUMA node(s):          2
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 85
Model name:            Intel(R) Xeon(R) Silver 4114 CPU @ 2.20GHz
Stepping:              4
CPU MHz:               1588.500
CPU max MHz:           3000.0000
CPU min MHz:           800.0000
BogoMIPS:              4400.00
Virtualization:        VT-x
L1d cache:             32K
L1i cache:             32K
L2 cache:              1024K
L3 cache:              14080K
NUMA node0 CPU(s):     0-9,20-29
NUMA node1 CPU(s):     10-19,30-39
Flags:                 fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch epb cat_l3 cdp_l3 invpcid_single intel_ppin intel_pt ssbd mba ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts pku ospke md_clear spec_ctrl intel_stibp flush_l1d


##################################################
$ free
##################################################
              total        used        free      shared  buff/cache   available
Mem:      196523700    21616000    55516848     8106416   119390852   158594636
Swap:      16777212       10452    16766760


##################################################
$ df -h
##################################################
Filesystem               Size  Used Avail Use% Mounted on
devtmpfs                  94G     0   94G   0% /dev
tmpfs                     94G  3.7G   91G   4% /dev/shm
tmpfs                     94G  4.0G   90G   5% /run
tmpfs                     94G     0   94G   0% /sys/fs/cgroup
/dev/mapper/centos-root  869G  692G  177G  80% /


##################################################
$ docker version
##################################################
Client: Docker Engine - Community
 Version:           20.10.11
 API version:       1.41
 Go version:        go1.16.9
 Git commit:        dea9396
 Built:             Thu Nov 18 00:38:53 2021
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          20.10.11
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.16.9
  Git commit:       847da18
  Built:            Thu Nov 18 00:37:17 2021
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.4.12
  GitCommit:        7b11cfaabd73bb80907dd23182b9347b4245eb5d
 nvidia:
  Version:          1.0.2
  GitCommit:        v1.0.2-0-g52b36a2
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0


##################################################
$ docker info
##################################################
Client:
 Context:    default
 Debug Mode: false
 Plugins:
  app: Docker App (Docker Inc., v0.9.1-beta3)
  buildx: Build with BuildKit (Docker Inc., v0.6.3-docker)
  scan: Docker Scan (Docker Inc., v0.9.0)

Server:
 Containers: 4
  Running: 1
  Paused: 0
  Stopped: 3
 Images: 44
 Server Version: 20.10.11
 Storage Driver: overlay2
  Backing Filesystem: xfs
  Supports d_type: true
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Cgroup Version: 1
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runc.v2 io.containerd.runtime.v1.linux nvidia runc
 Default Runtime: nvidia
 Init Binary: docker-init
 containerd version: 7b11cfaabd73bb80907dd23182b9347b4245eb5d
 runc version: v1.0.2-0-g52b36a2
 init version: de40ad0
 Security Options:
  seccomp
   Profile: default
 Kernel Version: 3.10.0-1160.25.1.el7.x86_64
 Operating System: CentOS Linux 7 (Core)
 OSType: linux
 Architecture: x86_64
 CPUs: 40
 Total Memory: 187.4GiB
 Name: gpuserver
 ID: UZU7:D7WJ:NVLC:5YT4:XCTM:QUEQ:7GGT:4D7I:4Z2K:C5NW:2Y4Q:Y3ET
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false


##################################################
$ docker images
##################################################
REPOSITORY                           TAG                       IMAGE ID       CREATED         SIZE
memsql/node                          centos-7.6.9-7d7e13942a   b7cddcba4f12   9 days ago      709MB
memsql/node                          latest                    b7cddcba4f12   9 days ago      709MB
memsql/tools                         latest                    fddef267b82f   13 days ago     813MB
gcr.io/k8s-minikube/kicbase          v0.0.29                   64d09634c60d   8 weeks ago     1.14GB
memsql/cluster-in-a-box              latest                    670f360e32fb   4 months ago    866MB
zookeeper                            3.5                       087a8c669e6a   5 months ago    270MB
memsql/operator                      1.2.5-centos-83e8133a     4009f39515a2   9 months ago    296MB
k8s.gcr.io/kube-proxy                v1.18.0                   43940c34f24f   23 months ago   117MB
k8s.gcr.io/kube-apiserver            v1.18.0                   74060cea7f70   23 months ago   173MB
k8s.gcr.io/kube-scheduler            v1.18.0                   a31f78c7c8ce   23 months ago   95.3MB
k8s.gcr.io/kube-controller-manager   v1.18.0                   d3e55153f52f   23 months ago   162MB
k8s.gcr.io/pause                     3.2                       80d28bedfe5d   2 years ago     683kB
k8s.gcr.io/coredns                   1.6.7                     67da37a9a360   2 years ago     43.8MB
k8s.gcr.io/etcd                      3.4.3-0                   303ce5db0e90   2 years ago     288MB


##################################################
$ kubectl version
##################################################
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.15", GitCommit:"8f1e5bf0b9729a899b8df86249b56e2c74aebc55", GitTreeState:"clean", BuildDate:"2022-01-19T17:27:39Z", GoVersion:"go1.15.15", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.15", GitCommit:"8f1e5bf0b9729a899b8df86249b56e2c74aebc55", GitTreeState:"clean", BuildDate:"2022-01-19T17:23:01Z", GoVersion:"go1.15.15", Compiler:"gc", Platform:"linux/amd64"}


##################################################
$ minikube version
##################################################
minikube version: v1.25.1
commit: 3e64b11ed75e56e4898ea85f96b2e4af0301f43d


##################################################
$ minikube start -p aged --kubernetes-version=v1.20.15
##################################################
* [aged] minikube v1.25.1 on Centos 7.9.2009
* Kubernetes 1.23.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.23.1
* Using the docker driver based on existing profile
* Starting control plane node aged in cluster aged
* Pulling base image ...
* Updating the running docker "aged" container ...
^C
[13:45:28 madamgold@gpuserver ~/k8s]$ ^C
[13:45:28 madamgold@gpuserver ~/k8s]$ minikube stop --all
* Stopping node "aged"  ...
* Powering off "aged" via SSH ...
* 1 node stopped.
[13:45:36 madamgold@gpuserver ~/k8s]$ minikube start -p aged --kubernetes-version=v1.20.15
* [aged] minikube v1.25.1 on Centos 7.9.2009
* Kubernetes 1.23.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.23.1
* Using the docker driver based on existing profile
* Starting control plane node aged in cluster aged
* Pulling base image ...
* Restarting existing docker container for "aged" ...
* Preparing Kubernetes v1.20.15 on Docker 20.10.12 ...
  - kubelet.housekeeping-interval=5m
  - Generating certificates and keys ...
  - Booting up control plane ...
  - Configuring RBAC rules ...
* Verifying Kubernetes components...
  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: default-storageclass, storage-provisioner
* Done! kubectl is now configured to use "aged" cluster and "default" namespace by default


##################################################
$ kubectl create -f rbac.yaml
##################################################
serviceaccount/memsql-operator created
role.rbac.authorization.k8s.io/memsql-operator created
rolebinding.rbac.authorization.k8s.io/memsql-operator created


##################################################
$ kubectl create -f memsql-cluster-crd.yaml
##################################################
Warning: apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
customresourcedefinition.apiextensions.k8s.io/memsqlclusters.memsql.com created


##################################################
$ cat deployment.yaml
##################################################
apiVersion: apps/v1
kind: Deployment
metadata:
  name: memsql-operator
  labels:
    app.kubernetes.io/component: operator
spec:
  replicas: 1
  selector:
    matchLabels:
      name: memsql-operator
  template:
    metadata:
      labels:
        name: memsql-operator
    spec:
      serviceAccountName: memsql-operator
      containers:
        - name: memsql-operator
          image: "memsql/operator:1.2.5-centos-83e8133a"
          imagePullPolicy: Always
          args: [
            # Cause the operator to merge rather than replace annotations on services
            "--merge-service-annotations",
            # Allow the process inside the container to have read/write access to the `/var/lib/memsql` volume.
            "--fs-group-id", "5555"
          ]
          env:
            - name: WATCH_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: OPERATOR_NAME
              value: "memsql-operator"


##################################################
$ kubectl create -f deployment.yaml
##################################################
deployment.apps/memsql-operator created


##################################################
$ cat memsql-cluster.yaml
##################################################
apiVersion: memsql.com/v1alpha1
kind: MemsqlCluster
metadata:
  name: memsql-cluster
spec:
  license: BGQ1N2RhOWZmZjNkOTRjOTI5N2I0ZjJhNDljMTEyZWM1rJ23XgAAAAAAAAAAAAAAAAkwNAIYeRDIOgKxOXrq6/gKDpuVqlfM+v7ZTUAeAhgaeSLDCf8dloik8xmlg+KlnqVTL5GnNioAAA==
  adminHashedPassword: "*C5F8E7498F063FF46B3F2044677FF8F560F4081F"
  nodeImage:
    repository: memsql/node
    tag: centos-7.6.9-7d7e13942a

  redundancyLevel: 1

  serviceSpec:
    objectMetaOverrides:
      labels:
        custom: label
      annotations:
        custom: annotations

  aggregatorSpec:
    count: 1
    height: 0.5
    storageGB: 100
    storageClass: standard

    objectMetaOverrides:
      annotations:
        optional: annotation
      labels:
        optional: label

  leafSpec:
    count: 1
    height: 0.5
    storageGB: 100
    storageClass: standard

    objectMetaOverrides:
      annotations:
        optional: annotation
      labels:
        optional: label
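
(For context on where the pods' CPU limit comes from; this is my reading of the SingleStore docs, so treat the exact numbers as an assumption: height 1 corresponds to 8 vCPU / 32 GB per node, so the height: 0.5 above is what yields the cpu: 4 / memory: 16Gi limits in the describe output below, and in turn the 400000 µs quota from the error.)

    leafSpec:
      height: 0.5   # assumption: height 1 = 8 vCPU / 32 GB, so 0.5 -> 4 vCPU / 16 GiB limits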


##################################################
$ kubectl create -f memsql-cluster.yaml
##################################################
memsqlcluster.memsql.com/memsql-cluster created


##################################################
$ kubectl get all
##################################################
NAME                                  READY   STATUS             RESTARTS   AGE
pod/memsql-operator-b5c646d84-dblrm   1/1     Running            0          6m28s
pod/node-memsql-cluster-leaf-ag1-0    1/2     CrashLoopBackOff   5          6m13s
pod/node-memsql-cluster-master-0      1/2     CrashLoopBackOff   5          6m13s

NAME                         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
service/kubernetes           ClusterIP   10.96.0.1    <none>        443/TCP    10m
service/svc-memsql-cluster   ClusterIP   None         <none>        3306/TCP   6m13s

NAME                              READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/memsql-operator   1/1     1            1           6m28s

NAME                                        DESIRED   CURRENT   READY   AGE
replicaset.apps/memsql-operator-b5c646d84   1         1         1       6m28s

NAME                                            READY   AGE
statefulset.apps/node-memsql-cluster-leaf-ag1   0/1     6m13s
statefulset.apps/node-memsql-cluster-master     0/1     6m13s


##################################################
$ kc get pods    ("kc" is my alias for kubectl)
##################################################
NAME                              READY   STATUS              RESTARTS   AGE
memsql-operator-b5c646d84-dblrm   1/1     Running             0          55s
node-memsql-cluster-leaf-ag1-0    1/2     RunContainerError   0          40s
node-memsql-cluster-master-0      1/2     RunContainerError   0          40s


##################################################
$ kubectl logs -f deployment.apps/memsql-operator
##################################################
2022/02/16 06:59:07 controller.go:183   {controller.memsql}     Reconciling MemSQL Cluster.     Request.Namespace: "default"  Request.Name: "memsql-cluster"
2022/02/16 06:59:07 nodes.go:107        {memsql}        Creating a New STS      name: "node-memsql-cluster-master"
2022/02/16 06:59:08 nodes.go:107        {memsql}        Creating a New STS      name: "node-memsql-cluster-leaf-ag1"
2022/02/16 06:59:08 connection.go:38    {memsql}        Connect to the Master Aggregator
2022/02/16 06:59:08 connection.go:41    {memsql}        Failed to connect to the Master Aggregator, will retry  error: "Pod "node-memsql-cluster-master-0" not found"
2022/02/16 06:59:08 errors.go:75        {controller.memsql}     Reconciler error        will retry after: "5s"
2022/02/16 06:59:08 controller.go:183   {controller.memsql}     Reconciling MemSQL Cluster.     Request.Namespace: "default"  Request.Name: "memsql-cluster"
2022/02/16 06:59:08 util.go:340 {memsql}        stageUpdates for service, service Annotations are different.    Staged: "map[]"  Live: "map[]"
2022/02/16 06:59:08 clustering.go:1064  {memsql}        info.Stable: false      info.Config.Name: "node-memsql-cluster-master"  live.Status.Replicas: "1"  live.Status.ReadyReplicas: "0"  live.Status.UpdatedReplicas: "1"  info.NumLiveNodes: "1"  live.Status.UpdateRevision: "node-memsql-cluster-master-b4b87c8c5"  live.ObjectMeta.Labels[LabelKeyDirtySpecRevision]: ""
2022/02/16 06:59:08 clustering.go:1064  {memsql}        info.Stable: false      info.Config.Name: "node-memsql-cluster-leaf-ag1"  live.Status.Replicas: "0"  live.Status.ReadyReplicas: "0"  live.Status.UpdatedReplicas: "0"  info.NumLiveNodes: "1"  live.Status.UpdateRevision: ""  live.ObjectMeta.Labels[LabelKeyDirtySpecRevision]: ""
2022/02/16 06:59:08 connection.go:38    {memsql}        Connect to the Master Aggregator
2022/02/16 06:59:08 connection.go:41    {memsql}        Failed to connect to the Master Aggregator, will retry  error: "dial tcp :3306: connect: connection refused"
2022/02/16 06:59:08 errors.go:75        {controller.memsql}     Reconciler error        will retry after: "5s"
2022/02/16 06:59:08 controller.go:183   {controller.memsql}     Reconciling MemSQL Cluster.     Request.Namespace: "default"  Request.Name: "memsql-cluster"
2022/02/16 06:59:08 util.go:340 {memsql}        stageUpdates for service, service Annotations are different.    Staged: "map[]"  Live: "map[]"
2022/02/16 06:59:08 clustering.go:1064  {memsql}        info.Stable: false      info.Config.Name: "node-memsql-cluster-master"  live.Status.Replicas: "1"  live.Status.ReadyReplicas: "0"  live.Status.UpdatedReplicas: "1"  info.NumLiveNodes: "1"  live.Status.UpdateRevision: "node-memsql-cluster-master-b4b87c8c5"  live.ObjectMeta.Labels[LabelKeyDirtySpecRevision]: ""
2022/02/16 06:59:08 clustering.go:1064  {memsql}        info.Stable: false      info.Config.Name: "node-memsql-cluster-leaf-ag1"  live.Status.Replicas: "0"  live.Status.ReadyReplicas: "0"  live.Status.UpdatedReplicas: "0"  info.NumLiveNodes: "1"  live.Status.UpdateRevision: ""  live.ObjectMeta.Labels[LabelKeyDirtySpecRevision]: ""
2022/02/16 06:59:08 connection.go:38    {memsql}        Connect to the Master Aggregator
2022/02/16 06:59:08 connection.go:41    {memsql}        Failed to connect to the Master Aggregator, will retry  error: "dial tcp :3306: connect: connection refused"
2022/02/16 06:59:08 errors.go:75        {controller.memsql}     Reconciler error        will retry after: "5s"


##################################################
$ kubectl describe pod/node-memsql-cluster-master-0
##################################################
Name:         node-memsql-cluster-master-0
Namespace:    default
Priority:     0
Node:         aged/192.168.49.2
Start Time:   Wed, 16 Feb 2022 15:41:31 +0900
Labels:       app.kubernetes.io/component=master
              app.kubernetes.io/instance=memsql-cluster
              app.kubernetes.io/name=memsql-cluster
              controller-revision-hash=node-memsql-cluster-master-dfc5998bf
              memsql.com/role-tier=aggregator
              optional=label
              statefulset.kubernetes.io/pod-name=node-memsql-cluster-master-0
Annotations:  hash.configmap.memsql.com/node-memsql-cluster-master: 00d38ae9fb190a5e1418ab7373a591a8
              hash.secret.memsql.com/memsql-cluster: 76c2a2721784b11c8e248ebd1335600e
              optional: annotation
              prometheus.io/port: 9104
              prometheus.io/scrape: true
Status:       Running
IP:           172.17.0.4
IPs:
  IP:           172.17.0.4
Controlled By:  StatefulSet/node-memsql-cluster-master
Containers:
  node:
    Container ID:   docker://856bd2b8af6b3d548b8e8cca5ccbbf6d2f8315cc361d0a5d223c1c3929ea4c2c
    Image:          memsql/node:centos-7.6.9-7d7e13942a
    Image ID:       docker-pullable://memsql/node@sha256:3f17312114d12829f1f798782b0f99b9fbdc7853f6e7353929b14faf096cbf82
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       RunContainerError
    Last State:     Terminated
      Reason:       ContainerCannotRun
      Message:      OCI runtime create failed: container_linux.go:380: starting container process caused: process_linux.go:545: container init caused: process_linux.go:508: setting cgroup config for procHooks process caused: failed to write "400000": write /sys/fs/cgroup/cpu,cpuacct/kubepods/burstable/pod2209b9ac-00b2-4e1d-a816-c258101d24c2/856bd2b8af6b3d548b8e8cca5ccbbf6d2f8315cc361d0a5d223c1c3929ea4c2c/cpu.cfs_quota_us: invalid argument: unknown
      Exit Code:    128
      Started:      Wed, 16 Feb 2022 15:41:50 +0900
      Finished:     Wed, 16 Feb 2022 15:41:50 +0900
    Ready:          False
    Restart Count:  0
    Limits:
      cpu:     4
      memory:  16Gi
    Requests:
      cpu:      4
      memory:   16Gi
    Liveness:   exec [/etc/memsql/scripts/liveness-probe] delay=10s timeout=1s period=10s #success=1 #failure=3
    Readiness:  exec [/etc/memsql/scripts/readiness-probe] delay=10s timeout=1s period=10s #success=1 #failure=3
    Environment:
      RELEASE_ID:
      ROOT_PASSWORD:     <set to the key 'ROOT_PASSWORD' in secret 'memsql-cluster'>  Optional: false
      PRE_START_SCRIPT:  /etc/memsql/scripts/update-config-script
      MAXIMUM_MEMORY:    13107
      MALLOC_ARENA_MAX:  4
    Mounts:
      /etc/memsql/extra from additional-files (rw)
      /etc/memsql/extra-secret from additional-secrets (rw)
      /etc/memsql/scripts from scripts (rw)
      /etc/memsql/share from global-additional-files (rw)
      /var/lib/memsql from pv-storage (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-wtl5n (ro)
  exporter:
    Container ID:  docker://d1821a3daee882ef18765d1dc15374cfdedaaea068d18db3a93a756c793fbe46
    Image:         memsql/node:centos-7.6.9-7d7e13942a
    Image ID:      docker-pullable://memsql/node@sha256:3f17312114d12829f1f798782b0f99b9fbdc7853f6e7353929b14faf096cbf82
    Port:          9104/TCP
    Host Port:     0/TCP
    Command:
      /etc/memsql/scripts/exporter-startup-script
    State:          Running
      Started:      Wed, 16 Feb 2022 15:42:01 +0900
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     100m
      memory:  100Mi
    Requests:
      cpu:     100m
      memory:  90Mi
    Environment:
      RELEASE_ID:
      DATA_SOURCE_NAME:  <set to the key 'DATA_SOURCE_NAME' in secret 'memsql-cluster'>  Optional: false
    Mounts:
      /etc/memsql/extra from additional-files (rw)
      /etc/memsql/extra-secret from additional-secrets (rw)
      /etc/memsql/scripts from scripts (rw)
      /etc/memsql/share from global-additional-files (rw)
      /var/lib/memsql from pv-storage (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-wtl5n (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  pv-storage:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  pv-storage-node-memsql-cluster-master-0
    ReadOnly:   false
  scripts:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      node-memsql-cluster-master
    Optional:  false
  additional-files:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      memsql-cluster-additional-files
    Optional:  true
  additional-secrets:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  memsql-cluster-additional-secrets
    Optional:    true
  global-additional-files:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      global-additional-files
    Optional:  true
  default-token-wtl5n:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-wtl5n
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age                From               Message
  ----     ------            ----               ----               -------
  Warning  FailedScheduling  48s (x2 over 48s)  default-scheduler  0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims.
  Normal   Scheduled         46s                default-scheduler  Successfully assigned default/node-memsql-cluster-master-0 to aged
  Normal   Pulling           45s                kubelet            Pulling image "memsql/node:centos-7.6.9-7d7e13942a"
  Normal   Pulled            27s                kubelet            Successfully pulled image "memsql/node:centos-7.6.9-7d7e13942a" in 17.993174837s
  Normal   Created           23s                kubelet            Created container node
  Warning  Failed            21s                kubelet            Error: failed to start container "node": Error response from daemon: OCI runtime create failed: container_linux.go:380: starting container process caused: process_linux.go:545: container init caused: process_linux.go:508: setting cgroup config for procHooks process caused: failed to write "400000": write /sys/fs/cgroup/cpu,cpuacct/kubepods/burstable/pod2209b9ac-00b2-4e1d-a816-c258101d24c2/node/cpu.cfs_quota_us: invalid argument: unknown
  Normal   Pulled            21s                kubelet            Container image "memsql/node:centos-7.6.9-7d7e13942a" already present on machine
  Normal   Created           17s                kubelet            Created container exporter
  Normal   Started           15s                kubelet            Started container exporter
  Normal   Pulled            13s                kubelet            Container image "memsql/node:centos-7.6.9-7d7e13942a" already present on machine


##################################################
$ kubectl describe pod/node-memsql-cluster-leaf-ag1-0
##################################################
Name:         node-memsql-cluster-leaf-ag1-0
Namespace:    default

(snip; the rest matches the master pod output above)

Events:
  Type     Reason            Age                From               Message
  ----     ------            ----               ----               -------
  Warning  FailedScheduling  58s (x2 over 58s)  default-scheduler  0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims.
  Normal   Scheduled         56s                default-scheduler  Successfully assigned default/node-memsql-cluster-leaf-ag1-0 to aged
  Normal   Pulling           54s                kubelet            Pulling image "memsql/node:centos-7.6.9-7d7e13942a"
  Normal   Pulled            34s                kubelet            Successfully pulled image "memsql/node:centos-7.6.9-7d7e13942a" in 20.507794839s
  Normal   Pulled            29s                kubelet            Container image "memsql/node:centos-7.6.9-7d7e13942a" already present on machine
  Normal   Created           26s                kubelet            Created container exporter
  Normal   Started           24s                kubelet            Started container exporter
  Normal   Created           9s (x2 over 31s)   kubelet            Created container node
  Warning  Failed            6s (x2 over 29s)   kubelet            Error: failed to start container "node": Error response from daemon: OCI runtime create failed: container_linux.go:380: starting container process caused: process_linux.go:545: container init caused: process_linux.go:508: setting cgroup config for procHooks process caused: failed to write "400000": write /sys/fs/cgroup/cpu,cpuacct/kubepods/burstable/podd6351961-b95a-4503-83ac-b97c988fae88/node/cpu.cfs_quota_us: invalid argument: unknown
  Normal   Pulled            4s (x2 over 23s)   kubelet            Container image "memsql/node:centos-7.6.9-7d7e13942a" already present on machine

Best regards.

Hello,

What version of cgroups are you using on the system? Is it cgroup v2?

Marek

Hi,

Thanks for your reply!!

$ docker info | grep -i cgroup
Cgroup Driver: cgroupfs
Cgroup Version: 1
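
(Another way to check the cgroup version, independent of Docker:)

    $ stat -fc %T /sys/fs/cgroup/   # prints "cgroup2fs" on a v2 host, "tmpfs" on v1
    tmpfs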

Hi,

This happened because minikube allocates only 2 CPUs by default (2 is also its minimum), so the node container's cpu: 4 limit could not fit under the CPU quota of the minikube node.

I solved it by raising minikube's CPU allocation with the --cpus flag:

$ minikube start --cpus 16 -p aged --kubernetes-version=v1.20.15
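
To confirm the node now advertises enough allocatable CPU for the cpu: 4 limit (node name taken from the describe output above):

    $ kubectl get node aged -o jsonpath='{.status.allocatable.cpu}'   # should now report 16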

I think it would be good for the docs guide to call out this requirement.

Thanks for your help.
