Kubernetes master doesn't attach FlexVolume

Asked April 2019 · Viewed 79 times
I'm trying to attach the dummy-attachable FlexVolume sample driver for Kubernetes, which seems to initialize normally according to the logs on both the nodes and the master:

Loaded volume plugin "flexvolume-k8s/dummy-attachable"
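For reference, FlexVolume only routes attach/detach through a driver if the driver's `init` response advertises the attach capability. Here's a rough sketch of the JSON handshake the dummy-attachable sample implements (hypothetical bash, not my actual driver, but the same call convention):

```shell
#!/bin/bash
# Sketch of a FlexVolume driver's call convention (assumption: this
# mirrors the upstream dummy-attachable sample, which replies with
# JSON on stdout for each operation).
driver() {
  case "$1" in
    init)
      # capabilities.attach=true is what tells the controller-manager
      # to call attach/waitforattach/isattached for this driver.
      echo '{"status": "Success", "capabilities": {"attach": true}}'
      ;;
    attach)
      # attach <json options> <node name> -> returns a device path
      echo '{"status": "Success", "device": "/dev/sdx"}'
      ;;
    isattached)
      # isattached <json options> <node name>
      echo '{"status": "Success", "attached": true}'
      ;;
    *)
      echo '{"status": "Not supported"}'
      ;;
  esac
}

driver init
```

My driver's `init` returns the `capabilities` block with `attach: true`, so as far as I can tell attach support is being advertised correctly.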

But when I try to attach the volume to a pod, the attach method never gets called on the master. The logs from the node read:

flexVolume driver k8s/dummy-attachable: using default GetVolumeName for volume dummy-attachable
operationExecutor.VerifyControllerAttachedVolume started for volume "dummy-attachable"
Operation for "\"flexvolume-k8s/dummy-attachable/dummy-attachable\"" failed. No retries permitted until 2019-04-22 13:42:51.21390334 +0000 UTC m=+4814.674525788 (durationBeforeRetry 500ms). Error: "Volume has not been added to the list of VolumesInUse in the node's volume status for volume \"dummy-attachable\" (UniqueName: \"flexvolume-k8s/dummy-attachable/dummy-attachable\") pod \"nginx-dummy-attachable\"
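As I understand it, that error means the kubelet is waiting for the attach-detach controller (which runs inside kube-controller-manager on the master) to publish the volume under the node's `status.volumesInUse`, and that never happens. Here's the sanity check I ran (node name redacted; the "healthy" value below is a made-up sample, not real output):

```shell
# The unique volume name the kubelet is waiting for, as seen in the error:
unique="flexvolume-k8s/dummy-attachable/dummy-attachable"

# I checked whether the attach-detach controller ever published it with:
#   kubectl get node [node id] -o jsonpath='{.status.volumesInUse}'
# In my case the output was empty. A healthy node status would include
# the unique name, roughly like this made-up sample:
volumes_in_use='["flexvolume-k8s/dummy-attachable/dummy-attachable"]'

case "$volumes_in_use" in
  *"$unique"*) echo "attach recorded" ;;
  *)           echo "attach never recorded" ;;
esac
```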

Here's how I'm attempting to mount the volume:

apiVersion: v1
kind: Pod
metadata:
  name: nginx-dummy-attachable
  namespace: default
spec:
  containers:
    - name: nginx-dummy-attachable
      image: nginx
      volumeMounts:
        - name: dummy-attachable
          mountPath: /data
      ports:
        - containerPort: 80
  volumes:
    - name: dummy-attachable
      flexVolume:
        driver: "k8s/dummy-attachable"

Here is the output of kubectl describe pod nginx-dummy-attachable:

Name:               nginx-dummy-attachable
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               [node id]
Start Time:         Wed, 24 Apr 2019 08:03:21 -0400
Labels:             <none>
Annotations:        kubernetes.io/limit-ranger: LimitRanger plugin set: cpu request for container nginx-dummy-attachable
Status:             Pending
IP:                 
Containers:
  nginx-dummy-attachable:
    Container ID:   
    Image:          nginx
    Image ID:       
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Requests:
      cpu:        100m
    Environment:  <none>
    Mounts:
      /data from dummy-attachable (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-hcnhj (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  dummy-attachable:
    Type:       FlexVolume (a generic volume resource that is provisioned/attached using an exec based plugin)
    Driver:     k8s/dummy-attachable
    FSType:     
    SecretRef:  nil
    ReadOnly:   false
    Options:    map[]
  default-token-hcnhj:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-hcnhj
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason       Age                From                                    Message
  ----     ------       ----               ----                                    -------
  Warning  FailedMount  41s (x6 over 11m)  kubelet, [node id]  Unable to mount volumes for pod "nginx-dummy-attachable_default([id])": timeout expired waiting for volumes to attach or mount for pod "default"/"nginx-dummy-attachable". list of unmounted volumes=[dummy-attachable]. list of unattached volumes=[dummy-attachable default-token-hcnhj]

I added debug logging to the FlexVolume, so I was able to verify that the attach method was never called on the master node. I'm not sure what I'm missing here.

I don't know if this matters, but the cluster is being launched with kops. I've tried with both Kubernetes 1.11 and 1.14, with no success.
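One thing I double-checked on both the nodes and the master: since attach runs in kube-controller-manager, the driver has to be installed on the master too, not just on the nodes. This is the check I used (it assumes the default plugin directory; kubelet and kube-controller-manager can point elsewhere via --volume-plugin-dir / --flex-volume-plugin-dir, and kops may override it):

```shell
# Default FlexVolume plugin directory; drivers live in a
# <vendor>~<driver> subdirectory and must be executable.
PLUGIN_DIR=${PLUGIN_DIR:-/usr/libexec/kubernetes/kubelet-plugins/volume/exec}
DRIVER="$PLUGIN_DIR/k8s~dummy-attachable/dummy-attachable"

if [ -x "$DRIVER" ]; then
  echo "driver present and executable: $DRIVER"
else
  echo "driver missing or not executable: $DRIVER"
fi
```

The driver is present and executable at that path on every machine, including the master, so I don't think it's an installation problem.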

0 answers