Kubernetes News

The Kubernetes project blog
  1. Authors: Jeffrey Sica (Red Hat), Amanda Katona (VMware)

    tl;dr Registration is open and the schedule is live so register now and we’ll see you in Amsterdam!

    Kubernetes Contributor Summit

    Sunday, March 29, 2020

    Monday, March 30, 2020

    Contributor Summit

    Hello everyone and Happy 2020! It’s hard to believe that KubeCon EU 2020 is less than six weeks away, and with that another contributor summit! This year we have the pleasure of being in Amsterdam in early spring, so be sure to pack some warmer clothing. This summit looks to be exciting with a lot of fantastic community-driven content. We received 26 submissions from the CFP. From that, the events team selected 12 sessions. Each of the sessions falls into one of four categories:

    • Community
    • Contributor Improvement
    • Sustainability
    • In-depth Technical

    On top of the presentations, there will be a dedicated Docs Sprint as well as the New Contributor Workshop 101 and 201 Sessions. All told, we will have five separate rooms of content throughout the day on Monday. Please see the full schedule to see what sessions you’d be interested in. We hope between the content provided and the inevitable hallway track, everyone has a fun and enriching experience.

    Speaking of fun, the social Sunday night should be a blast! We’re hosting this summit’s social close to the conference center, at ZuidPool. There will be games, bingo, and unconference sign-up throughout the evening. It should be a relaxed way to kick off the week.

    Registration is open! Space is limited so it’s always a good idea to register early.

    If you have any questions, reach out to the Amsterdam Team on Slack in the #contributor-summit channel.

    Hope to see you there!

  2. This document describes how to install a single control-plane Kubernetes v1.15 cluster with kubeadm on CentOS, and then deploy the external OpenStack cloud provider and Cinder CSI plugin to use Cinder volumes as persistent volumes in Kubernetes.

    Preparation in OpenStack

    This cluster runs on OpenStack VMs, so let’s create a few things in OpenStack first; a sketch of the matching openstack CLI commands follows the list below.

    • A project/tenant for this Kubernetes cluster
    • A user in this project for Kubernetes, used to query node information, attach volumes, etc.
    • A private network and subnet
    • A router for this private network, connected to a public network to provide floating IPs
    • A security group for all Kubernetes VMs
    • A VM as a control-plane node and a few VMs as worker nodes
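
    The exact commands depend on your cloud, but a rough sketch of this preparation with the openstack CLI looks like the following; all names, the flavor, the image, and the external network called public are placeholders for illustration.

    # Project, user, and networking for the cluster (names are examples)
    openstack project create kubernetes
    openstack user create --project kubernetes --password-prompt kubernetes-user
    openstack network create k8s-net
    openstack subnet create k8s-subnet --network k8s-net --subnet-range 192.168.1.0/24
    openstack router create k8s-router
    openstack router set k8s-router --external-gateway public
    openstack router add subnet k8s-router k8s-subnet
    # Security group shared by all Kubernetes VMs (rules are added below)
    openstack security group create kubernetes
    # One control-plane node and a few workers
    openstack server create --flavor m1.large --image centos7 \
     --network k8s-net --security-group kubernetes master1
    openstack server create --flavor m1.large --image centos7 \
     --network k8s-net --security-group kubernetes worker1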

    The security group will have the following rules to open ports for Kubernetes.

    Control-Plane Node

    Protocol Port Number Description
    TCP 6443 Kubernetes API Server
    TCP 2379-2380 etcd server client API
    TCP 10250 Kubelet API
    TCP 10251 kube-scheduler
    TCP 10252 kube-controller-manager
    TCP 10255 Read-only Kubelet API

    Worker Nodes

    Protocol Port Number Description
    TCP 10250 Kubelet API
    TCP 10255 Read-only Kubelet API
    TCP 30000-32767 NodePort Services

    CNI ports on both control-plane and worker nodes

    Protocol Port Number Description
    TCP 179 Calico BGP network
    TCP 9099 Calico felix (health check)
    UDP 8285 Flannel
    UDP 8472 Flannel
    TCP 6781-6784 Weave Net
    UDP 6783-6784 Weave Net

    CNI-specific ports only need to be opened when that particular CNI plugin is used. In this guide we use Weave Net, so only the Weave Net ports (TCP 6781-6784 and UDP 6783-6784) need to be opened in the security group.
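
    As a sketch, the required rules could be added to the security group with the openstack CLI like this (the group name kubernetes matches the sketch above; adjust the ports to the CNI you actually use):

    # Control-plane node ports
    for port in 6443 10250 10251 10252 10255; do
     openstack security group rule create kubernetes --protocol tcp --dst-port $port
    done
    openstack security group rule create kubernetes --protocol tcp --dst-port 2379:2380
    # Worker node ports
    openstack security group rule create kubernetes --protocol tcp --dst-port 30000:32767
    # Weave Net ports
    openstack security group rule create kubernetes --protocol tcp --dst-port 6781:6784
    openstack security group rule create kubernetes --protocol udp --dst-port 6783:6784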

    The control-plane node needs at least 2 cores and 4GB RAM. After the VM is launched, verify its hostname and make sure it is the same as the node name in Nova. If the hostname is not resolvable, add it to /etc/hosts.

    For example, if the VM is called master1 and has the internal IP 192.168.1.4, add that to /etc/hosts and set the hostname to master1.

    echo "192.168.1.4 master1" >> /etc/hosts
    hostnamectl set-hostname master1

    Install Docker and Kubernetes

    Next, we’ll follow the official documentation to install Docker and Kubernetes using kubeadm.

    Install Docker following the steps from the container runtime documentation.

    Note that it is a best practice to use systemd as the cgroup driver for Kubernetes. If you use an internal container registry, add it to the Docker daemon configuration.

    # Install Docker CE
    ## Set up the repository
    ### Install required packages.
    yum install yum-utils device-mapper-persistent-data lvm2
    ### Add Docker repository.
    yum-config-manager \
     --add-repo \
     https://download.docker.com/linux/centos/docker-ce.repo
    ## Install Docker CE.
    yum update && yum install docker-ce-18.06.2.ce
    ## Create /etc/docker directory.
    mkdir /etc/docker
    # Configure the Docker daemon
    cat > /etc/docker/daemon.json <<EOF
    {
     "exec-opts": ["native.cgroupdriver=systemd"],
     "log-driver": "json-file",
     "log-opts": {
     "max-size": "100m"
     },
     "storage-driver": "overlay2",
     "storage-opts": [
     "overlay2.override_kernel_check=true"
     ]
    }
    EOF
    mkdir -p /etc/systemd/system/docker.service.d
    # Restart Docker
    systemctl daemon-reload
    systemctl restart docker
    systemctl enable docker

    Install kubeadm following the steps from the Installing Kubeadm documentation.

    cat <<EOF > /etc/yum.repos.d/kubernetes.repo
    [kubernetes]
    name=Kubernetes
    baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
    enabled=1
    gpgcheck=1
    repo_gpgcheck=1
    gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
    EOF
    # Set SELinux in permissive mode (effectively disabling it)
    # Caveat: In a production environment you may not want to disable SELinux, please refer to Kubernetes documents about SELinux
    setenforce 0
    sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
    yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
    systemctl enable --now kubelet
    cat <<EOF > /etc/sysctl.d/k8s.conf
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    EOF
    sysctl --system
    # check if br_netfilter module is loaded
    lsmod | grep br_netfilter
    # if not, load it explicitly with
    modprobe br_netfilter

    The official document about how to create a single control-plane cluster can be found in the Creating a single control-plane cluster with kubeadm documentation.

    We’ll largely follow that document, but also add a few extra things for the cloud provider. To make things clearer, we’ll use a kubeadm-config.yml for the control-plane node. This config specifies the external OpenStack cloud provider and where to find its configuration. We also enable the storage API in the API server’s runtime config so we can use OpenStack volumes as persistent volumes in Kubernetes.

    apiVersion: kubeadm.k8s.io/v1beta1
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        cloud-provider: "external"
    ---
    apiVersion: kubeadm.k8s.io/v1beta2
    kind: ClusterConfiguration
    kubernetesVersion: "v1.15.1"
    apiServer:
      extraArgs:
        enable-admission-plugins: NodeRestriction
        runtime-config: "storage.k8s.io/v1=true"
    controllerManager:
      extraArgs:
        external-cloud-volume-plugin: openstack
      extraVolumes:
      - name: "cloud-config"
        hostPath: "/etc/kubernetes/cloud-config"
        mountPath: "/etc/kubernetes/cloud-config"
        readOnly: true
        pathType: File
    networking:
      serviceSubnet: "10.96.0.0/12"
      podSubnet: "10.224.0.0/16"
      dnsDomain: "cluster.local"

    Now we’ll create the cloud config, /etc/kubernetes/cloud-config, for OpenStack. Note that the tenant here is the one we created for all Kubernetes VMs at the beginning; all VMs should be launched in this project/tenant. In addition, you need to create a user in this tenant for Kubernetes to use for its queries. The ca-file is the CA root certificate for OpenStack’s API endpoint, for example https://openstack.cloud:5000/v3. At the time of writing, the cloud provider doesn’t allow insecure connections (skipping the CA check).

    [Global]
    region=RegionOne
    username=username
    password=password
    auth-url=https://openstack.cloud:5000/v3
    tenant-id=14ba698c0aec4fd6b7dc8c310f664009
    domain-id=default
    ca-file=/etc/kubernetes/ca.pem
    [LoadBalancer]
    subnet-id=b4a9a292-ea48-4125-9fb2-8be2628cb7a1
    floating-network-id=bc8a590a-5d65-4525-98f3-f7ef29c727d5
    [BlockStorage]
    bs-version=v2
    [Networking]
    public-network-name=public
    ipv6-support-disabled=false

    Next, run kubeadm to initialize the control-plane node:

    kubeadm init --config=kubeadm-config.yml

    With the initialization completed, copy the admin config into $HOME/.kube:

    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config

    At this stage, the control-plane node is created but not ready. All the nodes have the taint node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule and are waiting to be initialized by the cloud-controller-manager.

    # kubectl describe no master1
    Name: master1
    Roles: master
    ......
    Taints: node-role.kubernetes.io/master:NoSchedule
    node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule
    node.kubernetes.io/not-ready:NoSchedule
    ......

    Now deploy the OpenStack cloud controller manager into the cluster, following the instructions in using controller manager with kubeadm.

    Create a secret with the cloud-config for the OpenStack cloud provider.

    kubectl create secret -n kube-system generic cloud-config --from-literal=cloud.conf="$(cat /etc/kubernetes/cloud-config)" --dry-run -o yaml > cloud-config-secret.yaml
    kubectl apply -f cloud-config-secret.yaml

    Get the CA certificate for the OpenStack API endpoints and put it into /etc/kubernetes/ca.pem.
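
    How you obtain that certificate depends on your cloud. If your operator provides a CA bundle, simply copy it into place; the source path below is only an example, and the curl call is a quick sanity check that the Keystone endpoint verifies against it.

    # Copy the CA bundle provided by your OpenStack operator (example path)
    scp user@bastion:/etc/ssl/certs/openstack-ca.pem /etc/kubernetes/ca.pem
    # Sanity check: Keystone should be reachable with this CA
    curl --cacert /etc/kubernetes/ca.pem https://openstack.cloud:5000/v3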

    Create RBAC resources.

    kubectl apply -f https://github.com/kubernetes/cloud-provider-openstack/raw/release-1.15/cluster/addons/rbac/cloud-controller-manager-roles.yaml
    kubectl apply -f https://github.com/kubernetes/cloud-provider-openstack/raw/release-1.15/cluster/addons/rbac/cloud-controller-manager-role-bindings.yaml

    We’ll run the OpenStack cloud controller manager as a DaemonSet rather than a plain pod. The manager runs only on control-plane nodes, so if there are multiple control-plane nodes, multiple pods will run for high availability. Create openstack-cloud-controller-manager-ds.yaml containing the following manifests, then apply it.

    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: cloud-controller-manager
      namespace: kube-system
    ---
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: openstack-cloud-controller-manager
      namespace: kube-system
      labels:
        k8s-app: openstack-cloud-controller-manager
    spec:
      selector:
        matchLabels:
          k8s-app: openstack-cloud-controller-manager
      updateStrategy:
        type: RollingUpdate
      template:
        metadata:
          labels:
            k8s-app: openstack-cloud-controller-manager
        spec:
          nodeSelector:
            node-role.kubernetes.io/master: ""
          securityContext:
            runAsUser: 1001
          tolerations:
          - key: node.cloudprovider.kubernetes.io/uninitialized
            value: "true"
            effect: NoSchedule
          - key: node-role.kubernetes.io/master
            effect: NoSchedule
          - effect: NoSchedule
            key: node.kubernetes.io/not-ready
          serviceAccountName: cloud-controller-manager
          containers:
          - name: openstack-cloud-controller-manager
            image: docker.io/k8scloudprovider/openstack-cloud-controller-manager:v1.15.0
            args:
            - /bin/openstack-cloud-controller-manager
            - --v=1
            - --cloud-config=$(CLOUD_CONFIG)
            - --cloud-provider=openstack
            - --use-service-account-credentials=true
            - --address=127.0.0.1
            volumeMounts:
            - mountPath: /etc/kubernetes/pki
              name: k8s-certs
              readOnly: true
            - mountPath: /etc/ssl/certs
              name: ca-certs
              readOnly: true
            - mountPath: /etc/config
              name: cloud-config-volume
              readOnly: true
            - mountPath: /usr/libexec/kubernetes/kubelet-plugins/volume/exec
              name: flexvolume-dir
            - mountPath: /etc/kubernetes
              name: ca-cert
              readOnly: true
            resources:
              requests:
                cpu: 200m
            env:
            - name: CLOUD_CONFIG
              value: /etc/config/cloud.conf
          hostNetwork: true
          volumes:
          - hostPath:
              path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec
              type: DirectoryOrCreate
            name: flexvolume-dir
          - hostPath:
              path: /etc/kubernetes/pki
              type: DirectoryOrCreate
            name: k8s-certs
          - hostPath:
              path: /etc/ssl/certs
              type: DirectoryOrCreate
            name: ca-certs
          - name: cloud-config-volume
            secret:
              secretName: cloud-config
          - name: ca-cert
            secret:
              secretName: openstack-ca-cert

    When the controller manager is running, it will query OpenStack to get information about the nodes and remove the taint. In the node info you’ll see the VM’s UUID in OpenStack.

    # kubectl describe no master1
    Name: master1
    Roles: master
    ......
    Taints: node-role.kubernetes.io/master:NoSchedule
    node.kubernetes.io/not-ready:NoSchedule
    ......
    message: docker: network plugin is not ready: cni config uninitialized
    ......
    PodCIDR: 10.224.0.0/24
    ProviderID: openstack:///548e3c46-2477-4ce2-968b-3de1314560a5

    Now install your favourite CNI and the control-plane node will become ready.

    For example, to install Weave Net, run this command:

    kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"

    Next we’ll set up worker nodes.

    First, install Docker and kubeadm in the same way as on the control-plane node. To join the workers to the cluster we need a token and the CA cert hash from the output of the control-plane node installation. If the token has expired or been lost, we can recreate it using these commands.

    # check if token is expired
    kubeadm token list
    # re-create token and show join command
    kubeadm token create --print-join-command

    Create kubeadm-config.yml for the worker nodes with the above token and CA cert hash.

    apiVersion: kubeadm.k8s.io/v1beta2
    kind: JoinConfiguration
    discovery:
      bootstrapToken:
        apiServerEndpoint: 192.168.1.7:6443
        token: 0c0z4p.dnafh6vnmouus569
        caCertHashes: ["sha256:fcb3e956a6880c05fc9d09714424b827f57a6fdc8afc44497180905946527adf"]
    nodeRegistration:
      kubeletExtraArgs:
        cloud-provider: "external"

    apiServerEndpoint is the control-plane node; token and caCertHashes can be taken from the join command printed in the output of the ‘kubeadm token create --print-join-command’ command.
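
    As described in the kubeadm reference documentation, the caCertHashes value can also be computed directly from the cluster CA certificate on the control-plane node:

    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
     | openssl rsa -pubin -outform der 2>/dev/null \
     | openssl dgst -sha256 -hex | sed 's/^.* //'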

    Run kubeadm and the worker nodes will be joined to the cluster.

    kubeadm join --config kubeadm-config.yml

    At this stage we’ll have a working Kubernetes cluster with an external OpenStack cloud provider. The provider tells Kubernetes about the mapping between Kubernetes nodes and OpenStack VMs. If Kubernetes wants to attach a persistent volume to a pod, it can find out from the mapping which OpenStack VM the pod is running on, and attach the underlying OpenStack volume to that VM accordingly.

    Deploy Cinder CSI

    The integration with Cinder is provided by an external Cinder CSI plugin, as described in the Cinder CSI documentation.

    We’ll perform the following steps to install the Cinder CSI plugin. First, create a secret with the CA certificate for OpenStack’s API endpoints. This is the same certificate file we used for the cloud provider above.

    kubectl create secret -n kube-system generic openstack-ca-cert --from-literal=ca.pem="$(cat /etc/kubernetes/ca.pem)" --dry-run -o yaml > openstack-ca-cert.yaml
    kubectl apply -f openstack-ca-cert.yaml

    Then create RBAC resources.

    kubectl apply -f https://raw.githubusercontent.com/kubernetes/cloud-provider-openstack/release-1.15/manifests/cinder-csi-plugin/cinder-csi-controllerplugin-rbac.yaml
    kubectl apply -f https://github.com/kubernetes/cloud-provider-openstack/raw/release-1.15/manifests/cinder-csi-plugin/cinder-csi-nodeplugin-rbac.yaml

    The Cinder CSI plugin includes a controller plugin and a node plugin. The controller plugin communicates with the Kubernetes and Cinder APIs to create, attach, detach, and delete Cinder volumes. The node plugin in turn runs on each worker node to bind a storage device (an attached volume) to a pod, and unbind it during deletion. Create cinder-csi-controllerplugin.yaml and apply it to create the CSI controller.

    kind: Service
    apiVersion: v1
    metadata:
      name: csi-cinder-controller-service
      namespace: kube-system
      labels:
        app: csi-cinder-controllerplugin
    spec:
      selector:
        app: csi-cinder-controllerplugin
      ports:
      - name: dummy
        port: 12345

    ---
    kind: StatefulSet
    apiVersion: apps/v1
    metadata:
      name: csi-cinder-controllerplugin
      namespace: kube-system
    spec:
      serviceName: "csi-cinder-controller-service"
      replicas: 1
      selector:
        matchLabels:
          app: csi-cinder-controllerplugin
      template:
        metadata:
          labels:
            app: csi-cinder-controllerplugin
        spec:
          serviceAccount: csi-cinder-controller-sa
          containers:
          - name: csi-attacher
            image: quay.io/k8scsi/csi-attacher:v1.0.1
            args:
            - "--v=5"
            - "--csi-address=$(ADDRESS)"
            env:
            - name: ADDRESS
              value: /var/lib/csi/sockets/pluginproxy/csi.sock
            imagePullPolicy: "IfNotPresent"
            volumeMounts:
            - name: socket-dir
              mountPath: /var/lib/csi/sockets/pluginproxy/
          - name: csi-provisioner
            image: quay.io/k8scsi/csi-provisioner:v1.0.1
            args:
            - "--provisioner=csi-cinderplugin"
            - "--csi-address=$(ADDRESS)"
            env:
            - name: ADDRESS
              value: /var/lib/csi/sockets/pluginproxy/csi.sock
            imagePullPolicy: "IfNotPresent"
            volumeMounts:
            - name: socket-dir
              mountPath: /var/lib/csi/sockets/pluginproxy/
          - name: csi-snapshotter
            image: quay.io/k8scsi/csi-snapshotter:v1.0.1
            args:
            - "--connection-timeout=15s"
            - "--csi-address=$(ADDRESS)"
            env:
            - name: ADDRESS
              value: /var/lib/csi/sockets/pluginproxy/csi.sock
            imagePullPolicy: Always
            volumeMounts:
            - mountPath: /var/lib/csi/sockets/pluginproxy/
              name: socket-dir
          - name: cinder-csi-plugin
            image: docker.io/k8scloudprovider/cinder-csi-plugin:v1.15.0
            args:
            - /bin/cinder-csi-plugin
            - "--v=5"
            - "--nodeid=$(NODE_ID)"
            - "--endpoint=$(CSI_ENDPOINT)"
            - "--cloud-config=$(CLOUD_CONFIG)"
            - "--cluster=$(CLUSTER_NAME)"
            env:
            - name: NODE_ID
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: CSI_ENDPOINT
              value: unix://csi/csi.sock
            - name: CLOUD_CONFIG
              value: /etc/config/cloud.conf
            - name: CLUSTER_NAME
              value: kubernetes
            imagePullPolicy: "IfNotPresent"
            volumeMounts:
            - name: socket-dir
              mountPath: /csi
            - name: secret-cinderplugin
              mountPath: /etc/config
              readOnly: true
            - mountPath: /etc/kubernetes
              name: ca-cert
              readOnly: true
          volumes:
          - name: socket-dir
            hostPath:
              path: /var/lib/csi/sockets/pluginproxy/
              type: DirectoryOrCreate
          - name: secret-cinderplugin
            secret:
              secretName: cloud-config
          - name: ca-cert
            secret:
              secretName: openstack-ca-cert

    Create cinder-csi-nodeplugin.yaml and apply it to create the CSI node plugin.

    kind: DaemonSet
    apiVersion: apps/v1
    metadata:
      name: csi-cinder-nodeplugin
      namespace: kube-system
    spec:
      selector:
        matchLabels:
          app: csi-cinder-nodeplugin
      template:
        metadata:
          labels:
            app: csi-cinder-nodeplugin
        spec:
          serviceAccount: csi-cinder-node-sa
          hostNetwork: true
          containers:
          - name: node-driver-registrar
            image: quay.io/k8scsi/csi-node-driver-registrar:v1.1.0
            args:
            - "--v=5"
            - "--csi-address=$(ADDRESS)"
            - "--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)"
            lifecycle:
              preStop:
                exec:
                  command: ["/bin/sh", "-c", "rm -rf /registration/cinder.csi.openstack.org /registration/cinder.csi.openstack.org-reg.sock"]
            env:
            - name: ADDRESS
              value: /csi/csi.sock
            - name: DRIVER_REG_SOCK_PATH
              value: /var/lib/kubelet/plugins/cinder.csi.openstack.org/csi.sock
            - name: KUBE_NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            imagePullPolicy: "IfNotPresent"
            volumeMounts:
            - name: socket-dir
              mountPath: /csi
            - name: registration-dir
              mountPath: /registration
          - name: cinder-csi-plugin
            securityContext:
              privileged: true
              capabilities:
                add: ["SYS_ADMIN"]
              allowPrivilegeEscalation: true
            image: docker.io/k8scloudprovider/cinder-csi-plugin:v1.15.0
            args:
            - /bin/cinder-csi-plugin
            - "--nodeid=$(NODE_ID)"
            - "--endpoint=$(CSI_ENDPOINT)"
            - "--cloud-config=$(CLOUD_CONFIG)"
            env:
            - name: NODE_ID
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: CSI_ENDPOINT
              value: unix://csi/csi.sock
            - name: CLOUD_CONFIG
              value: /etc/config/cloud.conf
            imagePullPolicy: "IfNotPresent"
            volumeMounts:
            - name: socket-dir
              mountPath: /csi
            - name: pods-mount-dir
              mountPath: /var/lib/kubelet/pods
              mountPropagation: "Bidirectional"
            - name: kubelet-dir
              mountPath: /var/lib/kubelet
              mountPropagation: "Bidirectional"
            - name: pods-cloud-data
              mountPath: /var/lib/cloud/data
              readOnly: true
            - name: pods-probe-dir
              mountPath: /dev
              mountPropagation: "HostToContainer"
            - name: secret-cinderplugin
              mountPath: /etc/config
              readOnly: true
            - mountPath: /etc/kubernetes
              name: ca-cert
              readOnly: true
          volumes:
          - name: socket-dir
            hostPath:
              path: /var/lib/kubelet/plugins/cinder.csi.openstack.org
              type: DirectoryOrCreate
          - name: registration-dir
            hostPath:
              path: /var/lib/kubelet/plugins_registry/
              type: Directory
          - name: kubelet-dir
            hostPath:
              path: /var/lib/kubelet
              type: Directory
          - name: pods-mount-dir
            hostPath:
              path: /var/lib/kubelet/pods
              type: Directory
          - name: pods-cloud-data
            hostPath:
              path: /var/lib/cloud/data
              type: Directory
          - name: pods-probe-dir
            hostPath:
              path: /dev
              type: Directory
          - name: secret-cinderplugin
            secret:
              secretName: cloud-config
          - name: ca-cert
            secret:
              secretName: openstack-ca-cert

    When they are both running, create a storage class for Cinder.
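
    To confirm that both plugins are up before creating the class, you can list their pods using the labels from the manifests above:

    kubectl get pods -n kube-system -l app=csi-cinder-controllerplugin
    kubectl get pods -n kube-system -l app=csi-cinder-nodeplugin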

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: csi-sc-cinderplugin
    provisioner: csi-cinderplugin

    Then we can create a PVC with this class.

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: myvol
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
      storageClassName: csi-sc-cinderplugin

    When the PVC is created, a Cinder volume is created correspondingly.

    # kubectl get pvc
    NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
    myvol Bound pvc-14b8bc68-6c4c-4dc6-ad79-4cb29a81faad 1Gi RWO csi-sc-cinderplugin 3s

    In OpenStack the volume name will match the Kubernetes persistent volume generated name. In this example it would be: pvc-14b8bc68-6c4c-4dc6-ad79-4cb29a81faad
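
    To cross-check, read the generated volume name from the PVC and look it up in Cinder:

    kubectl get pvc myvol -o jsonpath='{.spec.volumeName}{"\n"}'
    openstack volume show pvc-14b8bc68-6c4c-4dc6-ad79-4cb29a81faad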

    Now we can create a pod with the PVC.

    apiVersion: v1
    kind: Pod
    metadata:
      name: web
    spec:
      containers:
      - name: web
        image: nginx
        ports:
        - name: web
          containerPort: 80
          hostPort: 8081
          protocol: TCP
        volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: mypd
      volumes:
      - name: mypd
        persistentVolumeClaim:
          claimName: myvol

    When the pod is running, the volume will be attached to the pod. If we go back to OpenStack, we can see that the Cinder volume is attached to the worker node where the pod is running.

    # openstack volume show 6b5f3296-b0eb-40cd-bd4f-2067a0d6287f
    +--------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
    | Field | Value |
    +--------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
    | attachments | [{u'server_id': u'1c5e1439-edfa-40ed-91fe-2a0e12bc7eb4', u'attachment_id': u'11a15b30-5c24-41d4-86d9-d92823983a32', u'attached_at': u'2019-07-24T05:02:34.000000', u'host_name': u'compute-6', u'volume_id': u'6b5f3296-b0eb-40cd-bd4f-2067a0d6287f', u'device': u'/dev/vdb', u'id': u'6b5f3296-b0eb-40cd-bd4f-2067a0d6287f'}] |
    | availability_zone | nova |
    | bootable | false |
    | consistencygroup_id | None |
    | created_at | 2019-07-24T05:02:18.000000 |
    | description | Created by OpenStack Cinder CSI driver |
    | encrypted | False |
    | id | 6b5f3296-b0eb-40cd-bd4f-2067a0d6287f |
    | migration_status | None |
    | multiattach | False |
    | name | pvc-14b8bc68-6c4c-4dc6-ad79-4cb29a81faad |
    | os-vol-host-attr:host | rbd:volumes@rbd#rbd |
    | os-vol-mig-status-attr:migstat | None |
    | os-vol-mig-status-attr:name_id | None |
    | os-vol-tenant-attr:tenant_id | 14ba698c0aec4fd6b7dc8c310f664009 |
    | properties | attached_mode='rw', cinder.csi.openstack.org/cluster='kubernetes' |
    | replication_status | None |
    | size | 1 |
    | snapshot_id | None |
    | source_volid | None |
    | status | in-use |
    | type | rbd |
    | updated_at | 2019-07-24T05:02:35.000000 |
    | user_id | 5f6a7a06f4e3456c890130d56babf591 |
    +--------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+

    Summary

    In this walk-through, we deployed a Kubernetes cluster on OpenStack VMs and integrated it with OpenStack using an external OpenStack cloud provider. Then, on this Kubernetes cluster, we deployed the Cinder CSI plugin, which creates Cinder volumes and exposes them in Kubernetes as persistent volumes.

  3. Author: Eugenio Marzo (Sourcesense)

    Some months ago, I released my latest project called KubeInvaders. The first time I shared it with the community was during an Openshift Commons Briefing session. KubeInvaders is a Gamified Chaos Engineering tool for Kubernetes and Openshift, and it helps test how resilient your Kubernetes cluster is, in a fun way.

    It is like Space Invaders, but the aliens are pods.

    During my presentation at Codemotion Milan 2019, I started saying “of course you can do it with a few lines of Bash, but it is boring.”
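
    The snippet itself isn’t reproduced here, but the idea is a loop along these lines (a minimal sketch, not the exact code from the talk; the namespace and the sleep interval are placeholders):

    #!/usr/bin/env bash
    # Delete one random pod in the target namespace, forever
    namespace="namespace1"
    while true; do
     pod=$(kubectl get pods -n "$namespace" -o name | shuf -n 1)
     [ -n "$pod" ] && kubectl delete -n "$namespace" "$pod"
     sleep 5
    done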

    Using the code above you can kill random pods across a Kubernetes cluster, but I think it is much more fun with the spaceship of KubeInvaders.

    I published the code at https://github.com/lucky-sideburn/KubeInvaders and there is a little community that is growing gradually. Some people love to use it for demo sessions killing pods on a big screen.

    How to install KubeInvaders

    I defined multiple ways to install it:

    1. Helm Chart https://github.com/lucky-sideburn/KubeInvaders/tree/master/helm-charts/kubeinvaders

    2. Manual Installation for Openshift using a template https://github.com/lucky-sideburn/KubeInvaders#install-kubeinvaders-on-openshift

    3. Manual Installation for Kubernetes https://github.com/lucky-sideburn/KubeInvaders#install-kubeinvaders-on-kubernetes

    The preferred way, of course, is with a Helm chart:

    # Please set target_namespace to set your target namespace!
    helm install --set-string target_namespace="namespace1,namespace2" \
     --name kubeinvaders --namespace kubeinvaders ./helm-charts/kubeinvaders
    

    How to use KubeInvaders

    Once it is installed on your cluster you can use the following functionalities:

    • Key ‘a’ — Switch to automatic pilot
    • Key ’m’ — Switch to manual pilot
    • Key ‘i’ — Show pod’s name. Move the ship towards an alien
    • Key ‘h’ — Print help
    • Key ‘n’ — Jump between different namespaces (my favorite feature!)

    Tuning KubeInvaders

    At Codemotion Milan 2019, my colleagues and I organized a desk with a game station for playing KubeInvaders. People had to fight with Kubernetes to win a t-shirt.

    If you have pods that require a few seconds to start, you may lose. It is possible to set the complexity of the game with these parameters, set as environment variables in the Kubernetes deployment:

    • ALIENPROXIMITY — Reduce this value to increase the distance between aliens;
    • HITSLIMIT — Seconds of CPU time to wait before shooting;
    • UPDATETIME — Seconds to wait before updating the pod status (fractional values such as 0.5 also work);

    The result is a harder game experience against the machine.
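
    For example, assuming the Helm chart above created a deployment called kubeinvaders in the kubeinvaders namespace (check the actual name in your installation), the parameters can be changed with kubectl; the values here are only illustrative:

    kubectl set env deployment/kubeinvaders -n kubeinvaders \
     ALIENPROXIMITY=10 HITSLIMIT=1 UPDATETIME=0.5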

    Use cases

    Adopting chaos engineering strategies for your production environment is really useful, because it is the only way to test whether a system can withstand unexpected destructive events.

    KubeInvaders is a game — so please do not take it too seriously! — but it demonstrates some important use cases:

    • Test how resilient Kubernetes clusters are to unexpected pod deletion
    • Collect metrics like pod restart time
    • Tune readiness probes

    Next steps

    I want to continue to add some cool features and integrate it into a Kubernetes dashboard, because I am planning to transform it into a “Gamified Chaos Engineering and Development Tool for Kubernetes” to help developers interact with deployments in a Kubernetes environment. For example:

    • Point to the aliens to get pod logs
    • Deploy Helm charts by shooting some particular objects
    • Read messages stored in a specific label present in a deployment

    Please feel free to contribute to https://github.com/lucky-sideburn/KubeInvaders and stay updated following #kubeinvaders news on Twitter.

  4. Author: Patrick Ohly (Intel)

    Typically, volumes provided by an external storage driver in Kubernetes are persistent, with a lifecycle that is completely independent of pods or (as a special case) loosely coupled to the first pod which uses a volume (late binding mode). The mechanism for requesting and defining such volumes in Kubernetes are Persistent Volume Claim (PVC) and Persistent Volume (PV) objects. Originally, volumes that are backed by a Container Storage Interface (CSI) driver could only be used via this PVC/PV mechanism.

    But there are also use cases for data volumes whose content and lifecycle is tied to a pod. For example, a driver might populate a volume with dynamically created secrets that are specific to the application running in the pod. Such volumes need to be created together with a pod and can be deleted as part of pod termination (ephemeral). They get defined as part of the pod spec (inline).

    Since Kubernetes 1.15, CSI drivers can also be used for such ephemeral inline volumes. The CSIInlineVolume feature gate had to be set to enable it in 1.15 because support was still in alpha state. In 1.16, the feature reached beta state, which typically means that it is enabled in clusters by default.

    CSI drivers have to be adapted to support this because although two existing CSI gRPC calls are used (NodePublishVolume and NodeUnpublishVolume), the way they are used is different and not covered by the CSI spec: for ephemeral volumes, only NodePublishVolume is invoked by kubelet when asking the CSI driver for a volume. All other calls (like CreateVolume, NodeStageVolume, etc.) are skipped. The volume parameters are provided in the pod spec and from there copied into the NodePublishVolumeRequest.volume_context field. There are currently no standardized parameters; even common ones like size must be provided in a format that is defined by the CSI driver. Likewise, only NodeUnpublishVolume gets called after the pod has terminated and the volume needs to be removed.

    Initially, the assumption was that CSI drivers would be specifically written to provide either persistent or ephemeral volumes. But there are also drivers which provide storage that is useful in both modes: for example, PMEM-CSI manages persistent memory (PMEM), a new kind of local storage that is provided by Intel® Optane™ DC Persistent Memory. Such memory is useful both as persistent data storage (faster than normal SSDs) and as ephemeral scratch space (higher capacity than DRAM).

    Therefore the support in Kubernetes 1.16 was extended:

    • Kubernetes and users can determine which kind of volumes a driver supports via the volumeLifecycleModes field in the CSIDriver object.
    • Drivers can get information about the volume mode by enabling the “pod info on mount” feature, which then adds the new csi.storage.k8s.io/ephemeral entry to the NodePublishRequest.volume_context.
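
    For example, once a driver is deployed, you can check what it declares; here using the PMEM-CSI driver name from the example below (fields as defined in the storage.k8s.io/v1beta1 CSIDriver API):

    kubectl get csidriver pmem-csi.intel.com \
     -o jsonpath='{.spec.volumeLifecycleModes}{"\n"}{.spec.podInfoOnMount}{"\n"}'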

    For more information about implementing support of ephemeral inline volumes in a CSI driver, see the Kubernetes-CSI documentation and the original design document.

    What follows in this blog post are usage examples based on real drivers and a summary at the end.

    Examples

    PMEM-CSI

    Support for ephemeral inline volumes was added in release v0.6.0. The driver can be used on hosts with real Intel® Optane™ DC Persistent Memory, on special machines in GCE or with hardware emulated by QEMU. The latter is fully integrated into the makefile and only needs Go, Docker and KVM, so that approach was used for this example:

    git clone --branch release-0.6 https://github.com/intel/pmem-csi
    cd pmem-csi
    TEST_DISTRO=clear TEST_DISTRO_VERSION=32080 TEST_PMEM_REGISTRY=intel make start

    Bringing up the four-node cluster can take a while but eventually should end with:

    The test cluster is ready. Log in with /work/pmem-csi/_work/pmem-govm/ssh-pmem-govm, run kubectl once logged in.
    Alternatively, KUBECONFIG=/work/pmem-csi/_work/pmem-govm/kube.config can also be used directly.
    To try out the pmem-csi driver persistent volumes:
    ...
    To try out the pmem-csi driver ephemeral volumes:
    cat deploy/kubernetes-1.17/pmem-app-ephemeral.yaml | /work/pmem-csi/_work/pmem-govm/ssh-pmem-govm kubectl create -f -
    

    deploy/kubernetes-1.17/pmem-app-ephemeral.yaml specifies one volume:

    kind: Pod
    apiVersion: v1
    metadata:
      name: my-csi-app-inline-volume
    spec:
      containers:
      - name: my-frontend
        image: busybox
        command: [ "sleep", "100000" ]
        volumeMounts:
        - mountPath: "/data"
          name: my-csi-volume
      volumes:
      - name: my-csi-volume
        csi:
          driver: pmem-csi.intel.com
          fsType: "xfs"
          volumeAttributes:
            size: "2Gi"
            nsmode: "fsdax"

    Once we have created that pod, we can inspect the result:

    kubectl describe pods/my-csi-app-inline-volume
    Name: my-csi-app-inline-volume
    ...
    Volumes:
    my-csi-volume:
    Type: CSI (a Container Storage Interface (CSI) volume source)
    Driver: pmem-csi.intel.com
    FSType: xfs
    ReadOnly: false
    VolumeAttributes: nsmode=fsdax
    size=2Gi
    
    kubectl exec my-csi-app-inline-volume -- df -h /data
    Filesystem Size Used Available Use% Mounted on
    /dev/ndbus0region0fsdax/d7eb073f2ab1937b88531fce28e19aa385e93696
    1.9G 34.2M 1.8G 2% /data
    

    Image Populator

    The image populator automatically unpacks a container image and makes its content available as an ephemeral volume. It’s still in development, but canary images are already available which can be installed with:

    kubectl create -f https://github.com/kubernetes-csi/csi-driver-image-populator/raw/master/deploy/kubernetes-1.16/csi-image-csidriverinfo.yaml
    kubectl create -f https://github.com/kubernetes-csi/csi-driver-image-populator/raw/master/deploy/kubernetes-1.16/csi-image-daemonset.yaml

    This example pod will run nginx and have it serve data that comes from the kfox1111/misc:test image:

    kubectl create -f - <<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.13-alpine
        ports:
        - containerPort: 80
        volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
      volumes:
      - name: data
        csi:
          driver: image.csi.k8s.io
          volumeAttributes:
            image: kfox1111/misc:test
    EOF
    kubectl exec nginx -- cat /usr/share/nginx/html/test

    That test file just contains a single word:

    testing
    

    Such data containers can be built with Dockerfiles such as:

    FROM scratch
    COPY index.html /index.html
    

    cert-manager-csi

    cert-manager-csi works together with cert-manager. The goal for this driver is to facilitate requesting and mounting certificate key pairs to pods seamlessly. This is useful for facilitating mTLS, or otherwise securing connections of pods with guaranteed present certificates whilst having all of the features that cert-manager provides. This project is experimental.

    Next steps

    One of the issues with ephemeral inline volumes is that pods get scheduled by Kubernetes onto nodes without knowing anything about the currently available storage on that node. Once the pod has been scheduled, the CSI driver must make the volume available on that node. If that is not currently possible, the pod cannot start; this will be retried until the volume eventually becomes ready. The storage capacity tracking KEP is an attempt to address this problem.

    A related KEP introduces a standardized size parameter.

    Currently, CSI ephemeral inline volumes stay in beta while issues like these are getting discussed. Your feedback is needed to decide how to proceed with this feature. For the KEPs, the two PRs linked to above are a good place to comment. SIG Storage also meets regularly and can be reached via Slack and a mailing list.

  5. Author: Zach Corleissen (Cloud Native Computing Foundation)

    Hi, folks! I’m one of the co-chairs for the Kubernetes documentation special interest group (SIG Docs). This blog post is a review of SIG Docs in 2019. Our contributors did amazing work last year, and I want to highlight their successes.

    Although I review 2019 in this post, my goal is to point forward to 2020. I observe some trends in SIG Docs: some good, others troubling. I want to raise visibility before those challenges increase in severity.

    The good

    There was much to celebrate in SIG Docs in 2019.

    Kubernetes docs started the year with three localizations in progress and finished it with ten localizations available, four of which (Chinese, French, Japanese, Korean) are reasonably complete. The Korean and French teams deserve special mention for their contributions to git best practices across all localizations (Korean team) and for helping bootstrap other localizations (French team).

    Despite significant transition over the year, SIG Docs improved its review velocity, with a median review time from PR open to merge of just over 24 hours.

    Issue triage improved significantly in both volume and speed, largely due to the efforts of GitHub users @sftim, @tengqm, and @kbhawkey.

    Doc sprints remain valuable at KubeCon contributor days, introducing new contributors to Kubernetes documentation.

    The docs component of Kubernetes quarterly releases improved over 2019, thanks to iterative playbook improvements from release leads and their teams.

    Site traffic increased over the year. The website ended the year with ~6 million page views per month in December, up from ~5M page views in January. The kubernetes.io website had 851k site visitors in October, a new all-time high. Reader satisfaction remains generally positive.

    We onboarded a new SIG chair: @jimangel, a Cloud Architect at General Motors. Jim was a docs contributor for a year, during which he led the 1.14 docs release, before stepping up as chair.

    The not so good

    While reader satisfaction is decent, most respondents indicated dissatisfaction with stale content in every area: concepts, tasks, tutorials, and reference. Additionally, readers requested more diagrams, advanced conceptual content, and code samples—things that technical writers excel at providing.

    SIG Docs continues to solve how best to handle third-party content. There’s too much vendor content on kubernetes.io, and guidelines for adding or rejecting third-party content remain unclear. The discussion so far has been powerful, including pushback demanding greater collaborative input—a powerful reminder that Kubernetes is in all ways a communal effort.

    We’re in the middle of our third chair transition in 18 months. Each chair transition has been healthy and collegial, but it’s still a lot of turnover in a short time. Chairing any open source project is difficult, but especially so with SIG Docs. The SIG Docs chair role has a steep learning curve across multiple domains: docs (both written and generated from spec), information architecture, specialized contribution paths (for example, localization), how to run a release cycle, website development, CI/CD, community management, on and on. It’s a role that requires multiple people to function successfully without burning people out. Training replacements is time-intensive.

    Perhaps most pressing in the Not So Good category is that SIG Docs currently has only one technical writer dedicated full-time to Kubernetes docs. This has impacts on Kubernetes docs: some obvious, some less so.

    Impacts of understaffing on Kubernetes docs

    If Kubernetes continues through 2020 without more technical writers dedicated to the docs, here’s what I see as the most likely possibilities.

    But first, a disclaimer

    Caution: It is very hard to predict, especially the future. -Niels Bohr

    Some of my predictions are almost certainly wrong. Any errors are mine alone.

    That said…

    Effects in 2020

    Current levels of function aren’t self-sustaining. Even with a strong playbook, the release cycle still requires expert support from at least one (and usually two) chairs during every cycle. Without fail, each release breaks in new and unexpected ways, and it requires familiarity and expertise to diagnose and resolve. As chairs continue to cycle—and to be clear, regular transitions are part of a healthy project—we accrue the risks associated with a pool lacking sufficient professional depth and employer support.

    Oddly enough, one of the challenges to staffing is that the docs appear good enough. Based on site analytics and survey responses, readers are pleased with the quality of the docs. When folks visit the site, they generally find what they need and behave like satisfied visitors.

    The danger is that this will change over time: slowly with occasional losses of function, annoying at first, then increasingly critical. The more time passes without adequate staffing, the more difficult and costly fixes will become.

    I suspect this is true because the challenges we face now at decent levels of reader satisfaction are already difficult to fix. API reference generation is complex and brittle; the site’s UI is outdated; and our most consistent requests are for more tutorials, advanced concepts, diagrams, and code samples, all of which require ongoing, dedicated time to create.

    Release support remains strong.

    The release team continues a solid habit of leaving each successive team with better support than the previous release. This mostly takes the form of iterative improvements to the docs release playbook, producing better documentation and reducing siloed knowledge.

    Staleness accelerates.

    Conceptual content becomes less accurate or relevant as features change or deprecate. Tutorial content degrades for the same reason.

    The content structure will also degrade: the categories of concepts, tasks, and tutorials are legacy categories that may not best fit the needs of current readers, let alone future ones.

    Cruft accumulates for both readers and contributors. Reference docs become increasingly brittle without intervention.

    Critical knowledge vanishes.

    As I mentioned previously, SIG Docs has a wide range of functions, some with a steep learning curve. As contributors change roles or jobs, their expertise and availability will diminish or reduce to zero. Contributors with specific knowledge may not be available for consultation, exposing critical vulnerabilities in docs function. Specific examples include reference generation and chair leadership.

    That’s a lot to take in

    It’s difficult to strike a balance between the importance of SIG Docs’ work to the community and our users, the joy it brings me personally, and the fact that things can’t remain as they are without significant negative impacts (eventually). SIG Docs is by no means dying; it’s a vibrant community with active contributors doing cool things. It’s also a community with some critical knowledge and capacity shortages that can only be remedied with trained, paid staff dedicated to documentation.

    What the community can do for healthy docs

    Hire technical writers dedicated to Kubernetes docs. Support advanced content creation, not just release docs and incremental feature updates.

    Thanks, and Happy 2020.