Volumes

K8s volumes tips and tricks.

Persistent storage

Create StorageClass

Create the yml file:

cat > /tmp/storage-class.yml <<EOF
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer

EOF

Deploy it:

kubectl create -f /tmp/storage-class.yml

name: local-storage will be the reference used in the PersistentVolume and PersistentVolumeClaim. You can change the StorageClass name, but remember to update all references below.
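
To verify it was created (plain kubectl, nothing assumed):

kubectl get storageclass local-storage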

Create PersistentVolume

SSH to your master node and create the folder that will back the volume:

mkdir -p /mnt/disks/vol1

Go back to your kubectl workstation.

Create the yml file:

cat > /tmp/persistent-volume.yml <<EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-local-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disks/vol1
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - minikube

EOF

Deploy it:

kubectl create -f /tmp/persistent-volume.yml
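
The nodeAffinity above pins the volume to the node named minikube; that value must match the hostname of the node where you created /mnt/disks/vol1. Cross-check the node names and confirm the volume shows up as Available:

kubectl get nodes
kubectl get pv example-local-pv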

Create PersistentVolumeClaim

Create the yml file:

cat > /tmp/persistent-volume-claim.yml <<EOF
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: example-local-claim
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: local-storage
  resources:
    requests:
      storage: 5Gi

EOF

Deploy it:

kubectl create -f /tmp/persistent-volume-claim.yml
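
Because the StorageClass uses volumeBindingMode: WaitForFirstConsumer, the claim will sit in Pending until a pod actually consumes it; that is expected, not an error:

kubectl get pvc example-local-claim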

If by any chance you omitted storageClassName: local-storage from the PersistentVolumeClaim, you need to mark the StorageClass as the default first:

kubectl patch storageclass local-storage -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'

Check that you do not already have another default StorageClass, otherwise you might see the following error when creating the PersistentVolumeClaim:

persistentvolumeclaims "example-local-claim" is forbidden: Internal error occurred: 2 default StorageClasses were found
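
To check which StorageClasses are currently marked as default, and to unset the flag on one of them (other-class is a placeholder for whatever kubectl get storageclass reports):

kubectl get storageclass
kubectl patch storageclass other-class -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'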

Deploy a pod using the volume

Create the yml file:

cat > /tmp/mysql.yml <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - image: mysql:5.6
          name: mysql
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: "123456"  # quote it so YAML treats it as a string, not an integer
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: example-local-claim

EOF

Deploy it:

kubectl create -f /tmp/mysql.yml
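
Once the pod is Running, the claim binds and MySQL comes up on the local volume. A quick smoke test, assuming the root password from the manifest above:

kubectl get pods -l app=mysql
kubectl exec -it $(kubectl get pod -l app=mysql -o jsonpath='{.items[0].metadata.name}') -- mysql -uroot -p123456 -e 'SELECT 1'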

References

https://kubernetes.io/blog/2018/04/13/local-persistent-volumes-beta/

https://stackoverflow.com/questions/52975887/digitalocean-pod-has-unbound-immediate-persistentvolumeclaims

Persistent storage (AWS)

export AWS_DEFAULT_REGION=ap-southeast-2
export AWS_REGION=ap-southeast-2
aws ec2 create-volume \
    --size=100 \
    --volume-type=gp2 \
    --availability-zone=ap-southeast-2a
    
# aws ec2  delete-volume --volume-id vol-05e2481f9b7668e69
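
# The PersistentVolume below hard-codes a volumeID, so you will need the ID
# returned by create-volume. If you prefer, capture it in a variable
# (--query/--output are standard aws CLI flags):

VOLUME_ID=$(aws ec2 create-volume \
    --size=100 \
    --volume-type=gp2 \
    --availability-zone=ap-southeast-2a \
    --query 'VolumeId' --output text)
echo "$VOLUME_ID"
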
# StorageClass

kubectl create -f - <<EOF

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: standard
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
  zone: ap-southeast-2a
reclaimPolicy: Retain
mountOptions:
  - debug
volumeBindingMode: Immediate

EOF
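
Note that nothing below actually references this class by name; it exists so that a claim can ask for dynamically provisioned gp2 storage instead of a pre-created volume. A hypothetical claim using it (the name dynamic-pvc is a placeholder) would look like:

kubectl create -f - <<EOF
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: dynamic-pvc
spec:
  storageClassName: standard
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF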

# PersistentVolume

kubectl create -f - <<EOF

kind: PersistentVolume
apiVersion: v1
metadata:
  name: task-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  awsElasticBlockStore:
    volumeID: vol-0d3a5237c9c19eb7e
    fsType: ext4
    
EOF

# PersistentVolumeClaim

kubectl create -f - <<EOF

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: task-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
    
EOF
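
Since neither the manually created PV nor the PVC sets storageClassName, they should bind to each other (assuming no default StorageClass is injected into the claim). Verify with:

kubectl get pv task-pv
kubectl get pvc task-pvc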

# Pod

kubectl create -n default -f - <<EOF

kind: Pod
apiVersion: v1
metadata:
  name: task-pod
spec:
  volumes:
    - name: task-volume
      persistentVolumeClaim:
        claimName: task-pvc
  containers:
    - name: task-container
      image: mysql:5.6
      ports:
        - containerPort: 3306
          name: "http-server"
      volumeMounts:
        - mountPath: "/var/lib/mysql"
          name: task-volume
    
EOF
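
If the EBS attach succeeds, the pod reaches Running; if not, the events usually say why (wrong availability zone, missing IAM permissions, etc.):

kubectl get pod task-pod
kubectl describe pod task-pod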


References

https://portworx.com/basic-guide-kubernetes-storage/

https://itnext.io/efs-persistent-volumes-on-aws-kubernetes-193e0035bbfb

https://kubernetes.io/docs/setup/scratch/#apiserver-pod-template

https://docs.google.com/document/d/17d4qinC_HnIwrK0GHnRlD1FKkTNdN__VO4TH9-EzbIY/edit

K8S AWS Cloud Provider Notes
Author: Joe Beda, Heptio (joe@heptio.com)
Date: 2017-02-14
Updated: 2017-03-25

The AWS Cloud Provider does two main things:
- Enables mounting/unmounting of EBS volumes:
  - The master drives attaching/detaching the volume to the VM.
  - The kubelet on the VM handles mounting/unmounting and formatting the volume.
- Creates ELBs and security groups to allow those LBs to connect through.

To enable the AWS cloud provider you need to do the following:

1. Add --cloud-provider=aws to the API Server, the Controller Manager, and every kubelet.
   - If you are using kubeadm, you can use the kubeadm config file to specify the cloud provider during kubeadm init; this adds the right flag to the API Server and the Controller Manager. To add it to the kubelet, drop in a file at /etc/systemd/system/kubelet.service.d/20-cloud-provider.conf containing:
[Service]
Environment="KUBELET_EXTRA_ARGS=--cloud-provider=aws"
   - There is a --cloud-config flag for specifying a cloud-provider-specific config file. This is generally unneeded.

2. Set a tag on the following resources with a key in the form kubernetes.io/cluster/<cluster name>; the value is immaterial (see the example just below):
   - All instances.
   - One and only one security group (SG) for each instance should be tagged. This SG will be modified as necessary to allow ELBs to access the instance.
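
For example, tagging an instance and its security group with the aws CLI (the resource IDs and the cluster name mycluster are placeholders; per the note above, the tag value does not matter):

aws ec2 create-tags \
    --resources i-0123456789abcdef0 sg-0123456789abcdef0 \
    --tags Key=kubernetes.io/cluster/mycluster,Value=owned
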
3. Set up IAM roles for the nodes.
For the master, you want a policy like this (CloudFormation snippet):
          Version: '2012-10-17'
          Statement:
          - Effect: Allow
            Action:
            - ec2:*
            - elasticloadbalancing:*
            - ecr:GetAuthorizationToken
            - ecr:BatchCheckLayerAvailability
            - ecr:GetDownloadUrlForLayer
            - ecr:GetRepositoryPolicy
            - ecr:DescribeRepositories
            - ecr:ListImages
            - ecr:BatchGetImage
            - autoscaling:DescribeAutoScalingGroups
            - autoscaling:UpdateAutoScalingGroup
            Resource: "*"
ec2:* may be overkill here, but I haven’t done the work to narrow it down (see “Scoping down ec2:*” below).
For nodes:
       PolicyDocument:
          Version: '2012-10-17'
          Statement:
          - Effect: Allow
            Action:
            - ec2:Describe*
            - ecr:GetAuthorizationToken
            - ecr:BatchCheckLayerAvailability
            - ecr:GetDownloadUrlForLayer
            - ecr:GetRepositoryPolicy
            - ecr:DescribeRepositories
            - ecr:ListImages
            - ecr:BatchGetImage
            Resource: "*"
Note the * in ec2:Describe*.

It is important that the node name (as seen in kubectl get nodes) matches the private-dns-name property of the EC2 instance. The cloud provider uses this to look up the instance ID for a node as necessary. The easiest way to guarantee this is to fetch the local hostname from the metadata server and pass it as the node name. If something is misconfigured you’ll see errors in the API Server or KCM logs.

A one-liner to use on the `kubeadm init/join` command line:

--node-name="$(hostname -f 2>/dev/null || curl http://169.254.169.254/latest/meta-data/local-hostname)"
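
To sanity-check the match on a running cluster (the instance ID is a placeholder):

kubectl get nodes -o name
aws ec2 describe-instances --instance-ids i-0123456789abcdef0 \
    --query 'Reservations[].Instances[].PrivateDnsName' --output text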

The nodes read various metadata from the AWS API (such as their zone and instance type) and hoist it into labels on the Node object in the Kubernetes API.
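
You can see this as labels on the nodes; in clusters of this era the zone and instance type show up under the failure-domain.beta.kubernetes.io/zone and beta.kubernetes.io/instance-type keys:

kubectl get nodes --show-labels
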
AWS-specific Service Annotations
There is a set of annotations that can be set on Kubernetes Service objects that affect how the Cloud Provider sets up ELBs. These are currently documented only in the source code; inspect it directly for ideas on what can be done.
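
One commonly used example is the internal-ELB annotation (the annotation key is real; the Service around it is a hypothetical sketch):

kind: Service
apiVersion: v1
metadata:
  name: my-internal-svc
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: "0.0.0.0/0"
spec:
  type: LoadBalancer
  selector:
    app: mysql
  ports:
  - port: 3306
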
Scoping down ec2:*
This is untested, but from code inspection it looks like the following are needed (collected into a policy sketch after the list):
- ec2:DescribeInstances
- ec2:DescribeSecurityGroups
- ec2:AttachVolume
- ec2:DetachVolume
- ec2:DescribeVolumes
- Needed if you let k8s provision EBS volumes for you:
  - ec2:CreateVolume
  - ec2:DeleteVolume
- ec2:DescribeSubnets
- ec2:CreateSecurityGroup
- ec2:DeleteSecurityGroup
- ec2:AuthorizeSecurityGroupIngress
- ec2:RevokeSecurityGroupIngress
- ec2:CreateTags
- ec2:DescribeRouteTables
- Needed if you have the k8s master allocate node CIDRs and configure cloud routes:
  - ec2:CreateRoute
  - ec2:DeleteRoute
  - ec2:ModifyInstanceAttribute
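
Collected into the same CloudFormation shape as the master policy above, a scoped-down statement might look like this (untested, per the note above):

       PolicyDocument:
          Version: '2012-10-17'
          Statement:
          - Effect: Allow
            Action:
            - ec2:DescribeInstances
            - ec2:DescribeSecurityGroups
            - ec2:AttachVolume
            - ec2:DetachVolume
            - ec2:DescribeVolumes
            - ec2:CreateVolume
            - ec2:DeleteVolume
            - ec2:DescribeSubnets
            - ec2:CreateSecurityGroup
            - ec2:DeleteSecurityGroup
            - ec2:AuthorizeSecurityGroupIngress
            - ec2:RevokeSecurityGroupIngress
            - ec2:CreateTags
            - ec2:DescribeRouteTables
            - ec2:CreateRoute
            - ec2:DeleteRoute
            - ec2:ModifyInstanceAttribute
            Resource: "*"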
