Introduction to Persistent Volumes and Persistent Volume Claims
Managing storage is a distinct problem from managing compute. The PersistentVolume subsystem provides an API for users and administrators that abstracts details of how storage is provided from how it is consumed. To do this, we introduce two new API resources: PersistentVolume and PersistentVolumeClaim.
A PersistentVolume (PV) is a piece of storage in the cluster that has been provisioned by an administrator or dynamically provisioned using Storage Classes. It is a resource in the cluster just like a node is a cluster resource. PVs are volume plugins like Volumes, but have a lifecycle independent of any individual Pod that uses the PV. This API object captures the details of the implementation of the storage, be that NFS, iSCSI, or a cloud-provider-specific storage system.
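For example, a statically provisioned PV for an NFS export might look like this (a minimal sketch; the name, server address, and path are placeholders):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-example
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 10.0.0.10    # placeholder NFS server
    path: /exports/data  # placeholder export path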
A PersistentVolumeClaim (PVC) is a request for storage by a user. It is similar to a Pod. Pods consume node resources and PVCs consume PV resources. Pods can request specific levels of resources (CPU and memory). Claims can request a specific size and access modes (e.g., they can be mounted once read/write or many times read-only).
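A minimal claim might, for instance, request 1Gi of storage mounted read/write by a single node (the claim name is illustrative):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-example
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi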
While PersistentVolumeClaims allow a user to consume abstract storage resources, it is common that users need PersistentVolumes with varying properties, such as performance, for different problems. Cluster administrators need to be able to offer a variety of PersistentVolumes that differ in more ways than just size and access modes, without exposing users to the details of how those volumes are implemented. For these needs, there is the StorageClass resource.
See the detailed walkthrough with working examples.
Install and configure the Ceph client on the Kubernetes host machines
## Install ceph-common on CentOS 7
yum install -y ceph-common
## enable the rbd kernel module on the host machine
modprobe rbd
## set up the Ceph keyring on the host machine
# create the /etc/ceph folder
mkdir -p /etc/ceph
## copy or create the client keyring there, e.g. ceph.client.admin.keyring
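Assuming /etc/ceph/ceph.conf and a valid keyring are now in place, a quick sanity check that the host can reach the cluster:
## prints the cluster status if the client is configured correctly
ceph -s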
Create a new storage pool for Kubernetes:
ceph osd pool create k8s-volumes 100 replicated
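On Ceph Luminous or newer, the pool may also need to be tagged for RBD use before images can be created in it:
## associate the pool with the rbd application (Luminous and later)
ceph osd pool application enable k8s-volumes rbd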
Create a new Ceph user for creating images from Kubernetes:
ceph auth get-or-create client.k8s mon 'allow r' osd 'allow rw pool=k8s-volumes'
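You can double-check that the user and its capabilities were created as intended:
## show the key and capabilities of the new user
ceph auth get client.k8s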
Create Ceph admin secret
1. Run the ceph auth get-key client.admin | base64 command on a Ceph MON node to display the key value for the client.admin user:
# ceph auth get-key client.admin | base64
QVFBOFF2SlZheUJQRVJBQWgvS2cwT1laQUhPQno3akZwekxxdGc9PQ==
Ceph Secret Definition
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
  namespace: kube-system
data:
  key: QVFBOFF2SlZheUJQRVJBQWgvS2cwT1laQUhPQno3akZwekxxdGc9PQ==
type: kubernetes.io/rbd
2. Save the secret definition to a file, for example ceph-secret.yaml, then create the secret:
$ kubectl create -f ceph-secret.yaml
secret "ceph-secret" created
3. Verify that the secret was created:
# kubectl get secret ceph-secret
NAME          TYPE                DATA      AGE
ceph-secret   kubernetes.io/rbd   1         5d
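As an equivalent alternative (a sketch, assuming the command runs where both kubectl and Ceph admin access are available), the secret can be created in one step. Note that kubectl base64-encodes --from-literal values itself, so the raw key is passed here, not the base64 string:
$ kubectl create secret generic ceph-secret \
    --namespace=kube-system \
    --type=kubernetes.io/rbd \
    --from-literal=key="$(ceph auth get-key client.admin)"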
Create Ceph user secret
1. Run the ceph auth get-key client.k8s | base64 command on a Ceph MON node to display the key value for the client.k8s user:
# ceph auth get-key client.k8s | base64
QVFEa2VxNWRuQXhWQmhBQU11MDJaNkZ6YjUydmxLakdZOFM1R2c9PQ==
Ceph User Secret Definition
apiVersion: v1
kind: Secret
metadata:
  name: ceph-user-secret
  namespace: default
data:
  key: QVFEa2VxNWRuQXhWQmhBQU11MDJaNkZ6YjUydmxLakdZOFM1R2c9PQ==
type: kubernetes.io/rbd
2. Save the secret definition to a file, for example ceph-user-secret.yaml, then create the secret:
$ kubectl create -f ceph-user-secret.yaml
secret "ceph-user-secret" created
3. Verify that the secret was created:
# kubectl get secret ceph-user-secret
NAME               TYPE                DATA      AGE
ceph-user-secret   kubernetes.io/rbd   1         5d
Create storage class
1. Storage class definition:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: dynamic
  annotations:
    storageclass.beta.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/rbd
parameters:
  monitors: mon1:6789,mon2:6789,mon3:6789
  adminId: admin
  adminSecretName: ceph-secret
  adminSecretNamespace: kube-system
  pool: k8s-volumes
  userId: k8s
  userSecretName: ceph-user-secret
2. Create storage class:
$ kubectl create -f ceph-storageclass.yaml
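Verify that the class is registered and marked as the default:
$ kubectl get storageclass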
Sample app with volume:
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
  namespace: k8s
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
  namespace: k8s
spec:
  selector:
    matchLabels:
      app: nginx # has to match .spec.template.metadata.labels
  serviceName: "nginx"
  replicas: 3 # by default is 1
  template:
    metadata:
      labels:
        app: nginx # has to match .spec.selector.matchLabels
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: nginx
        image: k8s.gcr.io/nginx-slim:0.8
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "dynamic" # the StorageClass created above; a ceph-user-secret must also exist in this namespace
      resources:
        requests:
          storage: 1Gi
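After applying the manifests (the file name below is just an example), each replica should get its own PVC bound to a dynamically provisioned RBD-backed PV:
$ kubectl create -f nginx-statefulset.yaml
$ kubectl get pvc -n k8s
$ kubectl get pv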