Kubernetes Persistent Volumes
How to tap into Kubernetes persistent volumes for your Orka environment.
For security reasons, Orka does not let you configure persistent volumes yourself. The MacStadium team needs to do that for you. However, when a persistent volume is configured for your environment, you can create persistent volume claims and deploy pods that consume the respective persistent volume.
Quick command summary
brew install kubectl
orka kube [create / get] --account NAME -y
export KUBECONFIG=$(pwd)/kubeconfig-orka
kubectl config view
kubectl apply -f *.yaml --namespace=sandbox
kubectl get [pods / pvc]
kubectl describe
kubectl delete
Apple ARM-based Nodes Support
Deploying Kubernetes resources is currently supported on Intel nodes only.
Read more about Apple ARM-based Support to see which commands and options are supported for Apple ARM-based nodes.
Limitations
Persistent volumes are not applicable to standard Orka VMs (the VMs you manage with functions such as attach-disk). They can be consumed only by pods deployed with kubectl.
If you want to persist the storage of a standard Orka VM, use the image commit or save operations, as sketched below. For more information, see Orka Documentation: Create or update an image from a deployed VM.
Step 1: Request a persistent volume
Contact the MacStadium team and request a persistent volume (PV) for your Orka environment. Work closely with the team to help them create a PV that matches your requirements.
Step 2: Get Kubernetes-ready
You need to install kubectl and create a kube account for your Orka environment.
- If not already installed, install kubectl locally. For example:
brew install kubectl
- Create a kube account and export the kubeconfig. Alternatively, if you already have a kube account, get the respective kubeconfig and export it.
orka kube create --account <NAME> -y
OR
orka kube get --account <NAME> -y
- Export the resulting kubeconfig and verify that kubectl is configured properly.
export KUBECONFIG=$(pwd)/kubeconfig-orka
kubectl config view
If configured properly, you will see a similar output:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://10.10.10.99:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    namespace: sandbox
    user: mykubeuser
  name: mykubeuser@kubernetes
current-context: mykubeuser@kubernetes
kind: Config
preferences: {}
users:
- name: mykubeuser
  user:
    token: eyJhbGciOiJSUz...
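As an additional sanity check that the exported credentials actually reach the cluster (and not just that the file parses), you can list resources in your namespace; sandbox is the namespace used throughout this example:
kubectl get pods --namespace=sandbox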
Kubeconfig lost?
Sometimes, after a system or a terminal restart, you might need to re-get and re-export the kubeconfig for your account. Run orka kube get --account NAME -y followed by export KUBECONFIG=$(pwd)/kubeconfig-orka.
Step 3: Create the persistent volume claim
A persistent volume claim (PVC) lets you tap into your persistent volume and consume it. You need to create a basic yaml manifest for the PVC and apply it to the environment.
- Create the PVC manifest. For more information, see Kubernetes Documentation: PersistentVolumeClaims. For example:
# pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mypvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
The values for metadata:name and metadata:namespace must match the values for claimRef:name and claimRef:namespace declared in the manifest of the persistent volume (see the sketch below). Double-check these values with the MacStadium team.
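For reference only, the relevant part of a PV manifest with a claimRef typically looks like the excerpt below. The actual PV is created and maintained by the MacStadium team, and the names here are illustrative:
# Excerpt from a PV manifest (managed by MacStadium) - illustrative names
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  claimRef:
    name: mypvc          # must match metadata:name in your PVC
    namespace: sandbox   # must match the namespace you apply the PVC to
  # ... volume source (for example, nfs or csi) omitted from this excerpt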
- Apply the PVC. Replace pvc.yaml with the complete file path to your own PVC manifest.
kubectl apply -f pvc.yaml --namespace=sandbox
- Verify that the persistent volume claim is bound to the persistent volume.
kubectl get pvc
If the persistent volume claim works as expected, you will see a similar output:
NAME    STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mypvc   Bound    my-pv    20Gi       RWO                           13s
Status Pending?
If the status is Pending instead of Bound, double-check your PVC manifest, fix any naming issues, remove the old PVC with kubectl delete pvc NAME, and re-apply the fixed manifest. If the problem persists, contact the MacStadium team.
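A few commands that can help pinpoint why a claim stays Pending, assuming the sandbox namespace and the mypvc name from the example above:
# Show events and binding details for the claim
kubectl describe pvc mypvc --namespace=sandbox
# List persistent volumes and check their status and claim references
kubectl get pv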
Step 4: Deploy a pod that uses the persistent volume
Now that you have created a PVC and bound it to the PV, you can deploy a pod that uses the PV. Create a pod manifest and apply it.
- Create the pod manifest. The pod needs to reference both the PV and the PVC. For example:
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  volumes:
    - name: my-pv
      persistentVolumeClaim:
        claimName: mypvc
  containers:
    - name: mypod
      image: ubuntu
      command: ["/bin/bash", "-ec", "while :; do echo '.'; sleep 5 ; done"]
      volumeMounts:
        - mountPath: "/usr/share/mypod"
          name: my-pv
  restartPolicy: Never
This example deploys a Linux (Ubuntu) container. Pay attention to the command line. Without it, the container exits as soon as it starts and the pod does not stay in the Running state.
- Apply the pod. Replace mypod.yaml with the complete file path to your pod manifest.
kubectl apply -f mypod.yaml --namespace=sandbox
- Verify that the pod is deployed and running.
kubectl get pods
If the pod works as expected, you will see a similar output:
NAME        READY   STATUS    RESTARTS   AGE
pod/mypod   1/1     Running   0          12s
- Verify that the pod uses the claim and the persistent volume. Look for the data listed for Volumes.
kubectl describe pod <NAME>
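To confirm from inside the container that the volume is actually mounted and writable, you can also run a quick check with kubectl exec; the path below matches the mountPath from the example manifest:
# Write and read back a test file on the mounted volume
kubectl exec mypod --namespace=sandbox -- sh -c 'echo ok > /usr/share/mypod/test.txt && cat /usr/share/mypod/test.txt'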
(Optional) Step 5: Deploy a service to handle the networking between your pods and your Orka VMs
If you want to have connectivity between your Orka VMs and any pods deployed with kubectl, you need to deploy a networking service. For more information, see Kubernetes Documentation: Service.
Make sure to use the networking information provided in your Orka IP Plan when assigning IPs.
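A minimal sketch of such a Service, assuming you add an app: mypod label to the example pod and want to expose port 8080 over a NodePort; the label, ports, and service type are illustrative and the externally reachable addresses should follow your Orka IP Plan:
# service.yaml - illustrative example
apiVersion: v1
kind: Service
metadata:
  name: mypod-svc
spec:
  type: NodePort
  selector:
    app: mypod          # assumes the pod manifest is labeled app: mypod
  ports:
    - port: 8080        # port exposed by the Service
      targetPort: 8080  # port the container listens on
      nodePort: 30080   # node port reachable from your Orka VMs
Apply it the same way as the other manifests:
kubectl apply -f service.yaml --namespace=sandbox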
What's next: Delete the PVC and release the PV
When you no longer need to use a PVC and the respective PV, you can delete the PVC to release the PV.
- Delete the PVC.
kubectl delete pvc <NAME>
- Contact the MacStadium team.
- If you want to reclaim the storage, an administrator might need to clean it up and verify that it's available for use again. This would depend on the provisioning type and the reclaim policy for the PV.
- If you no longer need the storage, an administrator can remove the PV.
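Before contacting the team, you can confirm that the claim is gone and check what state the underlying volume is in (with a Retain reclaim policy it typically shows as Released rather than Available):
kubectl get pvc --namespace=sandbox
kubectl get pv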