Scheduled Caching

About

Orka Cluster version 3.2 introduces Scheduled Caching, a feature that lets users with admin privileges download Orka images ahead of time to any node in the Orka cluster. Pre-caching makes VM deployment faster and more predictable by removing delays caused by limited network bandwidth. Without Scheduled Caching, automated CI jobs pull new images from the local cluster datastore (NFS mount) or a remote registry (cloud OCI registry service) at CI runtime, which can delay VM deployment.

System Requirements

🗒️

Scheduled Caching is only available for:

  • Orka Cluster 3.2+
  • Mac computers based on Apple Silicon
  • The updated Orka3 CLI installer

Overview

The first time a VM runs on a node, its image must be cached locally on that node. Previously, there was no way to perform this step before running automated CI pipeline jobs. Because initial image pull speed depends on many variables (image size, network bandwidth, node resource utilization), a build that deploys a new VM image can take several minutes to complete, and deployment times can vary from run to run. The Orka Scheduled Caching feature lets admin users sidestep these typical causes of delay by pre-caching new images on Orka Cluster nodes before CI automation starts.

The cluster knows which nodes already have the necessary images cached, which greatly reduces image load times. When a node cache lacks a needed image, admins can use the new feature to pick nodes for cache operations and then pin VM deployments to those nodes and images. Caching itself runs asynchronously: the cache request returns immediately while the download proceeds in the background.
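For example (both commands are covered in detail below):

# Request the cache operation; the command returns immediately while the download continues
orka3 imagecache add ventura-90gb-orka3-arm --nodes arm-mini-001

# Check progress at any time; the node shows Caching until the download completes
orka3 imagecache info ventura-90gb-orka3-arm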


Key Concepts

  • Scheduled Caching allows preemptive copying of an Orka VM image to any cluster node, avoiding the delays caused by limited network bandwidth when images are pulled from the cluster NFS mount or from a public cloud registry service.
  • An image is the bits on disk representing a VM; it can be used for saving state and sharing modifications.
  • MacStadium base Orka VM images are OCI-compliant macOS VM images stored in the public GitHub registry ghcr.io/macstadium/orka-images/. They use the default credentials admin/admin, ship with the Homebrew package manager and orka-vm-tools installed, and have Screen Sharing and SSH access enabled.
  • Cluster local storage is an NFS-mounted filesystem for storing images locally (local registry service).
  • A VM is a virtual runtime on top of the macOS host. The VM runs a guest OS image, and macOS supports up to two running VMs per cluster node.
  • Sequoia refers to macOS 15, the latest public GA release available from Apple’s servers.
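For illustration, a MacStadium base image can be pre-cached by its full registry path. The exact reference format here (registry path plus the latest tag) is an assumption based on the imagecache list output shown later in this guide:

# Hypothetical reference format: cache the Sequoia base image on a node by its full ghcr.io path
orka3 imagecache add ghcr.io/macstadium/orka-images/sequoia:latest --nodes arm-mini-001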

🗒️

NOTE:

Orka Cluster 3.0 and later can deploy a VM from multiple image sources: a cloud image datastore (OCI registry service), Orka’s local cluster registry (cluster NFS mount), and images cached on a cluster node. The Orka 3.2 release provides three distinct CLI commands to display each type of storage and the images available on it.
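In brief, the three commands (each covered below):

# Cloud OCI registry (typically slowest to deploy from)
orka3 remote-image list

# Cluster-local NFS registry (typically second fastest)
orka3 image list

# Images cached on cluster nodes (fastest to deploy from)
orka3 imagecache list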

Getting Started

After the cluster nodes are upgraded to version 3.2, users can access the new Scheduled Caching feature and support for Sequoia guest OS VMs. To get familiar with Scheduled Caching via the Orka CLI, take the following steps (a combined sketch of steps 9-11 follows the list):

  1. Run the new orka3 CLI installer package for version 3.2 [[https://orkadocs.macstadium.com/docs/downloads#orka3-cli]]
  2. Connect to the cluster via VPN and log in with the IP Plan credentials
  3. Open a Terminal window for shell access
  4. Run orka3 imagecache -h to see the CLI tree structure: commands, subcommands, and options/flags
  5. Run orka3 remote-image list to view Orka VM images available on MacStadium's public registry (ghcr.io/macstadium/orka-images/)
  6. Run orka3 image list to view images already downloaded to the cluster local registry (cluster NFS mount)
  7. Run orka3 imagecache list to view images currently stored on Orka Cluster nodes
  8. View the Orka Cluster node names with orka3 nodes list
  9. Add a new image to a cluster node with orka3 imagecache add <image_name> --nodes <node_name>
  10. Check the status of an image caching operation with orka3 imagecache info <image_name>
  11. Rapidly deploy a new VM from a recently cached Orka image on a specific node with orka3 vm deploy --image <image_name> --node <node_name>
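Steps 9-11 can be chained into a short pre-cache-and-deploy sequence; the image and node names below are placeholders:

# Pre-cache the image on the target node (the request returns immediately)
orka3 imagecache add <image_name> --nodes <node_name>

# Confirm the node reports the image as Ready before deploying
orka3 imagecache info <image_name>

# Deploy on the node that now holds the cached image
orka3 vm deploy --image <image_name> --node <node_name>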

Log in to the Orka Cluster

orka3 login

Attempting to automatically open the authorization page in your default browser.
Waiting for successful login... (Press Ctrl+C to abort)
Login successful!

Check Images Available on Each Orka Cluster Storage Type

To view images available in Orka’s GitHub cloud registry, which typically have the longest deployment times:

orka3 remote-image list

To view images available on your cluster’s NFS mount, which are typically the second fastest to deploy:

orka3 image list

Also new in Orka Cluster version 3.2 is the ability to view images already cached on Orka Cluster Kubernetes nodes; these deploy nearly instantaneously:

orka3 imagecache list

NAME                                          TAG      IMAGE-ID       SPACE-USED   NODES-COUNT
ghcr.io/user/orka/sonoma_dev_base             latest   2683ee69c462   32G          1
ghcr.io/macstadium/orka-images/sequoia        latest   39a6b0a1a66e   21G          1
ghcr.io/macstadium/orka-images/sonoma_ios     latest   ee27c37b1ec5   59G          1
sonoma-90gb-orka3-arm                                  8fdcb30befc9   17G          1
sonoma-90gb-orka3-arm-nfs                              8fdcb30befc9   17G          2
ventura-90gb-orka3-arm                                 8666ab938a4a   15G          1

🗒️

Note

The alias for imagecache is ic.

For example, orka3 ic list is equivalent to orka3 imagecache list.
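In automation, the list output can be checked first to avoid redundant cache operations; a minimal sketch, assuming the plain-text tabular output shown above:

# Cache the image only if no node cache holds it yet
if ! orka3 imagecache list | grep -q "ventura-90gb-orka3-arm"; then
  orka3 imagecache add ventura-90gb-orka3-arm --nodes arm-mini-001
fi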


Check Cluster Node Names

By caching new images on specific nodes of the Orka Cluster, users enable near-instant VM deployment with those images.

From the Orka CLI, retrieve the node names:
orka3 nodes list

NAME           AVAILABLE-CPU   AVAILABLE-MEMORY   STATUS
arm-mini-001   2               6.40G              READY
arm-mini-002   5               11.20G             READY
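When scripting, a cache target can be chosen from this output; a minimal sketch, assuming the column layout shown above:

# Pick the READY node with the most available CPU as the cache target
target=$(orka3 nodes list | awk 'NR>1 && $4=="READY" {print $2, $1}' | sort -rn | head -1 | awk '{print $2}')
orka3 imagecache add <image_name> --nodes "$target"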


Add a New Image to a Node Cache

For example, if an image named ventura-90gb-orka3-arm is needed for an upcoming CI job and is not already cached on the cluster node arm-mini-001, run the following command:

orka3 imagecache add ventura-90gb-orka3-arm --nodes arm-mini-001

Run orka3 imagecache info ventura-90gb-orka3-arm to check the image cache status.
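To cache the same image on several nodes, the command can simply be repeated per node (this avoids assuming that --nodes accepts a list):

# Issue one asynchronous cache request per target node
for node in arm-mini-001 arm-mini-002; do
  orka3 imagecache add ventura-90gb-orka3-arm --nodes "$node"
done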


Check the Node Cache Operation Status

To check the availability of an image cached to specific nodes and/or the status of an active image caching operation, use the orka3 imagecache info command and provide the desired image name:

orka3 imagecache info ventura-90gb-orka3-arm

NODE-NAME      IMAGE-ID       SPACE-USED   STATE
arm-mini-001                               Caching
arm-mini-002   8666ab938a4a   15G          Ready

Once the image is cached, it shows the state Ready and can be used for VM deployment.

NODE-NAME      IMAGE-ID       SPACE-USED   STATE
arm-mini-001   8666ab938a4a   15G          Ready
arm-mini-002   8666ab938a4a   15G          Ready
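In a CI pipeline, this check can be polled until the target node reports Ready; a minimal sketch, assuming the tabular output shown above:

# Block until arm-mini-001 reports the cached image as Ready
until orka3 imagecache info ventura-90gb-orka3-arm | grep "arm-mini-001" | grep -q "Ready"; do
  sleep 10
done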


Deploy a New VM Using a Recently Cached Image

To use the newly cached image on node arm-mini-002 to deploy a VM, do the following:

  1. Check which VMs are running and on which nodes in the cluster. Recall that Apple's hypervisor framework imposes a hard-coded limit of two VMs per Apple system. The output below shows one active VM, vm-pl452, running on node arm-mini-002, so both nodes can still accept a new VM.

orka3 vm list

NAME       IP              SSH    VNC    SCREENSHARE   STATUS
vm-pl452   10.221.188.32   8826   6003   5905          Running

orka3 vm list vm-pl452 --output wide

NAME       IP              SSH    VNC    SCREENSHARE   STATUS    IMAGE                   CPU   MEMORY
vm-pl452   10.221.188.32   8826   6003   5905          Running   sonoma-90gb-orka3-arm   3     4.80Gi


  2. Deploy a new VM using the recently cached image ventura-90gb-orka3-arm and run it on node arm-mini-002:

orka3 vm deploy --image ventura-90gb-orka3-arm --node arm-mini-002

Waiting for VM vm-hpw6n to be deployed.

NAME       IP              SSH    VNC    SCREENSHARE   STATUS
vm-hpw6n   10.221.188.32   8822   5999   5901          Running
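With the VM Running, the SSH column gives its forwarded port. Assuming the image keeps the base-image credentials noted under Key Concepts (admin/admin), a connection looks like:

# Connect to the new VM over its forwarded SSH port (base-image default user: admin)
ssh -p 8822 admin@10.221.188.32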

🗒️

Note

Deploying the new VM vm-hpw6n took ~6 seconds using the node-cached image.

