Apple Silicon-based Monterey VMs
Starting with Orka 2.4.0, shared VM storage is deprecated for Apple Silicon-based VMs running macOS Monterey. Intel-based Monterey VMs are not affected.
In Orka 2.5.0, shared VM storage will be removed for all Apple Silicon-based Monterey VMs. Intel-based Monterey VMs will not be affected. To continue using shared VM storage with Orka 2.5.0 and later, you will need to upgrade your Apple Silicon-based Monterey VMs to macOS Ventura, OR switch to Intel-based Monterey VMs.
Starting with Orka 1.6.0, all deployed Intel-based VMs will have access to a shared storage volume in the cluster. Starting with Orka 2.1.0, all deployed ARM-based VMs will have access to the same shared storage.
This storage can be used to cache build artifacts in-between stages of your CI/CD pipeline, for example, or host Xcode installers and other build dependencies.
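As an illustration of the artifact-caching use case, here is a hedged sketch (not an Orka-provided tool): one CI stage saves a build artifact to the shared volume and a later stage restores it. `CACHE_ROOT` defaults to the documented mount point `/Volumes/orka`; the `ci-cache` subdirectory layout is a hypothetical convention.

```shell
#!/bin/sh
# Sketch: cache a build artifact on the shared volume between CI stages.
# CACHE_ROOT defaults to the mount point documented below; the ci-cache
# directory name is an illustrative convention, not mandated by Orka.
CACHE_ROOT="${CACHE_ROOT:-/Volumes/orka}"

save_artifact() {
  # $1: path of a file produced by the build
  mkdir -p "$CACHE_ROOT/ci-cache"
  cp "$1" "$CACHE_ROOT/ci-cache/"
}

restore_artifact() {
  # $1: artifact file name, $2: destination directory
  cp "$CACHE_ROOT/ci-cache/$1" "$2/"
}
```

A later pipeline stage would call `restore_artifact app.tar "$WORKSPACE"` instead of rebuilding the artifact from scratch.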
Orka offers two different ways to utilize shared VM storage:
- By default, the VM shared storage directory will be placed on the primary NFS storage export for your cluster. This means that VM shared storage will share storage space with VM images and ISOs, so please keep this in mind!
- Optionally, you may request to provision a secondary storage export that will be dedicated to shared storage. This is ideal if you plan to share a lot of data between your CI/CD pipeline builds.
In ARM-based VMs, the shared storage is automatically mounted and available to use. The same storage is shared between ARM-based and Intel-based VMs.
To use the shared VM storage with VMs deployed on ARM nodes, make sure to pull the new `90GBMontereySSH.orkasi` image from the remote repo. It contains Orka VM Tools, which are required for the shared VM storage to be automounted in the VM.
Orka VM Tools 2.2.0 introduces a breaking change to the Shared VM Storage feature when used with Orka versions 2.1.0 and 2.1.1. As a workaround, use the `90GBMontereySSH-2.1.orkasi` image or upgrade your cluster to Orka 2.2.0.
To mount the shared storage volume, run the following command from within the VM:

```
sudo mount_9p orka
```
The volume will be mounted at `/Volumes/orka`. The first time you attempt to access the filesystem via Terminal, you will be asked to grant the Terminal application permission to access files on a network volume. Click the OK button to allow access. You will then be able to access files on the volume.
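Before a script relies on the volume, it can be worth verifying that the mount actually exists. The check below is a hedged sketch that parses `mount` output; the mount point `/Volumes/orka` and the `mount_9p orka` command are from this document, while the helper function itself is illustrative.

```shell
#!/bin/sh
# Sketch: check whether the shared volume is mounted before using it.
# is_mounted takes the output of `mount` as an argument so the logic
# can be exercised without a live 9p mount.
is_mounted() {
  # $1: output of the `mount` command; $2: mount point to look for
  printf '%s\n' "$1" | grep -q " on $2 "
}

if is_mounted "$(mount)" "/Volumes/orka"; then
  echo "shared storage is mounted"
else
  echo "shared storage missing; run: sudo mount_9p orka" >&2
fi
```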
Instead of mounting the shared storage manually after every OS restart, you can create a launch daemon at `/Library/LaunchDaemons/com.mount9p.plist` to handle automounting the shared storage:
- Connect to your VM via SSH.

```
ssh <macOS_user>@<VM_IP> -p <SSH_PORT>
```
- Make sure that `/Volumes/orka` is already mounted on the VM.
- If not already mounted, mount the shared VM storage.

```
sudo mount_9p orka
```
- Navigate to `/Library/LaunchDaemons` and create a `com.mount9p.plist` file.

```
cd /Library/LaunchDaemons
ls
sudo vim com.mount9p.plist
```
- Copy the following contents and paste them in Vim. Type `:wq` to save and exit.
```
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.mount9p.plist</string>
    <key>RunAtLoad</key>
    <true/>
    <key>StandardErrorPath</key>
    <string>/var/log/mount_9p_error.log</string>
    <key>StandardOutPath</key>
    <string>/var/log/mount_9p.log</string>
    <key>ProgramArguments</key>
    <array>
        <string>/bin/bash</string>
        <string>-c</string>
        <string>mkdir -p /Volumes/orka &amp;&amp; mount_9p orka</string>
    </array>
</dict>
</plist>
```
- Change the ownership and permissions for the `com.mount9p.plist` file.

```
sudo chown root:wheel /Library/LaunchDaemons/com.mount9p.plist
sudo chmod 600 /Library/LaunchDaemons/com.mount9p.plist
```
- Reboot the VM and save or commit the VM image.
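The chown/chmod step above matters because launchd refuses to run a daemon whose plist is writable by group or others. A hedged sketch of that rule follows; the `mode_ok` helper is illustrative, not an Apple API, and it assumes a 3-digit octal mode string.

```shell
#!/bin/sh
# Sketch: reject plist modes that launchd would refuse (group- or
# other-writable). Takes a 3-digit octal mode string such as "600" or "644".
mode_ok() {
  g=$(printf '%s' "$1" | cut -c2)   # group permission digit
  o=$(printf '%s' "$1" | cut -c3)   # others permission digit
  [ $((g & 2)) -eq 0 ] && [ $((o & 2)) -eq 0 ]
}

# On macOS the installed daemon's mode could be fed in with something like
# (hedged, BSD stat syntax): mode_ok "$(stat -f '%OLp' /Library/LaunchDaemons/com.mount9p.plist)"
```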
You may encounter permissions issues when reading or writing data to the shared storage volume. To work around this, you may need to become the root user to write data to the volume.
If you need to give a specific user read and write access to files, you can add that user to the group with GID `107`. For example, if your CI user is called `machine-user`, create the group:

```
sudo dscl . create /Groups/ci
sudo dscl . create /Groups/ci gid 107
sudo dscl . create /Groups/ci passwd '*'
sudo dscl . create /Groups/ci GroupMembership machine-user
```
Confirm the above changes were made with the command `dscl . read /Groups/ci`. Reboot the virtual machine and save or commit the VM image to persist these changes.
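`dscl . read /Groups/ci` prints attribute lines such as `GroupMembership: machine-user`. If you want to assert the membership from a script, a hedged sketch follows; the `in_group` helper is illustrative and takes the captured `dscl` output so the logic can be tested off-box.

```shell
#!/bin/sh
# Sketch: confirm a user shows up in a dscl group record.
in_group() {
  # $1: output of `dscl . read /Groups/<group>`; $2: user name
  printf '%s\n' "$1" | grep '^GroupMembership:' | grep -qw "$2"
}

# e.g. on the VM: in_group "$(dscl . read /Groups/ci)" machine-user
```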
Files must be given group write access to be modified by the user you have added to the GID `107` group. For example: `sudo chmod g+w myfile.txt`.
If you are using the default primary storage export in your cluster for shared VM storage, keep in mind that this storage also hosts your Orka VM images and other data. This is acceptable for sharing a limited set of files between virtual machines but is not recommended for intensive IO. If you require frequent reads and writes to the shared storage volume, setting up a dedicated secondary storage export is highly recommended.
When connecting to the VM over SSH and attempting to access the shared storage volume, you may encounter the error `orka: Operation not permitted`.
To fix this issue, connect to the VM via VNC and navigate to System Preferences → Security & Privacy, then click the Privacy tab. From the list, select Full Disk Access and click the padlock in the lower lefthand corner to make changes. You will be prompted to enter your password. Next, click the checkbox next to `sshd-keygen-wrapper`, the process that handles incoming SSH connections. Click the padlock again to prevent further changes. You should now be able to access the shared storage volume over an SSH connection.
Make sure to save or commit the VM image after completing the above steps to persist changes.