Intel Nodes Only
This guide applies to VMs deployed on Intel nodes only. See Apple ARM-based Support (Beta) to learn which commands and options are supported for VMs deployed on Apple ARM-based nodes.
Starting with Orka 1.6.0, all deployed VMs have access to a shared storage volume in the cluster. This storage can be used, for example, to cache build artifacts between stages of your CI/CD pipeline, or to host Xcode installers and other build dependencies.
Orka offers two different ways to utilize shared VM storage:
- By default, the VM shared storage directory is placed on the primary NFS storage export for your cluster. This means VM shared storage consumes the same space as your VM images and ISOs, so plan capacity accordingly.
- Optionally, you can request a secondary storage export dedicated to shared storage. This is ideal if you plan to share large amounts of data between your CI/CD pipeline builds.
To mount the shared storage volume, run the following command from within the VM:
```shell
sudo mount_9p orka
```
The volume will be mounted at /Volumes/orka. The first time you access the filesystem via Terminal, macOS will ask you to grant the Terminal application permission to access files on a network volume. Click OK to allow access. You will then be able to access files on the volume.
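To confirm the mount succeeded, you can check the mount table from inside the VM (a quick sketch):

```shell
# Check whether the orka share appears in the mount table.
if mount | grep -q 'orka'; then
  echo "orka share is mounted at /Volumes/orka"
else
  echo "orka share is not mounted"
fi
```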
To automount the shared storage volume at system boot, create the file /Library/LaunchDaemons/com.mount9p.plist with the following contents:
```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.mount9p.plist</string>
    <key>RunAtLoad</key>
    <true/>
    <key>StandardErrorPath</key>
    <string>/var/log/mount_9p_error.log</string>
    <key>StandardOutPath</key>
    <string>/var/log/mount_9p.log</string>
    <key>ProgramArguments</key>
    <array>
        <string>/bin/bash</string>
        <string>-c</string>
        <string>mkdir -p /Volumes/orka &amp;&amp; mount_9p orka</string>
    </array>
</dict>
</plist>
```
Change the file ownership and permissions:
```shell
sudo chown root:wheel /Library/LaunchDaemons/com.mount9p.plist
sudo chmod 600 /Library/LaunchDaemons/com.mount9p.plist
```
Reboot the virtual machine for the changes to take effect, then save or commit the VM image.
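Alternatively, launchd can pick up the new job immediately without a reboot via `launchctl load` (a sketch; assumes the plist path used above, macOS only):

```shell
# Load the LaunchDaemon right away instead of rebooting (macOS only).
PLIST=/Library/LaunchDaemons/com.mount9p.plist
if command -v launchctl >/dev/null 2>&1; then
  sudo launchctl load "$PLIST"
else
  echo "launchctl not found; this step applies only on macOS"
fi
```

Even if you load the job manually, save or commit the VM image so the daemon persists in the image.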
You may encounter permission issues when reading or writing data on the shared storage volume. To work around this, you may need to become the root user.
If you need to give a specific user read and write access to files, add that user to the group with GID 107. For example, if your CI user is called machine-user, create the group:
```shell
sudo dscl . create /Groups/ci
sudo dscl . create /Groups/ci gid 107
sudo dscl . create /Groups/ci passwd '*'
sudo dscl . create /Groups/ci GroupMembership machine-user
```
Confirm the above changes with the command `dscl . read /Groups/ci`. Reboot the virtual machine and save or commit the VM image to persist these changes.
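As a cross-check from the user's side, `id` lists every group an account belongs to (a sketch; machine-user is the example CI account from above, and the new GID 107 group should appear after a reboot):

```shell
# List the groups the CI user belongs to; guard against the account
# not existing on the machine where this is run.
if id machine-user >/dev/null 2>&1; then
  id machine-user
else
  echo "user machine-user does not exist on this system"
fi
```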
Files must be given group write access before they can be modified by the user you added to the GID 107 group, for example: `sudo chmod g+w myfile.txt`.
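The permissions pattern in miniature (a sketch using a temporary file so it can run anywhere; on the shared volume you would additionally `chgrp` the file to GID 107):

```shell
# Demonstrate adding group-write permission, as required for shared files.
f=$(mktemp)
chmod g+w "$f"
ls -l "$f"    # the group permission column now contains 'w'
rm "$f"
```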
If you are using the default primary storage export in your cluster for shared VM storage, keep in mind that this storage also hosts your Orka VM images and other data. This is acceptable for sharing a limited set of files between virtual machines but is not recommended for intensive IO. If you require frequent reads and writes to the shared storage volume, setting up dedicated secondary storage is highly recommended.
When connecting to the VM over SSH and attempting to access the shared storage volume, you may encounter the error `orka: Operation not permitted`.
To fix this issue, connect to the VM via VNC, navigate to System Preferences → Security & Privacy, and click the Privacy tab. From the list, select Full Disk Access and click the padlock in the lower left-hand corner to make changes.
You will then be prompted to enter your password. Click the checkbox next to the relevant application in the list, then click the padlock again to prevent further changes. You should now be able to access the shared storage volume over an SSH connection.
Make sure to save or commit the VM image after completing the above steps to persist changes.