The CI pipeline for our cross-stack monorepo was averaging a 47-minute runtime. Onboarding a new engineer required a full day of environment configuration, often ending in subtle GHC version conflicts or cabal
dependency hell. The root of this inefficiency was clear: our development and build environments for a project combining a Haskell-driven DSL compiler and a SwiftUI-based IDE were mutable, inconsistent, and manually configured. Each developer’s machine, and each CI runner, was a unique island of configuration drift. The solution was not another layer of configuration management, but a fundamental shift to immutable, versioned development environments built with HashiCorp Packer.
Our initial concept was to create a “golden image”—a pre-configured macOS virtual machine containing a locked-down version of Xcode, the correct GHC and Stack versions, and, most importantly, a pre-warmed cache of all our Haskell project’s dependencies. This image would serve as the baseline for both local development within VMware Fusion and ephemeral CI runners. This approach guarantees that every single build, whether on a developer’s laptop or in the CI farm, starts from an identical, pristine state.
The choice of Packer was deliberate. While tools like Ansible or Nix could manage package installations, they operate on an existing, mutable OS. Packer, by contrast, builds the entire machine image from a source ISO, producing a versionable, distributable artifact. This eliminates configuration drift at its source. We chose VMware over other hypervisors for its robust CLI tooling (vmrun), its performance, and its stability in our existing infrastructure. For managing Haskell's complex dependency graph, Stack was non-negotiable: its stack.yaml snapshot mechanism provides the strict build reproducibility that the entire immutable-image concept rests on.
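As an example of that CLI tooling, an ephemeral CI runner can be cloned from a golden image and booted headless entirely from a script. A sketch (the image path and clone name are hypothetical):

# Linked clones share the parent's disk, so they are fast and cheap to create.
vmrun -T fusion clone "builds/macos13-haskell-swiftui-dev-env/macos13.vmx" \
  "/tmp/ci-runner-42/ci-runner-42.vmx" linked -cloneName="ci-runner-42"
# Boot the clone without opening a GUI console.
vmrun -T fusion start "/tmp/ci-runner-42/ci-runner-42.vmx" nogui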
The process begins with the Packer template itself, defined using HCL. This file orchestrates the entire image creation, from booting the OS installer to running multiple provisioning scripts.
// macos-haskell-dev.pkr.hcl
variable "macos_iso_path" {
type = string
default = "./iso/Install-macOS-Ventura-13.6.iso"
}
variable "macos_iso_checksum" {
type = string
// Use `shasum -a 256 <iso_file>` to generate
default = "sha256:f1c3a..."
}
variable "vm_name" {
type = string
default = "macos13-haskell-swiftui-dev-env"
}
variable "xcode_version" {
type = string
default = "15.0"
}
variable "stack_resolver" {
type = string
default = "lts-21.22" // Pinned GHC and package set
}
source "vmware-iso" "macos-devbox" {
// Hardware Configuration
guest_os_type = "darwin13-64"
cpus = 4
memory = 8192
disk_size = 122880 // 120GB
vmx_data = {
"board-id.reflectHost" = "TRUE"
"ethernet0.virtualDev" = "vmxnet3"
"smc.version" = "0"
}
// ISO and Installer Configuration
iso_url = var.macos_iso_path
iso_checksum = var.macos_iso_checksum
// Packer needs to communicate with the VM to run provisioners.
communicator = "ssh"
ssh_username = "admin"
ssh_password = "secure_password_here" // Use env vars in production
ssh_port = 22
ssh_timeout = "30m"
// This is the trickiest part of macOS automation.
// Packer sends keyboard commands to navigate the installer GUI.
boot_wait = "5s"
boot_command = [
"<wait10>e", // Select English
"<enter><wait10>",
"/usr/sbin/diskutil partitionDisk /dev/disk0 1 GPT JHFS+ 'macOS' 100%<enter><wait10>",
"/Volumes/Image\\ Volume/Install\\ macOS\\ Ventura.app/Contents/Resources/startosinstall --agreetolicense --volume /Volumes/macOS --nointeraction<enter>",
"<wait30m>" // Wait for installation
]
// Output Artifact
output_directory = "builds/${var.vm_name}"
vm_name = var.vm_name
}
build {
sources = ["source.vmware-iso.macos-devbox"]
// --- Provisioning Stages ---
provisioner "shell" {
inline = [
"echo 'Waiting for SSH to be ready...'",
"sleep 30"
]
}
// Step 1: Base system setup
provisioner "shell" {
script = "./scripts/01-base-setup.sh"
}
// Step 2: Install Xcode and its command line tools
provisioner "shell" {
environment_vars = ["XCODE_VERSION=${var.xcode_version}"]
script = "./scripts/02-xcode-install.sh"
timeout = "2h" // Xcode installation is very slow
}
// Step 3: Install and configure the Haskell toolchain
provisioner "shell" {
environment_vars = ["STACK_RESOLVER=${var.stack_resolver}"]
script = "./scripts/03-haskell-toolchain.sh"
timeout = "1h"
}
// Step 4: Final cleanup
provisioner "shell" {
script = "./scripts/04-cleanup.sh"
}
}
The boot_command is fragile but necessary. Unlike Linux or Windows, macOS has no standard unattended-installation framework like Kickstart or Autounattend.xml. We are essentially scripting keyboard input to navigate the initial GUI installer, partition the disk, and run the command-line installer. This part required significant trial and error.
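Running the build is then a single command. A typical invocation looks like this (the checksum value is a placeholder):

# Check the template, then build, overriding variables on the command line.
packer validate macos-haskell-dev.pkr.hcl
packer build \
  -var "macos_iso_checksum=sha256:<real-checksum>" \
  -var "xcode_version=15.0" \
  macos-haskell-dev.pkr.hcl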
With the Packer template defined, the core logic resides in the provisioning scripts. The first script handles basic system configuration, enabling SSH for Packer’s communicator and installing Homebrew.
#!/bin/bash
# scripts/01-base-setup.sh
set -euo pipefail
# Keep Remote Login enabled for the 'admin' user.
# (Packer is already connected over SSH to run this script; this makes the
# setting explicit so it persists into the final image.)
echo "Enabling Remote Login..."
sudo systemsetup -setremotelogin on
# Disable sleep so the VM stays responsive during long provisioning steps
sudo systemsetup -setcomputersleep Never
# Install Homebrew non-interactively (NONINTERACTIVE=1 suppresses the
# confirmation prompt in the official install script)
echo "Installing Homebrew..."
NONINTERACTIVE=1 /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
# Homebrew's prefix is /usr/local on Intel and /opt/homebrew on Apple Silicon;
# our VMware guests are Intel, but detect the prefix to stay portable.
BREW_BIN=/usr/local/bin/brew
[ -x /opt/homebrew/bin/brew ] && BREW_BIN=/opt/homebrew/bin/brew
# Add brew to the PATH for login shells and for the rest of this script
(echo; echo "eval \"\$(${BREW_BIN} shellenv)\"") >> /Users/admin/.zprofile
eval "$(${BREW_BIN} shellenv)"
# Install essential tools
brew install git wget htop
echo "Base setup complete."
The second, and most time-consuming, provisioner installs Xcode. Automating this is notoriously difficult; we use the xcodes tool to reliably install a specific version.
#!/bin/bash
# scripts/02-xcode-install.sh
set -euo pipefail
# Pick up Homebrew for this session (prefix differs between Intel and Apple Silicon)
BREW_BIN=/usr/local/bin/brew
[ -x /opt/homebrew/bin/brew ] && BREW_BIN=/opt/homebrew/bin/brew
eval "$(${BREW_BIN} shellenv)"
echo "Installing xcodes via Homebrew..."
brew install xcodesorg/made/xcodes
echo "Installing Xcode version ${XCODE_VERSION}. This will take a long time..."
# Non-interactive installation requires environment variables for credentials.
# In a real-world project, these should be securely passed into Packer.
# export XCODES_USERNAME="[email protected]"
# export XCODES_PASSWORD="app-specific-password"
xcodes install "${XCODE_VERSION}" --select
# Accept the Xcode license agreement
sudo xcodebuild -license accept
# Point the command-line developer tools at the new Xcode
echo "Selecting the new Xcode for command-line use..."
sudo xcode-select -switch /Applications/Xcode.app/Contents/Developer
# Rosetta exists only on Apple Silicon; skip it on Intel guests
if [ "$(uname -m)" = "arm64" ]; then
  softwareupdate --install-rosetta --agree-to-license
fi
echo "Xcode installation complete."
The third provisioner is the heart of the solution. It installs the Haskell toolchain and pre-warms the Stack dependency cache. This single step is what transforms our CI build times.
#!/bin/bash
# scripts/03-haskell-toolchain.sh
set -euo pipefail
eval "$(/opt/homebrew/bin/brew shellenv)"
echo "Installing Haskell toolchain via ghcup..."
export BOOTSTRAP_HASKELL_NONINTERACTIVE=1
export BOOTSTRAP_HASKELL_GHC_VERSION=recommended
export BOOTSTRAP_HASKELL_CABAL_VERSION=recommended
export BOOTSTRAP_HASKELL_INSTALL_STACK=1 # install Stack alongside GHC and cabal
curl --proto '=https' --tlsv1.2 -sSf https://get-ghcup.haskell.org | sh
# Add ghcup to the path for the current session
source /Users/admin/.ghcup/env
echo "GHC version: $(ghc --version)"
echo "Stack version: $(stack --version)"
# --- Critical Step: Pre-warming the Stack cache ---
echo "Creating a throwaway project to pre-warm dependencies for resolver ${STACK_RESOLVER}..."
mkdir -p /tmp/dep-warmer/src
cd /tmp/dep-warmer
# Stack only resolves dependencies for local packages, so we generate a dummy
# package whose dependency list mirrors our real project. In the real monorepo,
# this list is generated from the packages' package.yaml files.
cat <<EOF > stack.yaml
resolver: ${STACK_RESOLVER}
packages:
- .
EOF
cat <<EOF > package.yaml
name: dep-warmer
version: 0.0.0
library:
  source-dirs: src
dependencies:
# Direct and transitive dependencies of the Haskell compiler/backend.
# Exact versions are pinned by the resolver; anything outside the snapshot
# would go into stack.yaml's extra-deps.
- base
- aeson
- bytestring
- containers
- http-client
- mtl
- megaparsec
- optparse-applicative
- text
- unordered-containers
- vector
# ... and many more
EOF
echo "Building dependencies... This will download and compile everything."
# The '--only-dependencies' flag is key. The dummy package itself is never
# built; we just populate ~/.stack with compiled artifacts for the snapshot.
stack build --only-dependencies
echo "Cleaning up temporary project..."
cd /
rm -rf /tmp/dep-warmer
echo "Haskell toolchain setup and cache warming complete."
The stack build --only-dependencies command is what we are building this entire infrastructure for. It downloads and compiles potentially hundreds of packages from the Stackage repository, storing the compiled artifacts in /Users/admin/.stack. When a developer or CI runner later clones the actual project repo inside this VM and runs stack build, Stack finds that all dependencies are already compiled and moves almost instantly to compiling the application code.
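Inside the image, a CI job therefore collapses to a clone and a build. A sketch of a runner script (the repository URL is hypothetical):

#!/bin/bash
set -euo pipefail
# GHC, Stack, and the compiled snapshot already live in the image,
# so only the application code is compiled here.
git clone https://git.example.com/acme/dsl-monorepo.git
cd dsl-monorepo
stack build --test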
The final piece of the puzzle is demonstrating how the SwiftUI and Haskell components interact, which justifies this complex setup. Our Haskell code exposes functions to a C-compatible ABI using the Foreign Function Interface (FFI).
-- src/Compiler/Interface.hs
{-# LANGUAGE ForeignFunctionInterface #-}
module Compiler.Interface where
import Foreign.C.String ( CString, newCString, peekCString )
import qualified Data.Text as T
-- Our hypothetical DSL parsing function
import Compiler.Parser (parseDSL)
-- This function will be callable from Swift.
-- It takes a C string, parses it using our Haskell parser,
-- and returns a C string with the result (or an error message).
foreign export ccall "parse_dsl_source"
parseDslSource :: CString -> IO CString
parseDslSource :: CString -> IO CString
parseDslSource inputCStr = do
inputStr <- peekCString inputCStr
let sourceText = T.pack inputStr
let result = case parseDSL sourceText of
Left err -> "Error: " ++ show err
Right ast -> "Success: " ++ show ast -- Simplified representation
newCString result
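One thing this example punts on: newCString allocates with malloc, and nothing here ever frees the result. A companion export (the name free_string is ours, not a standard API) would let the caller hand the buffer back. A minimal sketch:

-- Also in Compiler.Interface (sketch). Requires:
--   import Foreign.Marshal.Alloc ( free )
foreign export ccall "free_string"
  freeString :: CString -> IO ()

freeString :: CString -> IO ()
freeString = free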
This Haskell library is built as a dynamic library (.dylib) by stack build. The SwiftUI application then calls this C function through a bridging header.
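Producing that .dylib is the job of a Cabal foreign-library stanza in the package description. A minimal sketch, with hypothetical package and module names:

-- dsl-compiler.cabal (sketch)
foreign-library dsl-compiler
  type:            native-shared
  hs-source-dirs:  src
  other-modules:   Compiler.Interface
                   Compiler.Parser
  build-depends:   base, text, megaparsec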
// In the SwiftUI Project's Bridging Header (Project-Bridging-Header.h)
#ifndef Project_Bridging_Header_h
#define Project_Bridging_Header_h
// Declare the C function exported from our Haskell library.
// The name must match the 'foreign export ccall' name.
const char* parse_dsl_source(const char* source);
#endif /* Project_Bridging_Header_h */
And finally, the Swift code that invokes the Haskell backend:
// ContentView.swift
import SwiftUI
struct ContentView: View {
@State private var dslInput: String = "(define x 10)"
@State private var parseResult: String = ""
var body: some View {
VStack {
TextEditor(text: $dslInput)
.font(.system(.body, design: .monospaced))
.border(Color.gray, width: 1)
.padding()
Button("Parse DSL") {
// The core interaction: calling Haskell from Swift
self.parseResult = self.invokeHaskellParser(source: self.dslInput)
}
.padding()
Text("Result from Haskell backend:")
Text(parseResult)
.font(.system(.body, design: .monospaced))
.foregroundColor(.green)
.frame(maxWidth: .infinity, alignment: .leading)
.padding()
}
.padding()
}
/// A wrapper function to safely call the C interface of the Haskell library.
/// It handles the conversion between Swift's String and C's char pointers.
private func invokeHaskellParser(source: String) -> String {
// The linker must be configured to find the .dylib produced by `stack build`.
guard let cResult = parse_dsl_source(source) else {
return "Error: Haskell function returned null."
}
// Convert the C string result back to a Swift String.
let swiftResult = String(cString: cResult)
        // Important: the string was allocated with `newCString` in Haskell, so
        // the caller owns the memory. In a production app we'd hand it back via
        // the `free_string` export sketched earlier; here we accept the small
        // leak for simplicity.
return swiftResult
}
}
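One detail the view code glosses over: GHC's runtime system must be initialized before the first Haskell call, or the process will crash. A minimal sketch of wiring hs_init into the app's entry point (it assumes the bridging header also includes HsFFI.h from the GHC installation, which declares hs_init and hs_exit):

// DSLStudioApp.swift (hypothetical app entry point)
import SwiftUI

@main
struct DSLStudioApp: App {
    init() {
        // Start the GHC runtime exactly once, before any Haskell function is called.
        hs_init(nil, nil)
    }

    var body: some Scene {
        WindowGroup {
            ContentView()
        }
    }
}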
The overall workflow, from code push to a running VM, is now fully automated.
graph TD
    A[Developer pushes to Git] --> B{CI/CD Trigger};
    B --> C[Packer Build Job];
    C -- 1. Starts from macOS ISO --> D[VMware VM];
    D -- 2. Runs Provisioners --> E[Installs Xcode & Haskell];
    E -- 3. Warms Stack Cache --> F[Finalized VM Image];
    C --> G{Upload Artifact};
    G -- .vmx bundle --> H[vSphere/Artifactory];
    subgraph "Usage"
        I[Developer] -- Pulls image --> J[Runs VM locally];
        K[CI Runner] -- Clones image --> L[Runs Build & Test];
    end
    H --> I;
    H --> K;
The final result is a versioned .vmx bundle, typically 80-100GB in size, stored in our artifact repository. CI job times plummeted from over 45 minutes to under 10, as the lengthy dependency-compilation phase was eliminated. Developer onboarding now consists of two steps: install VMware Fusion and download the latest image. The “works on my machine” class of bugs has been entirely eradicated.
This solution, however, is not without trade-offs. The sheer size of the VM images drives up storage costs and network transfer times. The Packer build itself is slow, often taking two hours end to end, which makes iterating on the base environment cumbersome. The workflow is also architected for Intel-based macOS virtualization; replicating it for Apple Silicon requires a completely different toolchain, as ARM-based macOS virtualization is far more restricted. Finally, secrets such as Git credentials for private Haskell dependencies or the Apple ID used by xcodes must be injected through a secure mechanism like Vault; hardcoding them into scripts bakes them into every image.
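For the secrets problem specifically, Packer's HCL templates can read values from Vault at build time instead of embedding them. A sketch (the Vault path and key are hypothetical, and VAULT_ADDR/VAULT_TOKEN must be set in the build environment):

// In macos-haskell-dev.pkr.hcl
local "xcodes_password" {
  expression = vault("secret/data/ci/apple", "app_specific_password")
  sensitive  = true
}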