Helm Deployment Guide
Deploy PRECINCT on Kubernetes using a thin Helm chart that wraps the existing kustomize overlays. Helm provides values-driven configuration and lifecycle management while kustomize remains the source of truth for all Kubernetes manifests.
Overview
The PRECINCT Helm chart follows a thin chart + kustomize post-renderer pattern. Rather than duplicating manifests as Helm templates, the chart uses `values.yaml` for configuration and delegates manifest generation to kustomize. This approach provides three benefits:
- Single source of truth: kustomize overlays remain the canonical manifests. No drift between Helm and raw kustomize deployments.
- Values-driven overrides: Helm's `values.yaml` provides a clean interface for environment-specific configuration without editing kustomization files.
- Lifecycle management: Helm provides `install`, `upgrade`, `rollback`, and `uninstall` lifecycle operations that kustomize alone does not offer.
You do not need Helm to deploy PRECINCT. The kustomize overlays
work standalone with `kustomize build | kubectl apply`.
Helm adds convenience for teams that prefer values-driven
configuration and release management.
Architecture
The Helm chart uses a kustomize post-renderer to bridge Helm's templating engine with kustomize's overlay system. The data flow is:
```text
values.yaml
    |
    v
Helm Template Engine
    |
    |  (renders kustomization.yaml selecting the correct overlay)
    v
Post-Renderer (kustomize-render.sh)
    |
    |  (stdin: Helm output --> kustomize build --> stdout: final manifests)
    v
kubectl apply
    |
    v
Kubernetes API Server
```
The post-renderer script receives Helm's rendered output on stdin,
writes it to a temporary directory alongside the kustomize overlays,
runs `kustomize build`, and outputs the final manifests
to stdout. This means Helm manages the release lifecycle while
kustomize handles all manifest patching, configmap generation, and
overlay selection.
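The contract described above can be sketched as a short bash script. This is a hypothetical sketch, not the chart's shipped `kustomize-render.sh`; the temp-directory layout and file names are illustrative:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of a Helm post-renderer that bridges to kustomize.
# The chart's actual kustomize-render.sh is authoritative.
set -euo pipefail

# Helm pipes its fully rendered output -- including the generated
# kustomization.yaml -- to the post-renderer on stdin.
tmpdir="$(mktemp -d)"
trap 'rm -rf "$tmpdir"' EXIT
cat > "$tmpdir/kustomization.yaml"

# If Helm sent output and kustomize is available, resolve the overlay
# references and emit the final manifests on stdout for kubectl.
# (A production script would instead fail loudly if kustomize is missing.)
if [ -s "$tmpdir/kustomization.yaml" ] && command -v kustomize >/dev/null 2>&1; then
  kustomize build "$tmpdir"
fi
```

Because the post-renderer reads stdin and writes stdout, it composes with any Helm command that accepts `--post-renderer`, including `helm template` for offline inspection.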
Quick Start
Install PRECINCT with the default local environment preset. This deploys the full stack to a Docker Desktop Kubernetes cluster.
Install
```shell
cd POC

# Install with the local preset (default)
helm install precinct charts/precinct/ \
  --post-renderer charts/precinct/post-renderer/kustomize-render.sh

# Or install with a specific environment
helm install precinct charts/precinct/ \
  -f charts/precinct/values-dev.yaml \
  --post-renderer charts/precinct/post-renderer/kustomize-render.sh
```
Upgrade
```shell
helm upgrade precinct charts/precinct/ \
  -f charts/precinct/values-prod.yaml \
  --post-renderer charts/precinct/post-renderer/kustomize-render.sh
```
Status
```shell
helm status precinct
helm get values precinct
```
Uninstall
```shell
helm uninstall precinct
```
Chart Structure
The Helm chart lives in `POC/charts/precinct/` and has
the following structure:
```text
POC/charts/precinct/
|-- Chart.yaml              # Chart metadata (name, version, description)
|-- values.yaml             # Default values (local environment)
|-- values-local.yaml       # Local environment overrides
|-- values-dev.yaml         # Dev environment overrides
|-- values-staging.yaml     # Staging environment overrides
|-- values-prod.yaml        # Production environment overrides
|-- templates/
|   |-- _helpers.tpl        # Template helper functions
|   |-- kustomization.yaml  # Rendered kustomization selecting overlay
|-- post-renderer/
    |-- kustomize-render.sh # Bridge script: Helm -> kustomize -> kubectl
```
Note the absence of traditional Helm templates for Deployments,
Services, and ConfigMaps. Those live in the kustomize base and
overlays under `POC/infra/eks/`. The chart's only
template is a `kustomization.yaml` that points to the
correct overlay based on `.Values.global.environment`.
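The rendered template can be imagined as follows. This is an illustrative sketch, not the chart's actual template; the relative path is only an assumption about the post-renderer's working layout:

```yaml
# templates/kustomization.yaml (illustrative sketch; relative path
# depends on where the post-renderer stages the rendered output)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../infra/eks/overlays/{{ .Values.global.environment }}
```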
Values Reference
The following tables document all configurable values organized by group. Default values match the local environment preset.
Global
| Key | Default | Description |
|---|---|---|
| `global.environment` | `local` | Target environment: `local`, `dev`, `staging`, `prod` |
| `global.trustDomain` | `agentic-ref-arch.poc` | SPIFFE trust domain for all workload identities |
| `global.imageRegistry` | `""` | Container registry prefix (e.g., `ghcr.io/org`) |
Gateway
| Key | Default | Description |
|---|---|---|
| `gateway.replicas` | `1` | Number of gateway pod replicas |
| `gateway.image.repository` | `mcp-security-gateway` | Gateway container image |
| `gateway.image.tag` | `latest` | Image tag |
| `gateway.resources.requests.cpu` | `50m` | CPU request |
| `gateway.resources.requests.memory` | `64Mi` | Memory request |
| `gateway.resources.limits.cpu` | `100m` | CPU limit |
| `gateway.resources.limits.memory` | `128Mi` | Memory limit |
| `gateway.env.logLevel` | `debug` | Log verbosity: `debug`, `info`, `warn`, `error` |
| `gateway.env.spiffeMode` | `dev` | SPIFFE mode: `dev` (header) or `prod` (mTLS) |
| `gateway.env.enforcementProfile` | `dev` | Enforcement: `dev`, `prod_standard`, `prod_regulated_hipaa` |
| `gateway.env.rateLimitRPM` | `600` | Rate limit: requests per minute per identity |
| `gateway.env.rateLimitBurst` | `100` | Token bucket burst size |
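As with any chart, these keys can be overridden from a custom values file layered on top of an environment preset. For example, a hypothetical override file loosening the rate limiter:

```yaml
# my-values.yaml (hypothetical): raise the per-identity rate limit.
gateway:
  env:
    rateLimitRPM: 1200
    rateLimitBurst: 200
```

Pass it with an additional `-f my-values.yaml` after the preset file; when multiple `-f` files are given, later files take precedence.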
SPIRE
| Key | Default | Description |
|---|---|---|
| `spire.server.replicas` | `1` | SPIRE server replicas (StatefulSet) |
| `spire.server.storageSize` | `1Gi` | PVC size for SPIRE server data |
| `spire.agent.nodeAttestor` | `join_token` | Node attestor: `join_token` or `k8s_psat` |
SPIKE
| Key | Default | Description |
|---|---|---|
| `spike.nexus.replicas` | `1` | SPIKE Nexus replicas |
| `spike.nexus.backendStore` | `sqlite` | Backend: `memory`, `sqlite`, `lite` |
| `spike.nexus.storageSize` | `1Gi` | PVC size for Nexus SQLite data |
| `spike.keeper.shamirThreshold` | `1` | Shamir secret sharing threshold |
| `spike.keeper.shamirShares` | `1` | Total Shamir shares |
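The single-share defaults suit a one-node local cluster. For a more resilient setup one might raise the share count and threshold, e.g. a 2-of-3 scheme. These values are illustrative only; confirm supported topologies in the SPIKE documentation:

```yaml
# Hypothetical HA override: any 2 of 3 keeper shares suffice to
# reconstruct the root key material.
spike:
  keeper:
    shamirThreshold: 2
    shamirShares: 3
```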
Observability
| Key | Default | Description |
|---|---|---|
| `observability.phoenix.enabled` | `true` | Enable Phoenix tracing UI |
| `observability.opensearch.enabled` | `false` | Enable OpenSearch audit indexing |
| `observability.otelEndpoint` | `host.docker.internal:4317` | OpenTelemetry collector gRPC endpoint |
Services, Network Policies, Storage, Security
| Key | Default | Description |
|---|---|---|
| `services.keydb.enabled` | `true` | Deploy KeyDB for session/rate-limit state |
| `services.mcpServer.replicas` | `1` | MCP tool server replicas |
| `networkPolicies.enabled` | `true` | Deploy default-deny NetworkPolicies |
| `storage.storageClass` | `""` | StorageClass (empty = cluster default) |
| `security.podSecurityStandards` | `privileged` | PSS level: `privileged`, `baseline`, `restricted` |
| `admissionControl.gatekeeper.enabled` | `true` | Deploy OPA Gatekeeper constraints |
| `admissionControl.sigstore.enabled` | `false` | Enable sigstore/policy-controller |
Environment Presets
Each environment preset is a `values-<env>.yaml`
file that overrides the base `values.yaml`. Use the
`-f` flag to select a preset during install or upgrade.
| Setting | local | dev | staging | prod |
|---|---|---|---|---|
| Gateway replicas | 1 | 1 | 2 | 3 |
| MCP server replicas | 1 | 1 | 1 | 2 |
| Service type | NodePort (30090) | ClusterIP | ClusterIP | ClusterIP |
| SPIFFE mode | dev | dev | prod | prod |
| Enforcement profile | dev | dev | prod_standard | prod_standard |
| Log level | debug | debug | info | info |
| PSS enforcement | privileged | privileged | baseline | restricted |
| Network policies | enabled | enabled | enabled | enabled |
| Sigstore enforcement | disabled | disabled | enabled | enabled |
| Gateway CPU limit | 100m | 200m | 500m | 1000m |
| Gateway memory limit | 128Mi | 128Mi | 256Mi | 512Mi |
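For example, the prod column above corresponds to overrides along these lines. This is an illustrative excerpt mirroring the table, not the chart's actual values-prod.yaml:

```yaml
# Hypothetical excerpt matching the prod column of the preset table.
gateway:
  replicas: 3
  env:
    logLevel: info
    spiffeMode: prod
    enforcementProfile: prod_standard
  resources:
    limits:
      cpu: 1000m
      memory: 512Mi
services:
  mcpServer:
    replicas: 2
security:
  podSecurityStandards: restricted
admissionControl:
  sigstore:
    enabled: true
```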
Kustomize-Native Path
Teams that do not need Helm's lifecycle management can use kustomize
directly. The overlays in `POC/infra/eks/overlays/` are
self-contained and do not depend on Helm.
```shell
# Build and apply a specific overlay
kustomize build infra/eks/overlays/local/ | kubectl apply -f -

# Or use Make targets
make k8s-up    # Local overlay (Docker Desktop)
make k8s-down  # Tear down
```
Available overlays:
- `overlays/base/`: Shared base resources (gateway, MCP server, namespaces, service accounts)
- `overlays/local/`: Docker Desktop: NodePort, join token attestor, reduced resources, mock MCP server
- `overlays/dev/`: Development: 1 replica, debug logging, dev image tags
- `overlays/staging/`: Staging: 2 replicas, prod_standard enforcement, mTLS
- `overlays/prod/`: Production: 3 replicas, HA with PDBs, structured audit logging
CI/CD Integration
PRECINCT's Helm chart works with GitOps controllers and CI pipelines. Below are examples for the most common tools.
ArgoCD
```yaml
# argocd-application.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: precinct
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/RamXX/agentic_reference_architecture.git
    targetRevision: main
    path: POC/charts/precinct
    helm:
      valueFiles:
        - values-prod.yaml
  destination:
    server: https://kubernetes.default.svc
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```
Flux
```yaml
# flux-helmrelease.yaml
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: precinct
  namespace: flux-system
spec:
  interval: 10m
  chart:
    spec:
      chart: POC/charts/precinct
      sourceRef:
        kind: GitRepository
        name: agentic-ref-arch
      interval: 5m
  valuesFrom:
    - kind: ConfigMap
      name: precinct-values
      valuesKey: values-prod.yaml
```
GitHub Actions
```yaml
# .github/workflows/deploy.yml
name: Deploy PRECINCT
on:
  push:
    branches: [main]
    paths: ['POC/charts/**', 'POC/infra/**']
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Configure kubectl
        uses: azure/setup-kubectl@v3
      - name: Install Helm
        uses: azure/setup-helm@v4
      - name: Deploy
        run: |
          cd POC
          helm upgrade --install precinct charts/precinct/ \
            -f charts/precinct/values-${{ env.ENVIRONMENT }}.yaml \
            --post-renderer charts/precinct/post-renderer/kustomize-render.sh \
            --wait --timeout 10m
```
Upgrade and Rollback
Helm tracks release history, enabling rollback to any previous revision.
Upgrade with new values
```shell
# Upgrade to a new environment preset or values
helm upgrade precinct charts/precinct/ \
  -f charts/precinct/values-staging.yaml \
  --post-renderer charts/precinct/post-renderer/kustomize-render.sh

# View release history
helm history precinct
```
Rollback
```shell
# Rollback to the previous revision (omit the revision number)
helm rollback precinct

# Rollback to a specific revision
helm rollback precinct 3
```
Persistent State
The following table shows what survives across upgrades and what must be re-created.
| Component | Survives Upgrade | Notes |
|---|---|---|
| SPIRE server data (PVC) | Yes | Registration entries and CA bundles persist |
| SPIKE Nexus SQLite (PVC) | Yes | AES-256-GCM encrypted secrets persist |
| KeyDB data (PVC) | Yes | Session and rate limit state persists |
| SPIRE join tokens | No | Regenerated on each deployment |
| SPIKE bootstrap shards | No | Re-delivered via bootstrap Job |
| OPA policies (ConfigMap) | Updated | Replaced with latest from chart |
Migration from Kustomize-Only
If you are already deploying PRECINCT with raw kustomize and want to adopt Helm, follow these steps:
1. Export current state: Run `kustomize build infra/eks/overlays/<env>/ > current-manifests.yaml` to capture your currently deployed manifests.
2. Map values: Compare your kustomize patches with the Helm `values.yaml`. Most patches correspond to a values key (e.g., `spec.replicas: 3` maps to `gateway.replicas: 3`).
3. Create values file: Write a `values-<env>.yaml` that captures your customizations.
4. Dry-run install: Run `helm install precinct charts/precinct/ -f values-<env>.yaml --post-renderer charts/precinct/post-renderer/kustomize-render.sh --dry-run` and diff the output against `current-manifests.yaml`.
5. Install: Once the diff is clean, run the install without `--dry-run`. Helm will adopt the existing resources.
Existing PersistentVolumeClaims (SPIRE data, SPIKE data, KeyDB
data) are not automatically adopted by Helm. If you need Helm to
manage their lifecycle, annotate them with
`meta.helm.sh/release-name` and
`meta.helm.sh/release-namespace` before the first
`helm install`.
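For illustration, an adoptable PVC's metadata would look roughly like this. The PVC name and namespace are hypothetical; note that Helm's adoption check also expects the `app.kubernetes.io/managed-by: Helm` label alongside the two annotations:

```yaml
# Hypothetical PVC metadata prepared for Helm adoption.
metadata:
  name: spire-server-data
  labels:
    app.kubernetes.io/managed-by: Helm
  annotations:
    meta.helm.sh/release-name: precinct
    meta.helm.sh/release-namespace: default
```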