kpack 0.4.0: Kubernetes-Native Container Builds with Modern APIs

K8s Guru
5 min read

Introduction

Container image builds are where “source code” becomes “deployable artifact.” Once you’re building images in Kubernetes, you want declarative builds, reproducible outputs, and security guarantees that don’t require maintaining separate CI/CD infrastructure.

kpack 0.4.0, released on April 15, 2021, strengthened the Kubernetes-native build service with a modern API version, Cosign-based image signing, remote registry caching, and better control over build pod scheduling—making container builds more secure, efficient, and integrated with Kubernetes workflows.

Why this matters in practice

  • Modern API surface: v1alpha2 API provides better abstractions and extensibility for build configurations.
  • Security by default: Cosign integration enables image signing without external tooling.
  • Build efficiency: Remote registry caching reduces build times and network usage.
  • Scheduling control: Node affinity and tolerations enable builds on specialized nodes.

v1alpha2 API Introduction

kpack 0.4.0 introduced the v1alpha2 API version, providing a more modern and extensible API surface alongside the existing v1alpha1 API.

What changed:

  • Enhanced resource definitions with a cleaner separation of concerns
  • Improved status reporting with more detailed build information
  • Extended configuration options for build customization
  • Renamed fields for consistency (for example, serviceAccount became serviceAccountName)
  • Coexistence with v1alpha1: existing v1alpha1 resources continue to work

Example v1alpha2 Image resource:

apiVersion: kpack.io/v1alpha2
kind: Image
metadata:
  name: my-app
  namespace: default
spec:
  tag: registry.example.com/my-app
  serviceAccountName: builder
  builder:
    name: my-builder
    kind: Builder
  source:
    git:
      url: https://github.com/example/my-app
      revision: main
  build:
    env:
    - name: BP_JVM_VERSION
      value: "17"
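
Assuming the manifest above is saved as image.yaml, applying it and watching the resulting build are plain kubectl operations (these require a cluster with kpack installed):

```
kubectl apply -f image.yaml

# The Image's READY column flips to True once the first build succeeds
kubectl get image my-app

# Each build run is its own Build resource
kubectl get builds
```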

Cosign Image Signing

kpack 0.4.0 introduced Cosign-based image signing, a more modern and Kubernetes-native alternative to Notary.

Benefits:

  • Kubernetes-native: Cosign integrates better with Kubernetes workflows
  • Simplified setup: No separate Notary server required
  • Better tooling: Cosign CLI and libraries are actively maintained
  • OCI compliance: Uses OCI-compatible signing standards

Configuring Cosign signing:

Rather than a field on the Image resource, kpack enables signing through the build service account: attach a Cosign key secret to the service account referenced by the Image, and kpack signs every image it builds with that key.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: builder
  namespace: default
secrets:
- name: registry-credentials  # existing registry push secret
- name: cosign-key

Cosign key secret:

apiVersion: v1
kind: Secret
metadata:
  name: cosign-key
  namespace: default
type: Opaque
data:
  # Base64-encoded Cosign private key and the password protecting it
  cosign.key: <base64-encoded-private-key>
  cosign.password: <base64-encoded-password>
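
The Cosign CLI can create this secret directly: `cosign generate-key-pair` with a `k8s://` target writes the private key, password, and public key into a Kubernetes Secret, and `cosign verify` can later check a built image (the cosign CLI and cluster access are assumed; names follow the examples above):

```
# Generate a keypair and store it as the cosign-key Secret in the default namespace
cosign generate-key-pair k8s://default/cosign-key

# After a build completes, verify the image signature against the stored public key
cosign verify --key k8s://default/cosign-key registry.example.com/my-app
```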

Remote Registry Caching

kpack 0.4.0 introduced remote registry caching, enabling buildpacks to cache layers in container registries instead of only using local cache.

Benefits:

  • Faster builds: Shared cache across multiple build pods
  • Reduced network usage: Fewer layer downloads
  • Persistent cache: Cache survives pod restarts
  • Multi-cluster sharing: Cache can be shared across clusters

Configuring remote cache:

apiVersion: kpack.io/v1alpha2
kind: Image
metadata:
  name: my-app
spec:
  tag: registry.example.com/my-app
  cache:
    registry:
      tag: registry.example.com/my-app-cache
  # ... rest of spec

Cache registry authentication:

apiVersion: v1
kind: Secret
metadata:
  name: cache-registry-credentials
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: <base64-encoded-docker-config>
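
The `.dockerconfigjson` payload is just a Docker registry auth document, base64-encoded. A minimal sketch of producing it by hand, with placeholder credentials:

```shell
# Hypothetical cache registry credentials (placeholders, not real)
USERNAME=cache-user
PASSWORD=cache-pass

# Docker registry auth is "username:password", base64-encoded
AUTH=$(printf '%s:%s' "$USERNAME" "$PASSWORD" | base64)

# Assemble the docker config JSON and encode the whole document
# for the Secret's .dockerconfigjson data field
DOCKERCONFIG=$(printf '{"auths":{"registry.example.com":{"auth":"%s"}}}' "$AUTH" | base64 | tr -d '\n')
echo "$DOCKERCONFIG"
```

In practice, `kubectl create secret docker-registry cache-registry-credentials --docker-server=... --docker-username=... --docker-password=...` builds the same secret without the manual encoding.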

Service Binding Specification

kpack 0.4.0 added support for the Service Binding Specification, enabling buildpacks to access service credentials and configuration during builds.

Use cases:

  • Database connections: Access database credentials during build
  • API keys: Inject API keys for build-time API calls
  • Configuration: Provide service-specific configuration
  • Secrets management: Integrate with external secret management

Service binding example:

apiVersion: kpack.io/v1alpha2
kind: Image
metadata:
  name: my-app
spec:
  tag: registry.example.com/my-app
  build:
    services:
    - name: database
      kind: Secret
      apiVersion: v1
  # ... rest of spec

Service binding secret:

apiVersion: v1
kind: Secret
metadata:
  name: database
type: servicebinding.io/postgresql
stringData:
  type: postgresql
  host: database.example.com
  port: "5432"
  database: myapp
  username: appuser
  password: secretpassword
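
At build time, the Service Binding specification projects each binding as a directory of files, one file per secret key, under a root the buildpack discovers. A local sketch of the layout the database binding above would produce (the paths are illustrative, not kpack's actual mount point):

```shell
# Simulate the file projection a buildpack sees at build time
BINDING_ROOT=$(mktemp -d)
mkdir -p "$BINDING_ROOT/database"

# Each key of the binding Secret becomes a file named after the key
printf 'postgresql' > "$BINDING_ROOT/database/type"
printf 'database.example.com' > "$BINDING_ROOT/database/host"
printf '5432' > "$BINDING_ROOT/database/port"

# A buildpack inspects the binding type to decide whether it applies
cat "$BINDING_ROOT/database/type"
```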

Build Pod Scheduling

kpack 0.4.0 added node affinity and tolerations support for build pods, enabling better control over where builds execute.

Node affinity example:

apiVersion: kpack.io/v1alpha2
kind: Image
metadata:
  name: my-app
spec:
  tag: registry.example.com/my-app
  build:
    nodeSelector:
      kubernetes.io/arch: amd64
    affinity:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: node-type
              operator: In
              values:
              - build
  # ... rest of spec

Tolerations example:

apiVersion: kpack.io/v1alpha2
kind: Image
metadata:
  name: my-app
spec:
  tag: registry.example.com/my-app
  build:
    tolerations:
    - key: build-workload
      operator: Equal
      value: "true"
      effect: NoSchedule
  # ... rest of spec

Use cases:

  • GPU builds: Schedule builds on GPU-enabled nodes
  • High-memory builds: Use nodes with more memory
  • Dedicated build nodes: Isolate builds from application workloads
  • Cost optimization: Use spot instances for builds
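
For the affinity and toleration examples above to schedule anywhere, the target nodes need the matching label and taint. A sketch with a hypothetical node name:

```
# Label a node so the required node affinity (node-type=build) can match it
kubectl label nodes build-node-1 node-type=build

# Taint the node so only pods tolerating build-workload=true land there
kubectl taint nodes build-node-1 build-workload=true:NoSchedule
```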

Getting Started

Install kpack

# Apply the kpack release manifest
kubectl apply -f https://github.com/pivotal/kpack/releases/download/v0.4.0/release-0.4.0.yaml

# Verify installation
kubectl get pods -n kpack

Create a Builder

apiVersion: kpack.io/v1alpha2
kind: Builder
metadata:
  name: my-builder
spec:
  serviceAccountName: builder
  tag: registry.example.com/my-builder
  stack:
    name: base
    kind: ClusterStack
  store:
    name: default
    kind: ClusterStore
  order:
  - group:
    - id: paketo-buildpacks/java
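
The Builder above references a ClusterStack named base and a ClusterStore named default. A minimal sketch of those resources using public Paketo images (the image references are illustrative and may have moved since 0.4.0):

```yaml
apiVersion: kpack.io/v1alpha2
kind: ClusterStack
metadata:
  name: base
spec:
  id: io.buildpacks.stacks.bionic
  buildImage:
    image: paketobuildpacks/build:base-cnb
  runImage:
    image: paketobuildpacks/run:base-cnb
---
apiVersion: kpack.io/v1alpha2
kind: ClusterStore
metadata:
  name: default
spec:
  sources:
  - image: gcr.io/paketo-buildpacks/java
```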

Create an Image

apiVersion: kpack.io/v1alpha2
kind: Image
metadata:
  name: my-app
spec:
  tag: registry.example.com/my-app
  serviceAccount: builder
  builder:
    name: my-builder
    kind: Builder
  source:
    git:
      url: https://github.com/example/my-app
      revision: main

Monitor Builds

# Watch image builds
kubectl get images -w

# Check build logs (each build step runs in its own container)
kubectl logs -f <build-pod-name> --all-containers

# Get build status
kubectl describe image my-app

Migration from v1alpha1

kpack 0.4.0 maintains backward compatibility with v1alpha1 resources. To migrate:

  1. Update the apiVersion and any renamed fields (for example, serviceAccount becomes serviceAccountName)
  2. Review new features like Cosign signing and remote caching
  3. Update build configurations to use new v1alpha2 features
  4. Test builds to ensure compatibility

Migration example:

# Before (v1alpha1)
apiVersion: kpack.io/v1alpha1
kind: Image
metadata:
  name: my-app
spec:
  tag: registry.example.com/my-app
  serviceAccount: builder
  # ... rest of spec

# After (v1alpha2)
apiVersion: kpack.io/v1alpha2
kind: Image
metadata:
  name: my-app
spec:
  tag: registry.example.com/my-app
  serviceAccountName: builder
  # ... rest of spec, plus optional new features such as registry caching

Summary

  • Release Date: April 15, 2021
  • Headline Features: v1alpha2 API, Cosign signing, remote registry caching, service binding spec, build pod scheduling
  • Why it Matters: Delivers modern, secure, and efficient Kubernetes-native container builds with better API design and security features

kpack 0.4.0 marked a significant milestone in Kubernetes-native container builds, introducing modern APIs, improved security with Cosign, and better build efficiency through remote caching. The v1alpha2 API provided a foundation for future enhancements, while Cosign integration brought image signing into the Kubernetes-native workflow.

For teams building container images in Kubernetes, kpack 0.4.0 provided a more secure, efficient, and integrated build experience. The combination of modern APIs, Cosign signing, remote caching, and flexible scheduling made kpack a compelling choice for organizations looking to build containers without maintaining separate CI/CD infrastructure.