Containers

Run containers inside the customer's cloud without managing a Kubernetes cluster per customer.

Some workloads don't fit the function model — they need persistent disks, long-lived connections, GPUs, or high replica counts. They're stateful databases, vector engines, AI gateways, log collectors, queues, or workflow runtimes.

Alien Containers gives you a managed container runtime that lives inside each customer's cloud, controlled from your own. You write one builder, and Alien provisions the right VMs, disks, registries, and load balancers per provider — without asking the customer to host another Kubernetes cluster for your product.

Why not Kubernetes per customer

In classic Kubernetes the control plane and the data plane live together. Every customer ends up with their own API server, scheduler, CNI, upgrade window, and operator runbook. At ten customers it's painful. At a thousand it's a job.

Alien splits the two:

  • Control plane runs in your environment. Orchestration, releases, scaling decisions, and metadata stay where your team already operates.
  • Data plane runs in their environment as plain VMs that can come and go. Disks and object storage stay in the customer account.
╔═ Your Cloud ════════════╗            ╔═ Customer's Cloud ═════════════════╗
║                         ║            ║                                    ║
║  Alien control plane    ║──cloud API─║──▶ ┏━━━━━━━━━━┓  ┏━━━━━━━━━━┓     ║
║  Orchestration          ║            ║    ┃   VM     ┃  ┃   VM     ┃     ║
║  Release graph          ║            ║    ┃ replica  ┃  ┃ replica  ┃     ║
║                         ║            ║    ┗━━━━━┯━━━━┛  ┗━━━━━┯━━━━┛     ║
║                         ║            ║          │             │           ║
║                         ║            ║       persistent disk / object     ║
║                         ║            ║       storage (customer-owned)     ║
╚═════════════════════════╝            ╚════════════════════════════════════╝

You scale to thousands of tenants the way you'd scale one product — one orchestrator version, one set of upgrade tooling, no per-customer cluster babysitting.
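One way to picture the split is as a reconciliation loop: the control plane in your cloud compares the declared replica count against the VMs it can see through the customer's cloud API, then issues create or delete calls to close the gap. The sketch below is illustrative only — the types and function are ours, not Alien's API:

```typescript
// Illustrative only: a toy reconcile step showing how a control plane in
// your cloud can drive a data plane of disposable VMs in the customer's cloud.
type VmAction = { op: "create" | "delete"; id: string };

function reconcile(desiredReplicas: number, liveVmIds: string[]): VmAction[] {
  const actions: VmAction[] = [];
  // Scale up: request more VMs through the customer's cloud API.
  for (let i = liveVmIds.length; i < desiredReplicas; i++) {
    actions.push({ op: "create", id: `replica-${i}` });
  }
  // Scale down: VMs come and go; disks and object storage stay behind.
  for (const id of liveVmIds.slice(desiredReplicas)) {
    actions.push({ op: "delete", id });
  }
  return actions;
}
```

Because the loop lives in your environment, one orchestrator version serves every tenant.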

Declaring a container

Containers are declared like any other resource in alien.ts. Provide either a prebuilt image or a source directory, then attach storage, autoscaling, and links to other resources:

alien.ts
import * as alien from "@alienplatform/core"

const storage = new alien.Storage("vectors").build()

const writer = new alien.Container("writer")
  .code({ type: "image", image: "acme/vector-writer:v2" })
  .stateful(true)
  .persistentStorage({ size: "100Gi", mountPath: "/wal" })
  .link(storage)
  .build()

const reader = new alien.Container("reader")
  .code({ type: "image", image: "acme/vector-reader:v2" })
  .ephemeralStorage("500Gi")        // NVMe cache
  .autoscaling({ min: 2, max: 20 })
  .link(storage)
  .build()

export default new alien.Stack("vector-db")
  .add(storage, "frozen")
  .add(writer, "live")
  .add(reader, "live")
  .build()

The same builder works across AWS, GCP, and Azure. The platform picks the right VM family, disk type, registry, and load balancer per cloud — you don't write three deployment scripts.
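The per-cloud mapping can be illustrated with a toy lookup. The disk product names below are real AWS/GCP/Azure offerings, but the table and function are our illustration of the idea, not the mapping Alien actually applies:

```typescript
// Illustrative only: one provider-neutral spec, three provider-specific renderings.
type Cloud = "aws" | "gcp" | "azure";

// Real disk products per cloud; the selection itself stands in for what the
// platform derives from the declared container spec.
const persistentDisk: Record<Cloud, string> = {
  aws: "gp3 EBS volume",
  gcp: "pd-balanced persistent disk",
  azure: "Premium SSD managed disk",
};

function renderDisk(cloud: Cloud, sizeGi: number): string {
  return `${sizeGi}Gi ${persistentDisk[cloud]}`;
}
```

The same declaration (`persistentStorage({ size: "100Gi", ... })`) lands as a different concrete resource in each cloud.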

Capabilities

  • Stateful containers: Long-lived replicas with persistent disks for managed databases, queues, or stateful workers.
  • Ephemeral NVMe: Local SSD that disappears on restart — perfect for caches, read replicas, and zero-disk architectures.
  • Container autoscaling: Min / max replicas, target utilisation. The data plane scales without operator intervention.
  • Machine autoscaling: The control plane provisions and tears down underlying VMs as replica demand changes.
  • GPU capacity groups: Pin containers to GPU pools (for fine-tuning, inference, or model serving).
  • Custom images: Bring an existing OCI image. Source-built containers are packaged through the toolchain.
  • Built-in observability: A small per-host collector publishes container health, logs, and HTTP signals — no per-pod sidecars to operate.
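As a concrete illustration of container autoscaling by target utilisation, the common pattern (used, for example, by Kubernetes' horizontal pod autoscaler) computes desired replicas as ceil(current × observed ÷ target), clamped to the declared min/max. This is the standard formula, not necessarily Alien's exact algorithm:

```typescript
// Illustrative only: classic target-utilisation autoscaling arithmetic.
function desiredReplicas(
  current: number,     // current replica count
  observed: number,    // observed average utilisation, e.g. 0.9 = 90%
  target: number,      // target utilisation, e.g. 0.6
  min: number,
  max: number,
): number {
  const raw = Math.ceil(current * (observed / target));
  return Math.min(max, Math.max(min, raw)); // honour the declared bounds
}
```

With `autoscaling({ min: 2, max: 20 })`, four replicas at 90% utilisation against a 60% target would grow to six.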

Permissions and ownership

The customer owns the data plane: the VMs, disks, and object storage live in their account, billed against their cloud commitments. Your control plane only holds the orchestration metadata it needs to schedule and update those containers.

Container resources participate in the same permissions model as the rest of Alien — declared once, derived into per-cloud IAM policies, reviewable by the customer's security team.
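To make "declared once, derived into per-cloud IAM policies" concrete, here is a toy derivation for a single abstract grant. The AWS actions, GCP permissions, and Azure role names are real identifiers; the grant type and derivation function are our illustration, not Alien's implementation:

```typescript
// Illustrative only: one abstract grant fanned out to per-cloud IAM.
type Grant = { resource: "objectStorage"; access: "read" | "write" };

function deriveIam(grant: Grant): Record<string, string[]> {
  const read = grant.access === "read";
  return {
    aws: read ? ["s3:GetObject", "s3:ListBucket"] : ["s3:PutObject"],
    gcp: read
      ? ["storage.objects.get", "storage.objects.list"]
      : ["storage.objects.create"],
    azure: read
      ? ["Storage Blob Data Reader"]
      : ["Storage Blob Data Contributor"],
  };
}
```

A security team reviews the one abstract grant, then audits the derived policies in their own cloud's native terms.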

When to use containers vs functions

Use a function when the workload is short-lived, mostly stateless, and bursty. Functions are great for HTTP APIs, agent workers, scheduled jobs, and event handlers.

Use a container when you need:

  • Persistent local state (databases, queues, search indices)
  • Specialised hardware (GPUs, large NVMe, lots of RAM)
  • Long-running connections (websockets, streaming protocols)
  • Constant baseline traffic that benefits from autoscaling rather than per-invocation packaging
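The checklist above can be folded into a toy decision helper — the trait names are ours, chosen for illustration:

```typescript
// Illustrative only: encode the container-vs-function checklist.
type Workload = {
  persistentState: boolean;   // databases, queues, search indices
  specialHardware: boolean;   // GPUs, large NVMe, lots of RAM
  longConnections: boolean;   // websockets, streaming protocols
  constantBaseline: boolean;  // steady traffic rather than bursty
};

function pickResource(w: Workload): "container" | "function" {
  const needsContainer =
    w.persistentState || w.specialHardware ||
    w.longConnections || w.constantBaseline;
  return needsContainer ? "container" : "function";
}
```

Any one of the four traits tips the workload toward a container; short-lived, stateless, bursty work stays a function.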

Both kinds of resources live in the same stack, share the same links, and ship through the same release process.
