# Self-Hosting
The manager is your control plane — it stores releases, coordinates deployments, dispatches commands, and collects telemetry from every customer environment.
## Run the Manager
The manager is available as a Docker image:
```shell
docker run -d \
  -p 8080:8080 \
  -v alien-data:/data \
  -e BASE_URL=https://manager.example.com \
  ghcr.io/alienplatform/alien-manager
```

Deploy it wherever you run containers — ECS, Cloud Run, Kubernetes, a VM, anything. The only requirement is a persistent disk for the SQLite database. On ECS, use EFS. On Kubernetes, use a PVC.
On first run, the manager generates an admin API key and prints it to stdout. Save it — you'll need it to configure the CLI.
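If the key has already scrolled by, you can usually recover it from the container's first-run logs. This is a sketch: it assumes you named the container `alien-manager` (pass `--name alien-manager` to `docker run`) and that admin keys carry the `ax_admin_` prefix shown later in this guide.

```shell
# Pull the first admin key out of the container's startup logs.
# Container name and key prefix are assumptions — adjust to your setup.
docker logs alien-manager 2>&1 | grep -o 'ax_admin_[A-Za-z0-9_]*' | head -n1
```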
You can also run the manager binary directly:
```shell
alien serve
```

This is useful for local development. It starts the manager on port 8080 with an embedded registry and a local SQLite database.
## Configure the CLI
Point the CLI at your manager:
```shell
export ALIEN_MANAGER_URL=https://manager.example.com
export ALIEN_API_KEY=ax_admin_...
```

## What the Manager Does
- Stores releases — immutable snapshots of your built code, pushed via `alien release`
- Manages deployments — runs a deployment loop that pushes updates to customer environments
- Hosts an artifact registry — embedded OCI registry for container images (or connects to ECR/GAR/ACR)
- Dispatches commands — routes remote command invocations to the right deployment
- Collects telemetry — receives OpenTelemetry logs, metrics, and traces from deployed functions and forwards them to your observability backend
- Manages tokens — API keys for authentication between the CLI, deployments, and the manager
## Configuration
The manager is configured via `alien-manager.toml`. Generate a template:
```shell
alien serve --init
```

Or mount a config file into the Docker container:
```shell
docker run -d \
  -p 8080:8080 \
  -v alien-data:/data \
  -v ./alien-manager.toml:/app/alien-manager.toml \
  -e BASE_URL=https://manager.example.com \
  ghcr.io/alienplatform/alien-manager
```

See the full Configuration Reference for all options.
## Cloud Artifact Registries
By default, the manager runs an embedded OCI registry. This works for pull-mode deployments (the agent pulls images over HTTPS). For push-mode deployments (AWS Lambda, GCP Cloud Run, Azure Container Apps), you need a cloud registry so the platform can pull images natively.
```toml
[artifact-registry.aws]
service = "ecr"
repositoryPrefix = "alien-artifacts"
pushRoleArn = "arn:aws:iam::123456789:role/ecr-push"
```

Configure per platform: `[artifact-registry.aws]`, `[artifact-registry.gcp]`, `[artifact-registry.azure]`. See the Configuration Reference for details.
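If you prefer to create the registry by hand rather than with the Terraform modules described later, the ECR repository can be created ahead of time with the AWS CLI. The repository name here (the `repositoryPrefix` above plus a hypothetical project name) and the region are assumptions:

```shell
# Sketch: pre-create an ECR repository matching the configured prefix.
# Repository name and region are placeholders — adjust to your project.
aws ecr create-repository \
  --repository-name alien-artifacts/my-project \
  --region us-east-1
```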
## Cross-Account Impersonation
For push-mode deployments, the manager needs to call cloud APIs in the customer's environment. Configure a service identity that the manager can assume:
```toml
[impersonation.aws]
service = "awsiam"
roleName = "alien-management"
roleArn = "arn:aws:iam::123456789:role/alien-management"
```

See Impersonation for how this works on each cloud, and the Configuration Reference for the config format.
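On AWS, the customer-side role needs a trust policy that lets the manager's IAM principal assume it. A hedged sketch of creating that role with the AWS CLI — both ARNs are placeholders, and the exact permissions the manager needs are not shown here:

```shell
# Sketch: create the management role in the customer account and trust
# the manager's principal. ARNs are placeholders — substitute your own.
aws iam create-role \
  --role-name alien-management \
  --assume-role-policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111111111111:role/alien-manager" },
      "Action": "sts:AssumeRole"
    }]
  }'
```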
## Commands Backend
The commands protocol needs a KV store and blob storage for command state and large payloads. By default, this uses the local filesystem. For production, use a cloud backend:
```toml
[commands]
kv = { service = "dynamodb", tableName = "alien-commands", region = "us-east-1" }
storage = { service = "s3", bucketName = "alien-command-storage" }
```

See the Configuration Reference for all backend options.
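The Terraform modules described later can provision these resources for you; if you create them by hand instead, a sketch with the AWS CLI might look like the following. The key schema is an assumption about the manager's storage layout, so check the Configuration Reference before relying on it:

```shell
# Sketch: create the KV table and blob bucket for the commands backend.
# The pk hash-key schema is an assumption, not a documented contract.
aws dynamodb create-table \
  --table-name alien-commands \
  --attribute-definitions AttributeName=pk,AttributeType=S \
  --key-schema AttributeName=pk,KeyType=HASH \
  --billing-mode PAY_PER_REQUEST \
  --region us-east-1

aws s3api create-bucket --bucket alien-command-storage --region us-east-1
```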
## Telemetry
Forward OpenTelemetry data from deployed functions to your observability backend:
```toml
[telemetry]
otlp-endpoint = "https://otel-collector.example.com:4318"

[telemetry.headers]
DD-API-KEY = "your-datadog-key"
```

Works with any OTLP-compatible backend — Grafana, Datadog, Honeycomb, Jaeger, etc.
## Provisioning Cloud Resources
For push-mode deployments, the manager needs cloud resources — an artifact registry, a commands backend, and an impersonation identity. We provide Terraform modules that create these resources for each cloud provider:
```hcl
module "alien_infra" {
  source = "github.com/aliendotdev/alien//infra/aws"

  name          = "my-project"
  principal_arn = aws_iam_role.manager.arn

  enable_artifact_registry = true
  enable_commands_store    = true
  enable_impersonation     = true
}
```

The modules output structured `config_values` that map directly to `alien-manager.toml` sections. Available for AWS, GCP, and Azure.
These modules provision only the supporting resources — they do not deploy the manager itself. Run the manager wherever you like and point it at these resources via alien-manager.toml.
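After `terraform apply`, the structured output can be inspected and copied into `alien-manager.toml`. This assumes `jq` is installed; the output name `config_values` comes from the module description above:

```shell
# Read the module's structured config output after apply.
terraform output -json config_values | jq .
```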
## Production Checklist
- Persistent disk for the SQLite database (`/data` or configured path)
- Set `base-url` to your public URL (required for agent communication and command dispatch)
- Configure a cloud artifact registry for each platform you deploy to
- Configure impersonation for push-mode deployments
- Set up a cloud commands backend (DynamoDB + S3, Firestore + GCS, etc.)
- Configure telemetry to forward logs and traces
- Run behind a reverse proxy with TLS (the manager serves HTTP)
- Back up the SQLite database regularly
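For the last item, the `sqlite3` CLI's online `.backup` command produces a consistent copy even while the manager is running. The database and backup paths here are assumptions — substitute your configured data path:

```shell
# Sketch: consistent online backup of the manager's SQLite database.
# Paths are placeholders — point at your actual data volume.
sqlite3 /data/alien.db ".backup '/backups/alien-$(date +%F).db'"
```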