# Configuration

The manager is configured via alien-manager.toml. Generate a template:

```sh
alien serve --init
```

Place the file in the working directory, or specify a path:

```sh
alien serve --config /etc/alien/manager.toml
```

Configuration priority (lowest to highest): built-in defaults → TOML file values → environment variables → CLI flags.
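As an illustration of the merge order (hypothetical code, not the manager's actual implementation), later sources override earlier ones:

```python
import os

def resolve_config(defaults, file_values, env_map, cli_flags):
    """Merge config sources, lowest to highest priority.

    env_map maps config keys to environment variable names
    (e.g. "port" -> "PORT"); unset variables are skipped.
    """
    merged = dict(defaults)
    merged.update(file_values)
    for key, var in env_map.items():
        if var in os.environ:
            merged[key] = os.environ[var]
    merged.update(cli_flags)
    return merged

# Example: the CLI flag wins over the env var, file value, and default.
os.environ["PORT"] = "9000"
cfg = resolve_config(
    defaults={"port": 8080, "host": "0.0.0.0"},
    file_values={"port": 8081},
    env_map={"port": "PORT", "host": "HOST"},
    cli_flags={"port": 8082},
)
print(cfg["port"])  # 8082: CLI flag has highest priority
```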

## Server

| Field | Type | Default | Description |
|-------|------|---------|-------------|
| `port` | integer | `8080` | HTTP server port |
| `host` | string | `0.0.0.0` | Bind address |
| `base-url` | string | `http://localhost:{port}` | Public URL for this manager. Set this when running behind a reverse proxy or load balancer. |
| `releases-url` | string | `releases.alien.dev` | Base URL for binary downloads (agent, deploy CLI) |
| `deployment-interval-secs` | integer | `10` | How often the deployment loop runs (seconds) |
| `heartbeat-interval-secs` | integer | `60` | Expected heartbeat interval from agents (seconds) |

```toml
[server]
port = 8080
host = "0.0.0.0"
base-url = "https://manager.example.com"
releases-url = "https://releases.alien.dev"
deployment-interval-secs = 10
heartbeat-interval-secs = 60
```

Environment variable overrides: PORT, HOST, BASE_URL.
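For instance, when nginx terminates TLS in front of the manager, `base-url` should be set to the public `https` URL so generated links point at the proxy rather than the bind address. A minimal sketch (illustrative; certificate directives omitted):

```nginx
server {
    listen 443 ssl;
    server_name manager.example.com;

    location / {
        # Forward to the manager's local bind address (host/port above).
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```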

## Database

| Field | Type | Default | Description |
|-------|------|---------|-------------|
| `path` | string | `alien-manager.db` | SQLite database file path |
| `state-dir` | string | `.alien-manager` | Directory for state files and local artifacts |
| `encryption-key` | string | (none) | AEGIS-256 encryption key for sensitive data at rest. Generate with `openssl rand -hex 32`. |

```toml
[database]
path = "/var/lib/alien/alien-manager.db"
state-dir = "/var/lib/alien/state"
encryption-key = "your-64-char-hex-key"
```
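If OpenSSL isn't handy for generating the key, Python's `secrets` module produces an equivalent 64-character hex value (32 random bytes):

```python
import secrets

# 32 random bytes, hex-encoded: a 64-character key for the
# encryption-key field above.
key = secrets.token_hex(32)
print(f'encryption-key = "{key}"')
```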

## Artifact Registry

By default, the manager starts an embedded local OCI registry. This serves container images to pull-mode agents over HTTPS — no configuration needed.

For push-mode deployments (Lambda, Cloud Run, Container Apps), configure a cloud registry so the platform can pull images natively. You can set a default registry for all platforms, or override per-platform:

| Field | Type | Default | Description |
|-------|------|---------|-------------|
| `default` | binding | (embedded local registry) | Default artifact registry for all platforms |
| `aws` | binding | (none) | AWS-specific registry (ECR). Used for Lambda deployments. |
| `gcp` | binding | (none) | GCP-specific registry (GAR). Used for Cloud Run deployments. |
| `azure` | binding | (none) | Azure-specific registry (ACR). Used for Container Apps deployments. |
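For example, assuming the `default` binding accepts the same service types as the per-platform entries, you could route everything through ECR by default while sending Cloud Run deployments to GAR (a hypothetical combination of the fields documented below):

```toml
[artifact-registry.default]
service = "ecr"
repositoryPrefix = "alien-artifacts"

[artifact-registry.gcp]
service = "gar"
repositoryName = "projects/my-project/locations/us-central1/repositories/alien"
```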

### ECR (AWS)

```toml
[artifact-registry.aws]
service = "ecr"
repositoryPrefix = "alien-artifacts"
pullRoleArn = "arn:aws:iam::123456789:role/ecr-pull"
pushRoleArn = "arn:aws:iam::123456789:role/ecr-push"
```

| Field | Type | Description |
|-------|------|-------------|
| `repositoryPrefix` | string | Prefix for ECR repository names |
| `pullRoleArn` | string? | IAM role ARN for pull permissions |
| `pushRoleArn` | string? | IAM role ARN for push+pull permissions |

### GAR (GCP)

```toml
[artifact-registry.gcp]
service = "gar"
repositoryName = "projects/my-project/locations/us-central1/repositories/alien"
pullServiceAccountEmail = "pull@project.iam.gserviceaccount.com"
pushServiceAccountEmail = "push@project.iam.gserviceaccount.com"
```

| Field | Type | Description |
|-------|------|-------------|
| `repositoryName` | string | Full Artifact Registry repository name |
| `pullServiceAccountEmail` | string? | Service account email for pull permissions |
| `pushServiceAccountEmail` | string? | Service account email for push+pull permissions |

### ACR (Azure)

```toml
[artifact-registry.azure]
service = "acr"
registryName = "myregistry"
resourceGroupName = "rg-alien"
```

| Field | Type | Description |
|-------|------|-------------|
| `registryName` | string | Azure Container Registry name (e.g., `myregistry`) |
| `resourceGroupName` | string | Resource group where the registry is located |

### Local (explicit)

Usually you don't need to set this — the embedded registry starts automatically. But if you want to control the URL or data directory:

```toml
[artifact-registry.default]
service = "local"
registryUrl = "localhost:5000"
dataDir = "/var/lib/alien/registry"
```

## Commands

Backend storage for the commands protocol. The KV store holds command state; the storage backend holds large request/response payloads.

Default: local filesystem in {state-dir}/commands_kv and {state-dir}/commands_storage.

For push-mode deployments (Lambda, Cloud Run), use cloud-backed storage so runtimes can access presigned URLs.

| Field | Type | Default | Description |
|-------|------|---------|-------------|
| `kv` | binding | (local filesystem) | KV store for command state |
| `storage` | binding | (local filesystem) | Blob storage for large command payloads |
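To make the split between the two backends concrete, here is an illustrative sketch using in-memory stand-ins (hypothetical code, including the threshold; not the manager's actual protocol):

```python
PAYLOAD_INLINE_LIMIT = 1024  # bytes; illustrative threshold only

class CommandStore:
    """Toy model of the commands protocol's two backends:
    a KV store for command state, blob storage for large payloads."""

    def __init__(self):
        self.kv = {}        # stands in for DynamoDB/Firestore/Table Storage
        self.storage = {}   # stands in for S3/GCS/Blob

    def put_command(self, command_id, state, payload: bytes):
        if len(payload) <= PAYLOAD_INLINE_LIMIT:
            # Small payloads can live alongside the command state.
            self.kv[command_id] = {"state": state, "payload": payload}
        else:
            # Large payloads go to blob storage; the KV entry keeps a
            # pointer (in production, a presigned URL the runtime can fetch).
            self.storage[command_id] = payload
            self.kv[command_id] = {"state": state, "payload_ref": command_id}

store = CommandStore()
store.put_command("cmd-1", "pending", b"small request")
store.put_command("cmd-2", "pending", b"x" * 10_000)
print("payload_ref" in store.kv["cmd-2"])  # True: large payload offloaded
```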

### DynamoDB + S3 (AWS)

```toml
[commands]
kv = { service = "dynamodb", tableName = "alien-commands", region = "us-east-1" }
storage = { service = "s3", bucketName = "alien-command-storage" }
```

### Firestore + GCS (GCP)

```toml
[commands]
kv = { service = "firestore", projectId = "my-project", databaseId = "(default)", collectionName = "alien-commands" }
storage = { service = "gcs", bucketName = "alien-command-storage" }
```

### Table Storage + Blob (Azure)

```toml
[commands]
kv = { service = "tablestorage", resourceGroupName = "rg-alien", accountName = "alienstate", tableName = "aliencommands" }
storage = { service = "blob", accountName = "alienstate", containerName = "alien-commands" }
```

## Impersonation

Cross-account credential impersonation for push-mode deployments. Each platform entry provides a service account identity that the manager assumes when deploying to remote environments.

| Field | Type | Default | Description |
|-------|------|---------|-------------|
| `aws` | binding | (none) | AWS impersonation identity (IAM role for STS AssumeRole) |
| `gcp` | binding | (none) | GCP impersonation identity (service account for token exchange) |
| `azure` | binding | (none) | Azure impersonation identity (managed identity) |

### AWS

```toml
[impersonation.aws]
service = "awsiam"
roleName = "alien-management"
roleArn = "arn:aws:iam::123456789:role/alien-management"
```

### GCP

```toml
[impersonation.gcp]
service = "gcpserviceaccount"
email = "alien-management@project.iam.gserviceaccount.com"
uniqueId = "123456789012345678"
```

### Azure

```toml
[impersonation.azure]
service = "azuremanagedidentity"
clientId = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
resourceId = "/subscriptions/.../providers/Microsoft.ManagedIdentity/..."
principalId = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
```

## Telemetry

| Field | Type | Default | Description |
|-------|------|---------|-------------|
| `otlp-endpoint` | string | (disabled) | OTLP HTTP endpoint for forwarding logs, traces, and metrics |
| `headers` | map | (empty) | Custom HTTP headers sent with every OTLP request (for authentication) |

```toml
[telemetry]
otlp-endpoint = "https://otel-collector.example.com:4318"

[telemetry.headers]
DD-API-KEY = "your-datadog-key"
Authorization = "Basic base64encoded"
```

Environment variable override: OTLP_ENDPOINT.

The manager collects OpenTelemetry data from deployed functions and forwards it to the configured endpoint. Use any OTLP-compatible backend — Grafana, Datadog, Honeycomb, Jaeger, etc.
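On the receiving side, a minimal OpenTelemetry Collector configuration (illustrative; check the Collector docs for your version) that accepts OTLP over HTTP on port 4318 and echoes what it receives:

```yaml
receivers:
  otlp:
    protocols:
      http:
        endpoint: 0.0.0.0:4318

exporters:
  debug: {}

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [debug]
    metrics:
      receivers: [otlp]
      exporters: [debug]
    logs:
      receivers: [otlp]
      exporters: [debug]
```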

## Example: Minimal Production Config

```toml
[server]
port = 8080
base-url = "https://manager.example.com"

[database]
path = "/var/lib/alien/alien-manager.db"
state-dir = "/var/lib/alien/state"
encryption-key = "your-64-char-hex-key"

[telemetry]
otlp-endpoint = "https://otel-collector.example.com:4318"
```

## Example: AWS Push-Mode

```toml
[server]
base-url = "https://manager.example.com"

[database]
path = "/var/lib/alien/alien-manager.db"
state-dir = "/var/lib/alien/state"
encryption-key = "your-64-char-hex-key"

[artifact-registry.aws]
service = "ecr"
repositoryPrefix = "alien-artifacts"
pushRoleArn = "arn:aws:iam::123456789:role/ecr-push"

[commands]
kv = { service = "dynamodb", tableName = "alien-commands", region = "us-east-1" }
storage = { service = "s3", bucketName = "alien-command-storage" }

[impersonation.aws]
service = "awsiam"
roleName = "alien-management"
roleArn = "arn:aws:iam::123456789:role/alien-management"

[telemetry]
otlp-endpoint = "https://otel-collector.example.com:4318"
```

## Example: Local Development

For local testing, the defaults are usually sufficient. Just run:

```sh
alien serve
```

This starts the manager on port 8080 with an embedded registry and a local SQLite database.
