# Configuration
The manager is configured via `alien-manager.toml`. Generate a template:

```bash
alien serve --init
```

Place the file in the working directory, or specify a path:

```bash
alien serve --config /etc/alien/manager.toml
```

Configuration priority (lowest to highest): TOML defaults → TOML file values → environment variables → CLI flags.
## Server
| Field | Type | Default | Description |
|---|---|---|---|
| port | integer | 8080 | HTTP server port |
| host | string | 0.0.0.0 | Bind address |
| base-url | string | http://localhost:{port} | Public URL for this manager. Set this when running behind a reverse proxy or load balancer. |
| releases-url | string | releases.alien.dev | Base URL for binary downloads (agent, deploy CLI) |
| deployment-interval-secs | integer | 10 | How often the deployment loop runs (seconds) |
| heartbeat-interval-secs | integer | 60 | Expected heartbeat interval from agents (seconds) |
```toml
[server]
port = 8080
host = "0.0.0.0"
base-url = "https://manager.example.com"
releases-url = "https://releases.alien.dev"
deployment-interval-secs = 10
heartbeat-interval-secs = 60
```

Environment variable overrides: PORT, HOST, BASE_URL.
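The override rule can be sketched in plain shell using the documented PORT variable. This only illustrates the precedence ordering; it is not alien's implementation:

```shell
# Illustration of config precedence: an environment variable beats the file value.
file_port=8080                        # value read from alien-manager.toml
effective_port="${PORT:-$file_port}"  # PORT, if set, overrides the file
echo "$effective_port"
```

With PORT unset this prints 8080; with PORT=9090 exported it prints 9090. A CLI flag, where one exists, would outrank both.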
## Database
| Field | Type | Default | Description |
|---|---|---|---|
| path | string | alien-manager.db | SQLite database file path |
| state-dir | string | .alien-manager | Directory for state files and local artifacts |
| encryption-key | string | (none) | AEGIS-256 encryption key for sensitive data at rest. Generate with openssl rand -hex 32. |
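The openssl command from the table produces a key of the expected shape, 32 random bytes printed as 64 hex characters:

```shell
# Generate 32 random bytes, printed as 64 hex characters, for encryption-key
openssl rand -hex 32
```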
```toml
[database]
path = "/var/lib/alien/alien-manager.db"
state-dir = "/var/lib/alien/state"
encryption-key = "your-64-char-hex-key"
```

## Artifact Registry
By default, the manager starts an embedded local OCI registry. This serves container images to pull-mode agents over HTTPS — no configuration needed.
For push-mode deployments (Lambda, Cloud Run, Container Apps), configure a cloud registry so the platform can pull images natively. You can set a default registry for all platforms, or override per-platform:
| Field | Type | Default | Description |
|---|---|---|---|
| default | binding | (embedded local registry) | Default artifact registry for all platforms |
| aws | binding | (none) | AWS-specific registry (ECR). Used for Lambda deployments. |
| gcp | binding | (none) | GCP-specific registry (GAR). Used for Cloud Run deployments. |
| azure | binding | (none) | Azure-specific registry (ACR). Used for Container Apps deployments. |
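As a sketch of the default-plus-override behavior, a manager could keep the embedded registry as the default while routing Lambda deployments through ECR. The prefix and role ARN below are placeholders:

```toml
# Pull-mode agents keep using the embedded local registry
[artifact-registry.default]
service = "local"

# Lambda (push-mode) deployments use ECR instead
[artifact-registry.aws]
service = "ecr"
repositoryPrefix = "alien-artifacts"
pushRoleArn = "arn:aws:iam::123456789:role/ecr-push"
```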
### ECR (AWS)
```toml
[artifact-registry.aws]
service = "ecr"
repositoryPrefix = "alien-artifacts"
pullRoleArn = "arn:aws:iam::123456789:role/ecr-pull"
pushRoleArn = "arn:aws:iam::123456789:role/ecr-push"
```

| Field | Type | Description |
|---|---|---|
| repositoryPrefix | string | Prefix for ECR repository names |
| pullRoleArn | string? | IAM role ARN for pull permissions |
| pushRoleArn | string? | IAM role ARN for push+pull permissions |
### GAR (GCP)
```toml
[artifact-registry.gcp]
service = "gar"
repositoryName = "projects/my-project/locations/us-central1/repositories/alien"
pullServiceAccountEmail = "pull@project.iam.gserviceaccount.com"
pushServiceAccountEmail = "push@project.iam.gserviceaccount.com"
```

| Field | Type | Description |
|---|---|---|
| repositoryName | string | Full Artifact Registry repository name |
| pullServiceAccountEmail | string? | Service account email for pull permissions |
| pushServiceAccountEmail | string? | Service account email for push+pull permissions |
### ACR (Azure)
```toml
[artifact-registry.azure]
service = "acr"
registryName = "myregistry"
resourceGroupName = "rg-alien"
```

| Field | Type | Description |
|---|---|---|
| registryName | string | Azure Container Registry name (e.g., myregistry) |
| resourceGroupName | string | Resource group where the registry is located |
### Local (explicit)
Usually you don't need to set this — the embedded registry starts automatically. But if you want to control the URL or data directory:
```toml
[artifact-registry.default]
service = "local"
registryUrl = "localhost:5000"
dataDir = "/var/lib/alien/registry"
```

## Commands
Backend storage for the commands protocol. The KV store holds command state; the storage backend holds large request/response payloads.
Default: local filesystem in {state-dir}/commands_kv and {state-dir}/commands_storage.
For push-mode deployments (Lambda, Cloud Run), use cloud-backed storage so runtimes can access presigned URLs.
| Field | Type | Default | Description |
|---|---|---|---|
| kv | binding | (local filesystem) | KV store for command state |
| storage | binding | (local filesystem) | Blob storage for large command payloads |
### DynamoDB + S3 (AWS)
```toml
[commands]
kv = { service = "dynamodb", tableName = "alien-commands", region = "us-east-1" }
storage = { service = "s3", bucketName = "alien-command-storage" }
```

### Firestore + GCS (GCP)
```toml
[commands]
kv = { service = "firestore", projectId = "my-project", databaseId = "(default)", collectionName = "alien-commands" }
storage = { service = "gcs", bucketName = "alien-command-storage" }
```

### Table Storage + Blob (Azure)
```toml
[commands]
kv = { service = "tablestorage", resourceGroupName = "rg-alien", accountName = "alienstate", tableName = "aliencommands" }
storage = { service = "blob", accountName = "alienstate", containerName = "alien-commands" }
```

## Impersonation
Cross-account credential impersonation for push-mode deployments. Each platform entry provides a service account identity that the manager assumes when deploying to remote environments.
| Field | Type | Default | Description |
|---|---|---|---|
| aws | binding | (none) | AWS impersonation identity (IAM role for STS AssumeRole) |
| gcp | binding | (none) | GCP impersonation identity (service account for token exchange) |
| azure | binding | (none) | Azure impersonation identity (managed identity) |
### AWS
```toml
[impersonation.aws]
service = "awsiam"
roleName = "alien-management"
roleArn = "arn:aws:iam::123456789:role/alien-management"
```

### GCP
```toml
[impersonation.gcp]
service = "gcpserviceaccount"
email = "alien-management@project.iam.gserviceaccount.com"
uniqueId = "123456789012345678"
```

### Azure
```toml
[impersonation.azure]
service = "azuremanagedidentity"
clientId = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
resourceId = "/subscriptions/.../providers/Microsoft.ManagedIdentity/..."
principalId = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
```

## Telemetry
| Field | Type | Default | Description |
|---|---|---|---|
| otlp-endpoint | string | (disabled) | OTLP HTTP endpoint for forwarding logs, traces, and metrics |
| headers | map | (empty) | Custom HTTP headers sent with every OTLP request (for authentication) |
```toml
[telemetry]
otlp-endpoint = "https://otel-collector.example.com:4318"

[telemetry.headers]
DD-API-KEY = "your-datadog-key"
Authorization = "Basic base64encoded"
```

Environment variable override: OTLP_ENDPOINT.
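The Authorization value shown above is a placeholder. For a backend that uses HTTP Basic auth, the credential is the base64 encoding of user:password; for example (credentials illustrative):

```shell
# Encode user:password for a Basic auth header (note: no trailing newline in the input)
printf 'user:password' | base64
# → dXNlcjpwYXNzd29yZA==
```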
The manager collects OpenTelemetry data from deployed functions and forwards it to the configured endpoint. Use any OTLP-compatible backend — Grafana, Datadog, Honeycomb, Jaeger, etc.
## Example: Minimal Production Config
```toml
[server]
port = 8080
base-url = "https://manager.example.com"

[database]
path = "/var/lib/alien/alien-manager.db"
state-dir = "/var/lib/alien/state"
encryption-key = "your-64-char-hex-key"

[telemetry]
otlp-endpoint = "https://otel-collector.example.com:4318"
```

## Example: AWS Push-Mode
```toml
[server]
base-url = "https://manager.example.com"

[database]
path = "/var/lib/alien/alien-manager.db"
state-dir = "/var/lib/alien/state"
encryption-key = "your-64-char-hex-key"

[artifact-registry.aws]
service = "ecr"
repositoryPrefix = "alien-artifacts"
pushRoleArn = "arn:aws:iam::123456789:role/ecr-push"

[commands]
kv = { service = "dynamodb", tableName = "alien-commands", region = "us-east-1" }
storage = { service = "s3", bucketName = "alien-command-storage" }

[impersonation.aws]
service = "awsiam"
roleName = "alien-management"
roleArn = "arn:aws:iam::123456789:role/alien-management"

[telemetry]
otlp-endpoint = "https://otel-collector.example.com:4318"
```

## Example: Local Development
For local testing, the defaults are usually sufficient. Just run:

```bash
alien serve
```

This starts the manager on port 8080 with an embedded registry and a local SQLite database.