From Local to AWS
Deploy the quickstart worker into a real AWS account — Lambda, S3, and IAM, all from one command.
In the quickstart, you built an AI worker and tested it locally. Now let's deploy the same code into a real AWS account.
Deploying to a customer's cloud is a two-step process with two people involved:
- You (the developer) build your code and create a release on your Alien server
- The customer's admin runs a single setup command in their AWS account — this creates the infrastructure and deploys your worker
After that, the admin is done. You push updates and send commands from your server — the customer never needs to do anything again.
For this guide, you'll play both roles using your own AWS account.
Start a tunnel
The Alien server needs to be reachable over HTTPS. Install cloudflared and start a tunnel. (In production, you'd deploy the server to the cloud.)
```shell
cloudflared tunnel --url http://localhost:8080
```

Copy the tunnel URL (e.g. `https://something-random.trycloudflare.com`).
Configure and start the server
Create an ECR repository to store your worker's container images. This lives in your account (not the customer's):
```shell
aws ecr create-repository --repository-name alien-quickstart
```

Generate a config file and edit it:
```shell
alien serve --init > alien-manager.toml
```

```toml
[server]
port = 8080
base-url = "https://something-random.trycloudflare.com" # your tunnel URL

[artifact-registry.aws]
service = "ecr"
repositoryPrefix = "alien-quickstart" # the ECR repo you just created
```

Start the server:

```shell
alien serve
```

Save the admin API key it prints — you'll need it next.
Build and release
Open a new terminal tab and point the CLI at your server:
```shell
export ALIEN_MANAGER_URL=https://something-random.trycloudflare.com
export ALIEN_API_KEY=ax_admin_...
```

Create a release (Alien auto-detects platforms from your manager config and builds if needed):

```shell
alien release
```

Generate a token for the customer
You onboard each customer by generating a deployment token. In production, you'd send this to the customer's cloud admin:
```shell
alien onboard acme-corp
```

```
> Success! Ready to deploy.

  Customer  acme-corp
  Token     ax_dg_abc123...

  Share with the customer's admin:

    curl -fsSL https://something-random.trycloudflare.com/install | bash
    alien-deploy up \
      --token ax_dg_abc123... \
      --name acme-corp \
      --platform aws \
      --manager-url https://something-random.trycloudflare.com
```

The customer deploys
In production, the customer's admin runs these commands in their AWS account. They install the alien-deploy CLI, then run the setup command you gave them. This creates all the infrastructure and deploys your worker — one time, and they're done.
Since you're playing both roles, run it yourself:
```shell
# Install the customer CLI (the manager serves this)
curl -fsSL https://something-random.trycloudflare.com/install | bash

# Deploy into the AWS account.
# Since the manager is running on this machine, use localhost for faster setup.
# In production, the customer would use the public URL.
alien-deploy up \
  --token ax_dg_abc123... \
  --platform aws \
  --name acme-corp \
  --manager-url http://localhost:8080
```

This provisions Lambda (your worker), S3 (file storage), and IAM roles (least-privilege, auto-generated from your stack definition) in the customer's AWS account.
Go back to the `alien serve` terminal tab. The manager is coordinating the deployment — you should see it updating in real time:
```
╭─ acme-corp · AWS ────── ◐ provisioning ─╮
│ worker   provisioning                   │
│ files    queued                         │
╰─────────────────────────────────────────╯
```

Wait until it shows `running`:
```
╭─ acme-corp · AWS ────────── ● running ─╮
│ worker   ready                         │
│ files    ready                         │
╰────────────────────────────────────────╯
```

You can also check deployment status programmatically (useful for CI/CD or AI agents):

```shell
alien deployments ls
```

Send a command
Go back to the CLI terminal tab. Send a command to the worker running in the customer's cloud:
```shell
alien commands invoke \
  --deployment acme-corp \
  --command execute-tool \
  --params '{"tool": "write-file", "params": {"path": "hello.txt", "content": "Hello from AWS!"}}'
```

```json
{ "written": true, "path": "hello.txt" }
```

Same command you used in local dev — but this time it executed on Lambda, writing to a real S3 bucket in the customer's account.
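For context on what the worker does with that invoke, here is a minimal sketch of a write-file tool handler. It uses an in-memory Map as a stand-in for the runtime's `storage()` API — `write` and `read` are assumed method names (this guide only shows `list`), and the real implementation is provided by the Alien runtime:

```typescript
// Hypothetical sketch of the write-file tool handler, with an in-memory
// stand-in for the worker runtime's storage() API. The real API is backed
// by S3 in production; `write` and `read` are assumed method names here.
type Store = {
  write: (path: string, content: string) => Promise<void>
  read: (path: string) => Promise<string | undefined>
}

const buckets = new Map<string, Map<string, string>>()

async function storage(name: string): Promise<Store> {
  if (!buckets.has(name)) buckets.set(name, new Map())
  const bucket = buckets.get(name)!
  return {
    write: async (path, content) => { bucket.set(path, content) },
    read: async (path) => bucket.get(path),
  }
}

// Shaped like the tool definitions in src/index.ts
const writeFile = {
  description: "Write a file into the customer's workspace",
  execute: async (params: { path: string; content: string }) => {
    const store = await storage("files")
    await store.write(params.path, params.content)
    return { written: true, path: params.path }
  },
}

const result = await writeFile.execute({ path: "hello.txt", content: "Hello from AWS!" })
// result has the same shape as the CLI response: { written: true, path: "hello.txt" }
```

In production, `storage("files")` resolves to the S3 bucket that alien-deploy provisioned, which is why the same invoke works unchanged from local dev to Lambda.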
Push an update
This is where it clicks. Remember `alien dev release` from local dev? This is the production version.

Change your worker code — add a new tool to `src/index.ts`:
```typescript
"list-files": {
  description: "List all files in the customer's workspace",
  execute: async () => {
    const store = await storage("files")
    const files = []
    for await (const obj of store.list()) {
      files.push(obj.location)
    }
    return { files }
  },
},
```

Now release it:

```shell
alien release
```

Go back to the `alien serve` terminal tab — you should see it picking up the new release and updating the deployment:
```
╭─ acme-corp · AWS ────────── ◐ updating ─╮
│ worker   updating                       │
│ files    ready                          │
╰─────────────────────────────────────────╯
```

Once it's back to `running`, the customer's Lambda function is updated — no redeployment, no customer involvement, no downtime. Go back to the CLI terminal tab and try the new tool:
```shell
alien commands invoke \
  --deployment acme-corp \
  --command execute-tool \
  --params '{"tool": "list-files", "params": {}}'
```

```json
{ "files": ["hello.txt"] }
```

You changed code on your machine, ran one command, and the worker running in the customer's AWS account was updated.
Clean up
```shell
alien-deploy down --name acme-corp
```

This removes all AWS resources created for this deployment.
What's next
You deployed code into a customer's cloud, sent commands to it, and pushed a live update — all without the customer doing anything after the initial setup. The same stack deploys to GCP and Azure with no code changes.