Multi-Cloud Container Publishing with Dagger Functions
September 17, 2024
Publishing container images to different cloud registries should be easy...but in practice, it isn't. There are differences in cloud provider APIs, security policies, access controls, and infrastructure. These differences add friction to the process of building multi-cloud application delivery pipelines.
In a Dagger community call, Luke Marsden demonstrated two new Dagger modules designed to ease this friction. The aws-for-dagger and gcp-for-dagger modules provide simple APIs to push container images to AWS Elastic Container Registry (ECR) and GCP Artifact Registry (GAR). They also automate many of the steps related to authentication and permissions management.
These modules are available in the Daggerverse and you can try them for yourself. You will need:
- A Google Cloud Platform account credentials database (generated via gcloud auth login and usually at ~/.config/gcloud/credentials.db). This account should have permissions to create new service accounts, create keys for them, and assign IAM policies for the project. A simple set of roles that fulfills this is: Project IAM Admin, Service Account Admin, Service Account Key Admin.
- An Amazon Web Services account credentials file with an access key ID and secret access key (usually at ~/.aws/credentials). This account should have one or more IAM policies that allow writing to ECR repositories, such as AmazonEC2ContainerRegistryFullAccess.
- Existing repositories in Google Artifact Registry and AWS Elastic Container Registry.
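Both modules read credentials from files on the host, so it's worth confirming the files are where the examples below expect them before running anything. Here is a minimal sketch using only Node's standard library; the two default paths are the ones listed above:

```typescript
import { existsSync } from "node:fs";
import { homedir } from "node:os";
import { join } from "node:path";

// Default credential locations assumed by the examples in this post
const gcpCredentials = join(homedir(), ".config", "gcloud", "credentials.db");
const awsCredentials = join(homedir(), ".aws", "credentials");

// Report whether each credentials file is present on this machine
for (const path of [gcpCredentials, awsCredentials]) {
  console.log(`${path}: ${existsSync(path) ? "found" : "missing"}`);
}
```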
Here's an example of a Dagger Function that pushes the ubuntu:latest
container image to GAR:
dagger call -m github.com/jpadams/gcp-for-dagger@v0.2.0 gar-push-example \
--region=YOUR-GAR-REPOSITORY-REGION \
--repo=YOUR-GAR-REPOSITORY-NAME \
--project=YOUR-GCP-PROJECT-ID \
--account=YOUR-GCP-ACCOUNT-EMAIL-ADDRESS \
--gcp-credentials=YOUR-PATH-TO/.config/gcloud/credentials.db \
--image
Here's an example of another Dagger Function that does the same, this time to ECR:
dagger call -m github.com/jpadams/aws-for-dagger@v0.1.4 ecr-push-example \
--region=YOUR-ECR-REPOSITORY-REGION \
--repo=YOUR-ECR-REPOSITORY-NAME \
--aws-account-id=YOUR-AWS-ACCOUNT-ID \
--aws-credentials=YOUR-PATH-TO/.aws/credentials
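Whichever cloud you push to, the image ends up at a predictable address: GAR addresses follow the pattern REGION-docker.pkg.dev/PROJECT/REPO/IMAGE, and ECR addresses follow ACCOUNT_ID.dkr.ecr.REGION.amazonaws.com/REPO. A small TypeScript helper (illustrative only, not part of either module) that builds these addresses from the same parameters the examples above take:

```typescript
// Registry address a GAR push resolves to:
// REGION-docker.pkg.dev/PROJECT/REPO/IMAGE
function garAddress(
  region: string,
  project: string,
  repo: string,
  image: string
): string {
  return `${region}-docker.pkg.dev/${project}/${repo}/${image}`;
}

// Registry address an ECR push resolves to:
// ACCOUNT_ID.dkr.ecr.REGION.amazonaws.com/REPO
function ecrAddress(accountId: string, region: string, repo: string): string {
  return `${accountId}.dkr.ecr.${region}.amazonaws.com/${repo}`;
}
```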
The power of Dagger Functions is that you can reuse and combine them as you like. So, you could reuse these Dagger Functions and create your own multi-cloud publishing Dagger Function, to push a container image to ECR and GAR (and, just for fun, Docker Hub too) in a single operation.
Here's how:
# initialize a new module
dagger init --name=multipush --sdk=typescript
# install the aws module
dagger install github.com/jpadams/aws-for-dagger@v0.1.5
# install the gcp module
dagger install github.com/jpadams/gcp-for-dagger@v0.2.0
Edit the generated dagger/src/index.ts
file and create a Dagger Function that uses these modules:
import {
  dag,
  Secret,
  File,
  object,
  func,
} from "@dagger.io/dagger";

@object()
class Multipush {
  /**
   * Returns addresses of pushed containers
   */
  @func()
  async multipush(
    gcpAccount: string,
    gcpRegion: string,
    gcpProject: string,
    gcpRepo: string,
    gcpCredentials: File,
    awsAccount: string,
    awsRegion: string,
    awsRepo: string,
    awsCredentials: File,
    dockerAccount: string,
    dockerCredentials: Secret,
    imageName: string
  ): Promise<string[]> {
    // Build a trivial container to publish to all three registries
    const c = dag
      .container()
      .from("alpine:latest")
      .withEntrypoint(["echo", "Hello from Dagger!"]);
    const addr: string[] = [];

    // Push to Google Artifact Registry via the gcp-for-dagger module
    const gcpAddr = await dag
      .gcp()
      .garPush(
        c,
        gcpAccount,
        gcpRegion,
        gcpProject,
        gcpRepo,
        imageName,
        gcpCredentials
      );
    addr.push(gcpAddr);

    // Push to AWS ECR via the aws-for-dagger module
    const awsAddr = await dag
      .aws()
      .ecrPush(awsCredentials, awsRegion, awsAccount, awsRepo, c);
    addr.push(awsAddr);

    // Push to Docker Hub using the core Dagger API
    const dockerAddr = await c
      .withRegistryAuth("docker.io", dockerAccount, dockerCredentials)
      .publish(`docker.io/${dockerAccount}/${imageName}`);
    addr.push(dockerAddr);

    return addr;
  }
}
Call your new Dagger Function:
dagger call multipush \
--gcp-region=YOUR-GAR-REPOSITORY-REGION \
--gcp-project=YOUR-GCP-PROJECT-ID \
--gcp-credentials=YOUR-PATH-TO/.config/gcloud/credentials.db \
--gcp-account=YOUR-GCP-ACCOUNT-EMAIL-ADDRESS \
--gcp-repo=YOUR-GAR-REPOSITORY-NAME \
--aws-region=YOUR-ECR-REPOSITORY-REGION \
--aws-credentials=YOUR-PATH-TO/.aws/credentials \
--aws-account=YOUR-AWS-ACCOUNT-ID \
--aws-repo=YOUR-ECR-REPOSITORY-NAME \
--docker-account=YOUR-DOCKER-HUB-USERNAME \
--docker-credentials=env:PASSWORD \
--image-name=test
As you can see, this Dagger Function does quite a lot of work in very few lines of code. By reusing existing modules in combination with the Dagger API, it enables you to publish a container image simultaneously to three different cloud registries in a single call.
This example also demonstrates how Dagger Functions can be used from any language. Even though the AWS and GCP Dagger Functions are written in Go, the higher-level Dagger Function above is able to call them natively from TypeScript. This is because each Dagger SDK generates native code-bindings for all dependencies, which abstract away the underlying GraphQL queries. This gives you all the benefits of type-checking, code completion and other IDE features when developing Dagger Functions.
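To make that concrete: a chained binding call is serialized by the SDK into a GraphQL query against the Dagger engine. The sketch below shows the rough shape of the query behind a simple chain; the field names follow the public Dagger API, but the exact wire format is an internal detail of the SDK:

```typescript
// The TypeScript binding chain...
//   dag.container().from("alpine:latest").publish("docker.io/user/test")
// ...is roughly equivalent to the SDK sending this GraphQL query:
const query = `
  query {
    container {
      from(address: "alpine:latest") {
        publish(address: "docker.io/user/test")
      }
    }
  }
`;
```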
Catch up with Luke's demonstration in the video below, and then try it out for yourself!