Kubernetes
WSO2 Integrator uses the Ballerina Code to Cloud feature to generate Kubernetes manifests directly from your source code. The cloud = "k8s" build option produces a Docker image and a complete set of Kubernetes YAML files in a single bal build invocation.
macOS users on Apple Silicon must set the following environment variable before building, because the Ballerina base Docker image does not yet support ARM:
export DOCKER_DEFAULT_PLATFORM=linux/amd64
This setting applies only to the current terminal session.
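To make it persist across sessions, append the same export to your shell startup file (~/.zshrc is assumed here; adjust for your shell):
echo 'export DOCKER_DEFAULT_PLATFORM=linux/amd64' >> ~/.zshrc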
How Code to Cloud works
When you build with cloud = "k8s", the compiler generates Kubernetes manifests alongside the executable JAR:
├── Cloud.toml
├── Ballerina.toml
├── Config.toml
└── target/
    ├── bin/
    │   └── <module>.jar
    ├── docker/
    │   └── Dockerfile
    └── kubernetes/
        └── <module>-0.0.1.yaml
Cloud.toml overrides defaults that the compiler infers from your code. Every field is optional. The compiler provides sensible defaults when the file is absent or when a field is omitted. See the Cloud.toml reference for the full field list.
Config.toml is intentionally excluded from the container image because it can contain sensitive values. Use the [[cloud.config.files]] entry in Cloud.toml to mount it as a Kubernetes ConfigMap at runtime.
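For illustration, a Config.toml can be as small as a single key. The greetingPrefix key below is hypothetical; it pairs with the configurable-variable sketch later in this guide:
# Values for configurable variables in the package's root module.
# greetingPrefix is a hypothetical key used for illustration.
greetingPrefix = "Hello"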
Step 1: Set the cloud target
Open Ballerina.toml and add the cloud build option:
[build-options]
cloud = "k8s"
Alternatively, pass the flag inline at build time without modifying Ballerina.toml:
bal build --cloud=k8s
Step 2: Configure the deployment
Create a Cloud.toml with container, resource, autoscaling, ConfigMap, and probe settings:
[container.image]
repository = "myorg"
name = "my-integration"
tag = "v1.0.0"

[cloud.deployment]
min_memory = "100Mi"
max_memory = "256Mi"
min_cpu = "500m"
max_cpu = "500m"

[cloud.deployment.autoscaling]
min_replicas = 2
max_replicas = 5
cpu = 60

[[cloud.config.files]]
file = "./Config.toml"

[cloud.deployment.probes.liveness]
port = 9091
path = "/probes/healthz"

[cloud.deployment.probes.readiness]
port = 9091
path = "/probes/readyz"
The [[cloud.config.files]] entry mounts your Config.toml as a Kubernetes ConfigMap, which is the recommended way to supply configuration to Kubernetes workloads.
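On the code side, values from the mounted Config.toml are read through configurable variables. A minimal sketch, assuming the hypothetical greetingPrefix key shown earlier:
import ballerina/io;

// Populated from Config.toml, which the ConfigMap mounts at runtime.
// `?` makes the value mandatory, so the program fails fast if it is missing.
configurable string greetingPrefix = ?;

public function main() {
    io:println(greetingPrefix + ", world!");
}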
The [cloud.deployment.probes.liveness] and [cloud.deployment.probes.readiness] sections only apply to long-running service workloads. Omit them if your integration is an Automation. For services, add a dedicated probe listener in your Ballerina code to back these endpoints:
import ballerina/http;

// Dedicated listener for Kubernetes probe traffic, kept separate
// from the integration's main service listener.
listener http:Listener probeEndpoint = new (9091);

service /probes on probeEndpoint {
    // Liveness: the process is up and able to serve requests.
    resource function get healthz() returns boolean {
        return true;
    }

    // Readiness: the workload is ready to receive traffic.
    resource function get readyz() returns boolean {
        return true;
    }
}
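You can smoke-test these endpoints locally with bal run before deploying; the paths match the probe settings in Cloud.toml above:
curl http://localhost:9091/probes/healthz
curl http://localhost:9091/probes/readyz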
Step 3: Build
bal build
The compiler generates all Kubernetes manifests and prints the kubectl apply command:
Generating artifacts...
@kubernetes:Service - complete 1/2
@kubernetes:Service - complete 2/2
@kubernetes:ConfigMap - complete 1/1
@kubernetes:Deployment - complete 1/1
@kubernetes:HPA - complete 1/1
@kubernetes:Docker - complete 2/2
Execute the below command to deploy the Kubernetes artifacts:
kubectl apply -f /path/to/project/target/kubernetes/my-integration
For a service-type workload, the generated manifest includes a Deployment, Service, ConfigMap, and HorizontalPodAutoscaler. The HorizontalPodAutoscaler is only generated when [cloud.deployment.autoscaling] is configured.
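For orientation, the Deployment section of the generated YAML looks roughly like the trimmed excerpt below. The resource values come from the Cloud.toml above (min_* maps to requests, max_* to limits); the container name and exact labels vary by Ballerina version:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-integration-deployment
spec:
  template:
    spec:
      containers:
        - name: my-integration
          image: myorg/my-integration:v1.0.0
          resources:
            requests:
              memory: "100Mi"
              cpu: "500m"
            limits:
              memory: "256Mi"
              cpu: "500m"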
For Automations, the compiler generates a Job or CronJob resource instead of a Deployment, with no Service or HorizontalPodAutoscaler.
Step 4: Push the image
Push the built image to your container registry before applying the manifests:
docker push myorg/my-integration:v1.0.0
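If the registry requires authentication, log in first. For a private registry, include its host in both the login and the image reference (registry.example.com below is a placeholder):
docker login registry.example.com
docker tag myorg/my-integration:v1.0.0 registry.example.com/myorg/my-integration:v1.0.0
docker push registry.example.com/myorg/my-integration:v1.0.0
Rather than retagging after the fact, you can set repository in Cloud.toml to the full prefix (for example, repository = "registry.example.com/myorg") before building, so the generated Deployment references the pushed image directly.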
If you are using Minikube, run eval $(minikube docker-env) before bal build to build the image directly into the Minikube Docker daemon. You can then skip the push step.
Step 5: Deploy
kubectl apply -f target/kubernetes/my-integration/
Expected output for a service-type workload:
service/my-integration-svc created
configmap/config-config-map created
deployment.apps/my-integration-deployment created
horizontalpodautoscaler.autoscaling/my-integration-hpa created
For an Automation, the output lists a Job or CronJob instead, as noted in Step 3.
Step 6: Verify
kubectl get pods
kubectl get services
kubectl logs -f deployment/my-integration-deployment
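To block until the rollout completes instead of polling kubectl get pods, you can also run:
kubectl rollout status deployment/my-integration-deployment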
Step 7: Expose and test
This step applies to service-type workloads that expose an HTTP endpoint. If your integration is an Automation or Event Listener, skip this step.
Expose the deployment via NodePort to access it in a development cluster:
kubectl expose deployment my-integration-deployment \
--type=NodePort \
--name=my-integration-svc-local
Get the assigned port:
kubectl get svc my-integration-svc-local
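To capture just the assigned NodePort in a script, a jsonpath query works (this assumes the service's first port is the one you exposed):
kubectl get svc my-integration-svc-local -o jsonpath='{.spec.ports[0].nodePort}'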
If you are using Minikube, get the cluster IP:
minikube ip
Then call the service:
curl http://<cluster-ip>:<node-port>/helloWorld/sayHello
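On clusters where node IPs are not reachable, kubectl port-forward is an alternative to NodePort. The sketch below assumes the service listens on port 9090, the default in Ballerina HTTP samples:
kubectl port-forward svc/my-integration-svc 9090:9090
curl http://localhost:9090/helloWorld/sayHello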
Code to Cloud does not expose every Kubernetes configuration option. For changes beyond what Cloud.toml supports, use Kustomize to patch the generated YAML without modifying it directly. This keeps generated files untouched and makes upgrades easier when you rebuild.
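A minimal sketch of that approach: a kustomization.yaml at the project root that references the generated manifest and merges an extra environment variable into the Deployment. The manifest file name, container name, and variable are illustrative; match them to your generated YAML:
# kustomization.yaml
resources:
  - target/kubernetes/my-integration/my-integration-0.0.1.yaml
patches:
  - patch: |-
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: my-integration-deployment
      spec:
        template:
          spec:
            containers:
              - name: my-integration
                env:
                  - name: LOG_LEVEL
                    value: "DEBUG"
Apply with kubectl apply -k . instead of kubectl apply -f, and rerun bal build freely; your patches live outside the generated files.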
What's next
- Docker — Build a Docker image from your project using Code to Cloud
- Red Hat OpenShift — Generate OpenShift manifests and deploy using the oc CLI