Deploying stateless Spring Boot microservices can feel like a chore when you have to juggle containers, orchestrators, and cloud APIs. This guide shows how to streamline the entire process with a single click, leveraging AWS Fargate for serverless container execution and Google Kubernetes Engine (GKE) for managed Kubernetes clusters. By the end, you’ll be able to push a Spring Boot jar to a Docker registry, have it automatically build an image, and launch it on both cloud platforms without writing any deployment scripts.
1. Architecture Overview
- Source Control – Git repository hosting the Spring Boot project.
- CI/CD Pipeline – GitHub Actions (or GitLab CI) that builds, tests, and pushes Docker images.
- Container Registry – Amazon ECR for Fargate and Google Container Registry (GCR) for GKE.
- Orchestration Layer – AWS Fargate for serverless container runs, GKE for Kubernetes deployments.
- Service Mesh / Ingress – AWS ALB Ingress Controller or GKE Ingress with Istio for traffic routing.
- Observability – CloudWatch on AWS, Cloud Monitoring on GCP, and Prometheus/Grafana across both.
With this high‑level map, each platform can be configured to accept a Docker image reference from the registry, spin it up, and expose it via a load balancer—all triggered by a single webhook or manual trigger.
2. Prerequisites
- Spring Boot Project – A stateless microservice with application.yml configured for externalized properties.
- Cloud Accounts – AWS (IAM user with ECR and ECS permissions) and GCP (service account with GKE, GCR, and IAM roles).
- Docker CLI – Installed locally for manual builds, but not required if CI handles it.
- Terraform or Cloud‑SDK – For infrastructure provisioning, though this guide focuses on one‑click runtime.
- GitHub Actions Runner – For automated pipeline; alternatively, use GitLab CI or Azure Pipelines.
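As a reference point for the "externalized properties" prerequisite, a stateless service's application.yml can read its settings from environment variables with sensible defaults (the property names below are illustrative, not prescriptive):

```yaml
server:
  port: ${SERVER_PORT:8080}      # overridable per environment

spring:
  application:
    name: ${APP_NAME:springboot-app}

logging:
  level:
    root: ${LOG_LEVEL:INFO}      # e.g. DEBUG in staging, INFO in prod
```

Because every value falls back to a default, the same image runs unchanged on Fargate and GKE; only the injected environment differs.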
3. Building the Docker Image
The first step is packaging the Spring Boot application into a container. The following Dockerfile uses the lightweight eclipse-temurin:17-jre-alpine base image, which keeps the final image considerably smaller than a full JDK image:
FROM eclipse-temurin:17-jre-alpine
ARG JAR_FILE=target/*.jar
COPY ${JAR_FILE} /app.jar
ENTRYPOINT ["java","-jar","/app.jar"]
In the CI pipeline, a build job runs ./mvnw package -DskipTests followed by docker build -t myorg/springboot-app:${{ github.sha }} .. After building, the image is pushed to both registries:
- Amazon ECR: aws ecr get-login-password --region <region> | docker login --username AWS --password-stdin <account-id>.dkr.ecr.<region>.amazonaws.com, then docker push the ECR-tagged image.
- Google Container Registry: gcloud auth configure-docker, then docker push gcr.io/<project-id>/springboot-app.
Tag the image consistently (e.g., myorg/springboot-app:latest) so both platforms can reference it.
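A minimal GitHub Actions job covering the build steps above might look like the following sketch (branch names and the registry login steps are assumptions; the push steps are elided because they depend on your account-specific registry URIs):

```yaml
name: build-and-push
on:
  push:
    branches: [main]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-java@v4
        with:
          distribution: temurin
          java-version: '17'
      - name: Build jar
        run: ./mvnw package -DskipTests
      - name: Build image
        run: docker build -t springboot-app:${{ github.sha }} .
      # Registry logins and pushes would follow here, e.g. via the
      # aws-actions/amazon-ecr-login and google-github-actions/auth
      # actions; omitted because they require account-specific URIs.
```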
4. Configuring AWS Fargate
4.1 Create an ECS Cluster
Using the AWS Console or Terraform, create an ECS cluster with Fargate as its capacity provider. Note that awsvpc is a task-level network mode rather than a cluster setting; Fargate tasks always use it, so each task receives its own elastic network interface.
4.2 Define a Task Definition
In the task definition, set networkMode to awsvpc and requiresCompatibilities to FARGATE. Include the image URI from ECR and expose the port (typically 8080). Use the cpu and memory values appropriate for a lightweight service.
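A trimmed-down task definition following those settings might look like this (the account ID, region, and role ARN are placeholders):

```json
{
  "family": "springboot-app",
  "networkMode": "awsvpc",
  "requiresCompatibilities": ["FARGATE"],
  "cpu": "256",
  "memory": "512",
  "executionRoleArn": "arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
  "containerDefinitions": [
    {
      "name": "springboot-app",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/myorg/springboot-app:latest",
      "portMappings": [{ "containerPort": 8080, "protocol": "tcp" }]
    }
  ]
}
```

The 256 CPU units / 512 MiB pairing is the smallest valid Fargate combination and is usually enough for a lightweight Spring Boot service.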
4.3 Create a Service with One-Click
Set the desired count to 1 and attach an Application Load Balancer (ALB). Configure a target group with targetType=ip and health check path /actuator/health. Finally, add an EventBridge rule that triggers a Lambda function whenever a new image is pushed to ECR. The Lambda simply updates the ECS service to use the new image, achieving a one‑click deployment.
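The Lambda in that EventBridge rule can be very small: filter the ECR image-action event, then force a new deployment so ECS pulls the fresh image. A minimal sketch, assuming the cluster, service, and repository names below (all placeholders):

```python
# Hypothetical Lambda handler: on a successful ECR image push,
# force a new deployment of the ECS service so Fargate re-pulls
# the :latest image. Names below are illustrative placeholders.

CLUSTER = "springboot-cluster"
SERVICE = "springboot-app"
WATCHED_REPO = "myorg/springboot-app"


def image_pushed(detail: dict) -> bool:
    """True when the EventBridge 'ECR Image Action' detail describes
    a successful push to the repository this service runs from."""
    return (
        detail.get("action-type") == "PUSH"
        and detail.get("result") == "SUCCESS"
        and detail.get("repository-name") == WATCHED_REPO
    )


def handler(event, context):
    if not image_pushed(event.get("detail", {})):
        return {"updated": False}
    import boto3  # imported lazily so the module loads without the SDK
    ecs = boto3.client("ecs")
    # forceNewDeployment restarts tasks on the same task definition,
    # which makes them pull the newly pushed image tag.
    ecs.update_service(cluster=CLUSTER, service=SERVICE,
                       forceNewDeployment=True)
    return {"updated": True}
```

Because the service references a stable tag (e.g. :latest), forceNewDeployment is enough; pinning immutable tags would instead require registering a new task definition revision.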
5. Configuring Google GKE
5.1 Set Up a GKE Cluster
Provision a managed cluster with autoscaling enabled. Ensure the node pool has sufficient CPU/memory for your microservice. Enable the Cloud Run for Anthos feature if you prefer a fully serverless experience.
5.2 Deploy with Helm
Create a simple Helm chart that references the GCR image. The values.yaml file contains:
image:
  repository: gcr.io/my-project/springboot-app
  tag: latest
service:
  type: LoadBalancer
  port: 8080
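The chart's templates/deployment.yaml then consumes these values; a minimal sketch (labels and replica count are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}
spec:
  replicas: 1
  selector:
    matchLabels:
      app: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}
    spec:
      containers:
        - name: app
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          ports:
            - containerPort: {{ .Values.service.port }}
```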
Run helm install springboot-app . and the chart will deploy a Deployment, Service, and Ingress. The Google Cloud Build trigger can call gcloud builds submit and then helm upgrade --install with the new image tag, completing the one‑click flow.
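That Cloud Build trigger could be driven by a cloudbuild.yaml along these lines (the zone, cluster name, and chart path are placeholders, and the helm step uses the community builder image, which must first be built into your own project):

```yaml
steps:
  - name: gcr.io/cloud-builders/docker
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/springboot-app:$SHORT_SHA', '.']
  - name: gcr.io/cloud-builders/docker
    args: ['push', 'gcr.io/$PROJECT_ID/springboot-app:$SHORT_SHA']
  # Community helm builder from the cloud-builders-community repo.
  - name: gcr.io/$PROJECT_ID/helm
    args:
      - upgrade
      - --install
      - springboot-app
      - ./chart
      - --set
      - image.tag=$SHORT_SHA
    env:
      - CLOUDSDK_COMPUTE_ZONE=us-central1-a
      - CLOUDSDK_CONTAINER_CLUSTER=my-cluster
```

Deploying the commit SHA rather than :latest also gives you an unambiguous rollback target for each release.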
5.3 Cloud Run for Anthos Integration
For an even lighter footprint, convert the container to a Cloud Run for Anthos service by adding the cloud-run annotations in the Helm chart. This removes the need for a LoadBalancer and leverages Cloud Run’s autoscaling.
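Under the hood, Cloud Run for Anthos runs on Knative, so the same container can be described as a Knative Service instead of a Deployment plus LoadBalancer. A minimal sketch (names are illustrative):

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: springboot-app
spec:
  template:
    spec:
      containers:
        - image: gcr.io/my-project/springboot-app:latest
          ports:
            - containerPort: 8080
```

Knative then handles routing and scale-to-zero, which is why the explicit LoadBalancer Service is no longer needed.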
6. One-Click Deployment Workflow
With the pipeline and infrastructure set, a single button or webhook can trigger the entire flow:
- Commit and push code to GitHub.
- GitHub Actions runs the build job, pushes the Docker image to ECR and GCR.
- EventBridge (AWS) or Cloud Build trigger (GCP) notices the new image.
- A Lambda or Cloud Run job updates the ECS service or Helm release.
- Both services are restarted with the new container image.
- Health checks pass, and traffic is routed to the updated instances.
This automation eliminates manual docker commands, cluster updates, and load balancer reconfigurations.
7. Monitoring and Scaling
Observability is critical for stateless services. For AWS, attach CloudWatch Logs to the Fargate task and enable Application Load Balancer Access Logs. Create CloudWatch Alarms for CPU/Memory thresholds and set auto scaling policies.
On GKE, use Prometheus and Grafana to scrape kubelet and cAdvisor metrics. Istio can provide distributed tracing with Jaeger or OpenTelemetry. Configure a Horizontal Pod Autoscaler with CPU utilization targets.
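A corresponding autoscaling/v2 HPA targeting average CPU utilization might look like this (the Deployment name and the 70% / 1–5 replica bounds are example values):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: springboot-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: springboot-app
  minReplicas: 1
  maxReplicas: 5
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # % of the pod's CPU request
```

Note that utilization is measured against the container's CPU request, so the HPA only behaves predictably when requests are set.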
8. Troubleshooting Common Pitfalls
- Image Not Found – Verify that the registry URI matches the task/service definition and that IAM permissions allow pulling.
- Health Check Failures – Ensure the /actuator/health endpoint is enabled and that the container is listening on the correct port.
- Scaling Too Aggressive – Fine‑tune CPU/memory reservations to avoid unnecessary pod restarts.
- Cost Overruns – Use the AWS Cost Explorer and GCP Billing reports to monitor the impact of scaling events.
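For the health-check bullet above, exposing the Actuator health endpoint is a small piece of application.yml (probe support shown here requires Spring Boot 2.3+; keeping actuator on the main server port is a choice, not a requirement):

```yaml
management:
  endpoints:
    web:
      exposure:
        include: health       # expose only what the LB needs
  endpoint:
    health:
      probes:
        enabled: true          # adds /actuator/health/liveness
                               # and /actuator/health/readiness
```

If you instead set management.server.port to a separate port, remember to update the ALB target group and Kubernetes probe definitions to match.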
Conclusion
By combining a declarative pipeline, cloud-native registries, and serverless or managed orchestrators, stateless Spring Boot microservices can now be deployed with a single click. This approach not only reduces operational overhead but also guarantees consistent, repeatable releases across AWS Fargate and Google GKE, giving teams the agility to iterate quickly while maintaining high availability.
