Deploying Microservices on Kubernetes with Spring Cloud Gateway and Google OAuth2.0 Security
In this post we are going to leverage Spring Cloud (Spring Cloud Gateway + Spring Security + others) and Kubernetes to build a robust, fault-tolerant system that can be scaled horizontally up and down easily and with almost zero downtime.
First, let us get familiar with the basics of Kubernetes (and why we are using it here), and then we will see how it is used in the demonstration that follows:
Kubernetes:
The official documentation of Kubernetes says:
“ Kubernetes is an open source container orchestration platform that automates many of the manual processes involved in deploying, managing, and scaling containerized applications.”
Why Kubernetes?
In the monolith era everything used to be packaged as a single deployable, and it could be scaled in or out manually with little effort. With the rise of microservices, organisations now run hundreds to thousands of services, each scaled differently, and hence the need for a container orchestration platform like Kubernetes.
Advantages:
- High Availability
- Scalability
- Disaster Recovery
Kubernetes Architecture
A working Kubernetes deployment is called a cluster, which is a group of hosts (nodes) running containers.
Generally, a Kubernetes cluster must contain at least one master node (or control plane) and multiple worker nodes where your actual application is deployed.
Something like this:
In the Kubernetes architecture above we already saw that both the master node and the worker nodes are made up of several components. Let's briefly look at those components and then move forward with our project.
Master Node
- API server: The API server is basically the entry point to the K8s cluster. It can be reached through a UI, APIs, or a command-line tool.
- Controller Manager: Keeps track of what's happening in the cluster.
- Scheduler: It intelligently schedules containers onto different worker nodes based on workload and resource availability.
- etcd: It's a key-value store which holds the current state of the Kubernetes cluster. It effectively acts as a backup store keeping a snapshot of the cluster's nodes, containers, etc., which could be helpful in recovery.
Virtual Network
Ensures communication between the master node and the worker nodes.
Worker Node:
- Kubelet: The kubelet is the primary "node agent" that runs on each node. It can register the node with the apiserver using one of: the hostname; a flag to override the hostname; or specific logic for a cloud provider. The kubelet works in terms of a PodSpec. A PodSpec is a YAML or JSON object that describes a pod.
- Kube Proxy: Kube-Proxy is a Kubernetes agent installed on every node in the cluster. It monitors the changes that happen to Service objects and their endpoints. If changes occur, it translates them into actual network rules inside the node.
Kube-Proxy usually runs in your cluster in the form of a DaemonSet, but it can also be installed directly as a Linux process on the node.
- Pod (Container): A Pod is an abstraction over containers which lets you interact only with the Kubernetes layer rather than the underlying container technology. In short, you can use whichever container technology you prefer, like Docker or Podman. It is the smallest deployable unit in Kubernetes.
- Container Runtime (like Docker, Podman)
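As a quick check, assuming you already have a running cluster and kubectl configured against it, you can see these node-level details reflected in the node listing:
kubectl get nodes -o wide   # the VERSION column shows the kubelet version, CONTAINER-RUNTIME the runtime in use
kubectl describe node <node-name>   # detailed view of a single node's capacity, conditions and pods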
How pods interact?
In Kubernetes, pods are where the containers reside. So how do these pods interact? The obvious answer would be through IPs, and that is right to some extent, but there is one problem: in Kubernetes a pod can die at any time for multiple reasons, such as resource unavailability, and the disaster-recovery property of Kubernetes will create a new pod in place of the old one. The new pod may get a different IP address. So every time a pod died we would have the manual step of reconfiguring the IP, and this basically defeats the whole purpose of container orchestration.
To deal with this issue, Kubernetes introduced the concept of the Service API.
Service:
The Service API, part of Kubernetes, is an abstraction to help you expose groups of Pods over a network. Each Service object defines a logical set of endpoints (usually these endpoints are Pods) along with a policy about how to make those pods accessible.
Usage:
- Provides a permanent IP/DNS address for a pod or group of pods (replicas); see the quick example after this list.
- Acts as a load balancer for the same group of pods (replicas).
- When dealing with stateful deployments like databases, a normal Deployment is not considered ideal practice. The recommended approach is to use a StatefulSet backed by persistent storage. That is out of scope for this post, but you can refer to the StatefulSet documentation on Kubernetes for details if you need it.
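For example, once the orders Service defined later in this post exists, it gets a stable DNS name that other pods can use no matter how often the underlying pods are recreated. You can check the resolution from a throwaway pod (a quick sketch, assuming the cluster's default DNS setup):
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 -- nslookup orders-service.default.svc.cluster.local
The same naming scheme is what the deployment files later in this post rely on for the EUREKA_SERVER value.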
Ingress:
An API object that manages external access to the services in a cluster, typically HTTP.
Ingress may provide load balancing, SSL termination and name-based virtual hosting.
In general, the use of an Ingress in our case, if we use one, is to expose our cluster to the outside world, preferably via DNS.
You can find all details here: https://kubernetes.io/docs/concepts/services-networking/ingress/
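We do not actually use an Ingress in this demo (the gateway is exposed through a LoadBalancer Service instead), but for reference a minimal sketch would look like the following, assuming an ingress controller such as ingress-nginx is installed and using a hypothetical host name:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: cloud-gateway-ingress
spec:
  rules:
    - host: gateway.example.com        # hypothetical DNS name
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: cloud-gateway    # the gateway Service defined later in this post
                port:
                  number: 8084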
There are a few more components, like Secrets and ConfigMaps, to store secrets and application configuration respectively. You can use those based on your use case.
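For instance, the plain-text DB_PASSWORD used later in this post could instead come from a Secret (a sketch only; this demo keeps the value inline for simplicity):
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
stringData:
  DB_PASSWORD: pass
# and in the Deployment's container spec:
#   env:
#     - name: DB_PASSWORD
#       valueFrom:
#         secretKeyRef:
#           name: db-credentials
#           key: DB_PASSWORD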
With this we are clear on the Kubernetes basics. Let's now move on to our application.
Our application consists of the following:
- REST APIs (Account, Orders) [3 pods]
https://github.com/Parasmani300/cloud-accounts
https://github.com/Parasmani300/cloud-orders
- Eureka Service Discovery Server: When a client registers with Eureka, it provides metadata about itself, such as host, port, health indicator URL, home page, and other details. Eureka receives heartbeat messages from each instance belonging to a service. If the heartbeat fails over a configurable timetable, the instance is normally removed from the registry. This is what our gateway uses to get information about the registered applications and route to them (a sketch of the client configuration follows this list).
https://github.com/Parasmani300/cloud-service-discovery
- Cloud Gateway + OAuth2.0 Security (inside it): Kubernetes also provides out-of-the-box options like the NGINX Ingress Controller for setting up a gateway along with security, but since we are relying on Spring Cloud for all the services, we are using Spring Cloud Gateway for fine-grained control of all the services as per our requirements.
https://github.com/Parasmani300/cloud-gateway
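For context, registering a service with Eureka only takes a couple of properties on the client side. A rough sketch of what the account/orders services configure (the actual repositories may differ slightly):
spring.application.name=account
eureka.client.serviceUrl.defaultZone=${EUREKA_SERVER}
eureka.instance.preferIpAddress=true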
The REST APIs are simple controllers with GET and POST methods backed by a MySQL database; you can look into the code for details.
We have containerized the applications and pushed them to Docker Hub so they can be pulled while deploying the services to the cluster.
You can pull all the Docker images from Docker Hub:
docker pull parasmani300/cloud-gateway:1.0
docker pull parasmani300/account:1.0
docker pull parasmani300/orders:1.0
docker pull parasmani300/service-discovery:1.0
So let us now deploy orders and account to the Kubernetes cluster and understand the Deployment and Service files.
Orders:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: orders
  template:
    metadata:
      labels:
        app: orders
    spec:
      containers:
        - name: orders
          image: parasmani300/orders:1.0
          imagePullPolicy: Always
          ports:
            - containerPort: 8081
          env:
            - name: DB_PASSWORD
              value: pass
            - name: DB_URL
              value: jdbc:mysql://192.168.49.2:31454/my_db
            - name: DB_DDL_AUTO
              value: create
            - name: EUREKA_SERVER
              value: http://service-discovery-service.default.svc.cluster.local:8761/eureka
---
apiVersion: v1
kind: Service
metadata:
  name: orders-service
spec:
  selector:
    app: orders
  ports:
    - protocol: TCP
      port: 8081
      targetPort: 8081
  type: ClusterIP
The above config contains two objects: a Deployment and a Service.
apiVersion -> Specifies the Kubernetes API version used for this object (apps/v1 for a Deployment)
kind: Deployment -> Tells Kubernetes what kind of object to create, e.g. a Deployment or a Service
metadata:
  name: orders-deployment -> Sets the name of the Deployment object
replicas: 3 -> Tells how many pod replicas to run
spec:
  containers:
    - name: orders
      image: parasmani300/orders:1.0
      imagePullPolicy: Always
      ports:
        - containerPort: 8081
      env:
        - name: DB_PASSWORD
          value: pass
        - name: DB_URL
          value: jdbc:mysql://192.168.49.2:31454/my_db
        - name: DB_DDL_AUTO
          value: create
        - name: EUREKA_SERVER
          value: http://service-discovery-service.default.svc.cluster.local:8761/eureka
The above part is the one we need to look into, as it is where we supply all the configuration to pull the container image, set the port, set the environment variables, and every other sort of app config.
apiVersion: v1
kind: Service
metadata:
  name: orders-service
spec:
  selector:
    app: orders
  ports:
    - protocol: TCP
      port: 8081
      targetPort: 8081
  type: ClusterIP
The above is the Service definition for the orders service.
****
selector:
  matchLabels:
    app: orders
The matchLabels of the Deployment must be the same as the selector of the Service for the Service to be mapped to the pods, as illustrated above.
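Once the manifests above are applied, you can verify that the selector actually picks up the pods:
kubectl get endpoints orders-service   # should list one pod IP per running orders replica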
spec:
  selector:
    app: orders
protocol: TCP: This specifies the protocol used for communication. In this case it is TCP, a common protocol for reliable, ordered delivery of data packets.
port: 8081: This defines the port number on which the Service listens for incoming traffic. In this example, the Service listens on port 8081 for incoming TCP traffic.
targetPort: 8081: This specifies the port to which the traffic is forwarded inside the pods targeted by this Service. In other words, when traffic is directed to this Service, Kubernetes forwards it to port 8081 of the pods associated with it.
type: This field specifies what type of Service we are creating.
It is generally one of four:
- ClusterIP: This is the default type. It exposes the Service on an internal IP in the cluster. The Service is only reachable from within the cluster.
- NodePort: This type exposes the Service on a port across all nodes in the cluster. It allocates a specific port on each node, and traffic to that port is forwarded to the Service. This makes the Service accessible from outside the cluster by using any node’s IP address and the allocated port.
- LoadBalancer: This type provisions an external load balancer in the cloud environment (if supported) and assigns a unique external IP to the Service. The external load balancer routes traffic to the Service, making it accessible from outside the cluster.
- ExternalName: This type maps the Service to a DNS name rather than an IP address. It is useful when you want to direct traffic to an external service located outside the cluster.
It is very important to choose the type carefully: types like LoadBalancer and ExternalName have their use cases but are more expensive options. So if an application only communicates within the cluster, it is better to keep its Service of type ClusterIP, as in the case above, since any request coming into or going out of the application is supposed to pass through the gateway, and service registration also happens inside the cluster itself.
Similarly, we have the Account deployment and service file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: account-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: account
  template:
    metadata:
      labels:
        app: account
    spec:
      containers:
        - name: account
          image: parasmani300/account:1.0
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
          env:
            - name: DB_PASSWORD
              value: pass
            - name: DB_URL
              value: jdbc:mysql://192.168.49.2:31454/my_db
            - name: DB_DDL_AUTO
              value: create
            - name: EUREKA_SERVER
              value: http://service-discovery-service.default.svc.cluster.local:8761/eureka
---
apiVersion: v1
kind: Service
metadata:
  name: account-service
spec:
  selector:
    app: account
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
  type: ClusterIP
Similarly, for service discovery deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: service-discovery-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: service-discovery
  template:
    metadata:
      labels:
        app: service-discovery
    spec:
      containers:
        - name: service-discovery
          image: parasmani300/service-discovery:1.0
          imagePullPolicy: Always
          ports:
            - containerPort: 8761
---
apiVersion: v1
kind: Service
metadata:
  name: service-discovery-service
spec:
  selector:
    app: service-discovery
  ports:
    - protocol: TCP
      port: 8761
      targetPort: 8761
  type: LoadBalancer
Mind you, since all the applications register with service discovery, the service discovery server should practically be the first to start, before any other application.
Spring Cloud Gateway + OAuth2.0 (Google)
To enable security we need to add the following configuration to our Spring Boot gateway application:
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.http.HttpMethod;
import org.springframework.security.config.Customizer;
import org.springframework.security.config.annotation.web.reactive.EnableWebFluxSecurity;
import org.springframework.security.config.web.server.ServerHttpSecurity;
import org.springframework.security.web.server.SecurityWebFilterChain;

@Configuration
@EnableWebFluxSecurity
public class SecurityConfig {

    @Bean
    public SecurityWebFilterChain springSecurityChain(ServerHttpSecurity serverHttpSecurity) {
        serverHttpSecurity
                // allow all GET requests without authentication; everything else needs a valid token
                .authorizeExchange(exchanges -> exchanges
                        .pathMatchers(HttpMethod.GET).permitAll()
                        .pathMatchers("/**").authenticated())
                // validate incoming JWTs (Google ID tokens) as a resource server
                .oauth2ResourceServer(oAuth2ResourceServerSpec -> oAuth2ResourceServerSpec.jwt(Customizer.withDefaults()));
        serverHttpSecurity.csrf(csrfSpec -> csrfSpec.disable());
        return serverHttpSecurity.build();
    }
}
We also need the below Gradle dependencies:
implementation 'org.springframework.boot:spring-boot-starter-actuator'
implementation 'org.springframework.boot:spring-boot-starter-security'
implementation 'org.springframework.security:spring-security-oauth2-resource-server:6.2.3'
implementation 'org.springframework.security:spring-security-oauth2-jose:6.2.3'
// implementation 'org.springframework.boot:spring-boot-starter-web'
implementation 'org.springframework.cloud:spring-cloud-starter-gateway'
implementation 'org.springframework.cloud:spring-cloud-starter-netflix-eureka-client'
testImplementation 'org.springframework.boot:spring-boot-starter-test'
testImplementation 'io.projectreactor:reactor-test'
Properties file:
spring.application.name=Gateway
server.port=8084
eureka.client.serviceUrl.defaultZone=${EUREKA_SERVER}
management.endpoints.web.exposure.include=*
info.app.name=Gateway Server Microservice
info.app.description=Gateway Server Application
info.app.version=1.0.0
management.info.env.enabled = true
management.endpoint.gateway.enabled=true
spring.cloud.gateway.discovery.locator.enabled=true
spring.cloud.gateway.discovery.locator.lowerCaseServiceId=true
eureka.instance.preferIpAddress=true
spring.security.oauth2.client.registration.google.clientId=${GOOGLE_CLIENT_ID}
spring.security.oauth2.client.registration.google.clientSecret=${GOOGlE_CLIENT_SECRET}
spring.security.oauth2.client.registration.google.scope=openid,profile,email
spring.security.oauth2.resourceserver.jwt.issuer-uri=https://accounts.google.com
spring.security.oauth2.resourceserver.jwt.jwk-set-uri=https://www.googleapis.com/oauth2/v3/certs
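Because spring.cloud.gateway.discovery.locator.enabled=true, routes are derived automatically from the Eureka registry (e.g. requests under /account/** go to the service registered as account). For comparison, a roughly equivalent explicit route could be declared like this (a sketch only; not used in this demo):
spring.cloud.gateway.routes[0].id=account
spring.cloud.gateway.routes[0].uri=lb://ACCOUNT
spring.cloud.gateway.routes[0].predicates[0]=Path=/account/**
spring.cloud.gateway.routes[0].filters[0]=StripPrefix=1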
With this our gateway application is ready. You can find the full code here.
You can also pull the Docker image from Docker Hub:
docker pull parasmani300/cloud-gateway:1.0
The deployment file for the gateway is:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cloud-gateway-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cloud-gateway
  template:
    metadata:
      labels:
        app: cloud-gateway
    spec:
      containers:
        - name: cloud-gateway
          image: parasmani300/cloud-gateway:1.0
          imagePullPolicy: Always
          ports:
            - containerPort: 8084
          env:
            - name: EUREKA_SERVER
              value: http://service-discovery-service.default.svc.cluster.local:8761/eureka
            - name: GOOGLE_CLIENT_ID
              value: CLIENT_ID
            - name: GOOGlE_CLIENT_SECRET
              value: CLIENT_SECRET
---
apiVersion: v1
kind: Service
metadata:
  name: cloud-gateway
spec:
  selector:
    app: cloud-gateway
  ports:
    - protocol: TCP
      port: 8084
      targetPort: 8084
  type: LoadBalancer
On the gateway part I think we are clear that the gateway acts as an API gateway, allowing you to route incoming HTTP requests to different backend services based on various criteria such as URL paths, headers, or query parameters.
The part to understand here is how Google OAuth2.0 works:
This image gives a brief overview of the OAuth2.0 flow. You can refer to this link for more details.
If you look at the example above, you will see that we require the Google OAuth2.0 client ID and secret.
To get them, log in to your GCP console:
In the left bar, go to APIs & Services > Credentials
Select OAuth Client ID
Select Web application and give it any app name
Next, add the gateway URL in the request origins and, in the redirect URIs, the URL of your frontend app. In my case I am using Postman, so I have put that URL.
It will pop up a screen like this, which shows the client ID and client secret.
Next, click on OAuth consent screen, click Edit App, and Save & Continue to the Scopes page.
Add these three scopes to your app.
— — — DEMO — — — — —
Now, let us demo the application and then we can end our post:
For the illustration I am using minikube. You can use any cloud provider if you have one; after logging in to the cloud, the steps remain the same as illustrated here.
minikube start
This will start a Kubernetes cluster on your local machine.
You can check the status of minikube anytime using:
minikube status
To view the default kubernetes dashboard the command is:
minikube dashboard
Generally, we interact with K8s from the command line using kubectl. If you are on a cloud provider it is generally preinstalled; if not, install it.
To view the active pods at any time:
kubectl get pods
The kubectl commands are very intuitive, e.g.
The command to get services is:
kubectl get services
Command to get all ingress is:
kubectl get ingress
Deploying your apps to kubernetes:
kubectl apply -f file-deployment.yaml
# replace file-deployment.yaml with your actual deployment file in the
# proper directory, e.g.:
kubectl apply -f service-discovery-deployment.yaml
kubectl apply -f orders-deployment.yaml
kubectl apply -f account-deployment.yaml
kubectl apply -f cloud-gateway-deployment.yaml
# It is suggested to start the apps in the order given, as service discovery
# needs to be up for the apps to register, and the apps need to be up to be
# discovered by the gateway.
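Optionally, you can wait for each piece to finish rolling out before applying the next:
kubectl rollout status deployment/service-discovery-deployment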
Once the apps are up, check:
kubectl get pods
kubectl get svc
If all the Services and Deployments are running for you, then you are ready to test your application.
Get the IP and the external port of the cloud gateway and you are ready to go:
If you are on minikube the command is:
minikube ip
The base address of your cloud gateway would be:
http://{your-ip}:{port}/[app-name-registered-on-service-discovery]/endpoint
e.g:
POST http://192.168.49.2:32285/account/api/v1/save
GET http://192.168.49.2:32285/account/api/v1/get/1
If you look at the security config of the gateway, you will observe that we have permitted all GET calls to remain public, while all other calls must be authenticated with a token.
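If you prefer the command line over Postman, the equivalent calls look roughly like this (the ID token is obtained in the steps below, and the POST body is only sketched here; see the account repo for the exact payload):
# public GET, no token needed
curl http://192.168.49.2:32285/account/api/v1/get/1
# POST requires a Google ID token
curl -X POST http://192.168.49.2:32285/account/api/v1/save -H "Authorization: Bearer <ID_TOKEN>" -H "Content-Type: application/json" -d '{ ... }'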
First we will demo get call:
Next let’s do a post call:
It failed with 401 Unauthorized. Now let's generate a token and then make the same call.
In the Authorization type dropdown of Postman, select OAuth 2.0
Callback Url: https://www.getpostman.com/oauth2/callback
Auth URL: https://accounts.google.com/o/oauth2/auth
Access token URL: https://oauth2.googleapis.com/token
Go to configure token below:
Then click Create New Token
Sign In and Authorize.
It will give you page like this:
Copy the ID token and use it with the Bearer Token option.
We got a success now!
This was a demo using minimal features. It would be better organised if we supplied the passwords using Secrets and kept all the configs in ConfigMaps, as sketched earlier. We can try that some time later.