Shrestha Rajat



Last updated Jul 9, 2023


#Azure #cloud #Kubernetes #microservices #managed-kubernetes-cluster

AKS is a fully managed Kubernetes container management service from Microsoft Azure. Together with Azure DevOps, it offers a robust serverless Continuous Integration and Continuous Deployment (CI/CD) experience.

In AKS, users only pay for the nodes that are in use. Other components, such as the Azure-managed control plane, are free of cost.

# Control plane

When you create an AKS cluster, a control plane is automatically created and configured. This control plane is provided at no cost as a managed Azure resource abstracted from the user. You only pay for the nodes attached to the AKS cluster. The control plane and its resources reside only in the region where you created the cluster.


# Node pools

Node pools use virtual machine scale sets as the underlying infrastructure, which allows the cluster to scale the number of nodes in a node pool. New nodes created in a node pool are always the same size as the one you specified when you created the pool.
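As a rough sketch, adding and scaling a node pool on an existing cluster with the Azure CLI might look like the following. The resource group, cluster, and pool names here are hypothetical placeholders:

```shell
# Add a user node pool of three nodes to an existing cluster
# ("test-nginx" and "nginx-cluster" are placeholder names).
az aks nodepool add \
  --resource-group test-nginx \
  --cluster-name nginx-cluster \
  --name userpool \
  --node-count 3 \
  --node-vm-size Standard_DS2_v2

# Scale the pool later; new nodes keep the original VM size.
az aks nodepool scale \
  --resource-group test-nginx \
  --cluster-name nginx-cluster \
  --name userpool \
  --node-count 5
```

Note that the VM size is fixed per pool; to change it you create a new node pool rather than resize an existing one.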

# Networking

An Azure Kubernetes Service (AKS) cluster blocks all inbound traffic from the internet to the cluster to ensure network security. Deployed workloads in Kubernetes are, by default, only accessible from inside the cluster. To expose applications to the outside world, you need to open specific ports and forward them to your services. The network configuration for containers is temporary: a container's configuration and the data in it aren't persistent between executions. After you delete a container, all of its information is gone unless it's configured to use a volume. The same applies to the container's network configuration and any IP addresses assigned to it.

Kubernetes has two network availability abstractions that allow you to expose any app: services and ingresses. Both are responsible for allowing and redirecting traffic from external sources to the cluster.

# Service

A Kubernetes service is a workload that abstracts the IP address for networked workloads. A Kubernetes service acts as a load balancer and redirects traffic to the specified ports by using port-forwarding rules.

*Diagram: two Kubernetes services. The first service is applied to one pod; the second service is applied to two pods.*

You define a service in the same way as a deployment, by using a YAML manifest file. The service uses the same selector key as deployments to select and group resources with matching labels into one single IP.

A Kubernetes service needs four pieces of information to route traffic:

  - **Target resource**: defined by the `selector` key in the service manifest file. This value selects all the resources with a given label onto a single IP address.
  - **Service port**: the inbound port for your application, defined by the `port` key in the service manifest file. All requests come to this port.
  - **Network protocol**: identifies the network protocol for which the service will forward network data.
  - **Resource port**: the port on the target resource on which incoming requests are received, defined by the `targetPort` key in the service manifest file.
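These four values map directly onto a service manifest. A minimal sketch, assuming a hypothetical app labeled `app: webapp` that listens on container port 8080:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: webapp-service   # hypothetical name
spec:
  type: ClusterIP        # default type; reachable only inside the cluster
  selector:
    app: webapp          # target resource: pods with this label
  ports:
    - protocol: TCP      # network protocol
      port: 80           # service port: inbound port for the app
      targetPort: 8080   # resource port: container port receiving traffic
```

Applying this manifest with `kubectl apply -f` gives all matching pods a single stable cluster IP behind port 80.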

# Types of services

Services can be of several types, and each type changes the behavior of the applications selected by the service. The common types are `ClusterIP` (an internal-only cluster IP, the default), `NodePort` (exposes a static port on each node), `LoadBalancer` (provisions a cloud load balancer with a public IP), and `ExternalName` (maps the service to an external DNS name).

# Ingress Controller

Ingress controllers provide the capability to deploy and expose your applications to the world without the need to configure network-related services.

Ingress controllers create a reverse-proxy server that automatically serves all requests from a single DNS endpoint. You don't have to create a DNS record every time a new service is deployed; the ingress controller takes care of it. When a new ingress is deployed to the cluster, the ingress controller creates a new record in an Azure-managed DNS zone and links it to an existing load balancer. This allows easy access to the resource through the internet without additional configuration.
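A minimal ingress manifest might look like the sketch below. It assumes an ingress controller is already installed (here the AKS application routing add-on); the resource name, hostname, and backing service are hypothetical:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: webapp-ingress   # hypothetical name
spec:
  ingressClassName: webapprouting.kubernetes.azure.com  # assumes the AKS app routing add-on
  rules:
    - host: webapp.example.com        # hypothetical hostname; the controller manages the DNS record
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: webapp-service  # routes traffic to this hypothetical service
                port:
                  number: 80
```

The controller watches for ingress resources like this one and wires the hostname to the cluster's load balancer on your behalf.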


# Demo

This demo sets up a basic AKS cluster that hosts a static web app.

# Requirements

  1. An Azure subscription with sufficient permissions to create an AKS cluster and related services (i.e. compute instances, a container registry).

# Steps:

  1. Create a resource group to organize the Azure services used in the project.
  2. Create the cluster: select the resource group for the AKS cluster, fill in the basic settings (cluster name, region, Kubernetes version), create only 1 node, configure the other settings (node pools, authentication, networking, integrations, and tags), then create the cluster.
  3. Copy the azure-vote deployment YAML and apply it. TODO: modify the YAML to run a simple nginx web server that hosts a page.
```shell
# create a container registry and resource group
az group create --name test-nginx --location uksouth
az acr create --resource-group test-nginx \
  --name testnginx --sku Basic   # ACR names must be lowercase alphanumeric

# upload docker image to acr
cd ~/Documents/nginx-example
az acr build --image testnginx \
  --registry testnginx \
  --file dockerfile .

# create an aks cluster and grant it pull access to the registry
az aks create --resource-group test-nginx --name nginx-cluster \
  --node-count 2 --enable-addons monitoring --generate-ssh-keys \
  --attach-acr testnginx

# after setting up the cluster, set the local cli subscription for kubectl
az account set --subscription xxxxxxxxxxxxx
az aks get-credentials --resource-group test-nginx --name nginx-cluster
# after running the previous command your local machine
# will have access to the kubernetes api and you can finally run kubectl commands

# download the azure-vote yml
curl <azure-vote-manifest-url> -o azure-vote.yml   # placeholder: supply the manifest URL

# apply the azure-vote.yml deployment
kubectl apply -f azure-vote.yml

# fetch the ip of the loadbalancer
kubectl get svc azure-vote-front --watch
```
