Deploying Internal Applications with Private IPs on Azure Kubernetes Service (AKS)

Overview

In this post, we'll look at how we can use Azure Kubernetes Service (AKS) to host internal applications without exposing them to the public internet. Undeniably, Kubernetes has gained massive interest from the community over the past years. However, while Kubernetes is often used to run web-facing applications, enterprise customers in particular are starting to leverage Kubernetes for hosting internal-facing applications.
To achieve this on Azure, we’ll leverage an internal load balancer for exposing the applications to a virtual network (VNet) within Azure, so that users can access them privately.
Exposing our applications on AKS to our internal clients only

By the way: Azure Kubernetes Service (AKS) became generally available last month, so let's get started!

Provisioning a Kubernetes Cluster with AKS

Before getting started, we need to make sure our Azure CLI is installed and up to date. Next, let's install kubectl so we can talk to our Kubernetes cluster later:
$ az aks install-cli
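A quick sanity check that both tools are in place (exact version numbers will differ):
$ az --version
$ kubectl version --client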
In our example, we’ll deploy AKS into a new resource group privateaks in Azure:
$ az group create --name privateaks --location westeurope
{
  "location": "westeurope",
  ...
}
By using a separate resource group, we can easily clean up afterwards.
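Once we're done experimenting, one command removes everything again (the --no-wait flag simply returns immediately instead of blocking until deletion finishes):
$ az group delete --name privateaks --yes --no-wait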
Up next, we will create a VNet in Azure that will host our Kubernetes cluster, our internal load balancer for our application(s), as well as the associated IP addresses:
$ az network vnet create \
--resource-group privateaks \
--name aksvnet \
--address-prefix 10.10.0.0/16 \
--subnet-name akscluster \
--subnet-prefix 10.10.0.0/24

Our response looks good:

{
  "newVNet": {
    "addressSpace": {
      "addressPrefixes": [
        "10.10.0.0/16"
      ]
    },
    ...
    "id": "/subscriptions/xxxxx-xxxxx-xxxxxx/resourceGroups/privateaks/providers/Microsoft.Network/virtualNetworks/aksvnet",
    "location": "westeurope",
    "name": "aksvnet",
    "provisioningState": "Succeeded",
    "resourceGroup": "privateaks",
    "resourceGuid": "....",
    "subnets": [
      {
        "addressPrefix": "10.10.0.0/24",
        ...
        "id": "/subscriptions/xxxxx-xxxxx-xxxxxx/resourceGroups/privateaks/providers/Microsoft.Network/virtualNetworks/aksvnet/subnets/akscluster",
        "name": "akscluster",
        "provisioningState": "Succeeded",
        "resourceGroup": "privateaks",
        ...
      }
    ],
    "tags": {},
    "type": "Microsoft.Network/virtualNetworks",
    "virtualNetworkPeerings": []
  }
}
Next, we can start deploying our Kubernetes cluster into our VNet. In this case, we need to specify the vnet-subnet-id, which we can copy from the output of the previous command:
$ az aks create --name privateakscluster \
                --resource-group privateaks \
                --location westeurope \
                --node-count 1 \
                --vnet-subnet-id "/subscriptions/xxxxx-xxxxx-xxxxxx/resourceGroups/privateaks/providers/Microsoft.Network/virtualNetworks/aksvnet/subnets/akscluster" \
                --dns-name-prefix private-aks-cluster-test
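If we don't want to fish the id out of the JSON output by hand, we could also query it directly via the CLI (a small sketch using the same resource names as above):
$ az network vnet subnet show \
    --resource-group privateaks \
    --vnet-name aksvnet \
    --name akscluster \
    --query id \
    --output tsv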
If we care about the internal IP ranges of our cluster, we can specify those via the following command-line parameters (see the sketch after this list):
--pod-cidr
  A CIDR notation IP range from which to assign pod IPs when kubenet is used.
--service-cidr
  A CIDR notation IP range from which to assign service cluster IPs.
--docker-bridge-address
  An IP address and netmask assigned to the Docker bridge.
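For example, this is how the full command could look with those flags added; the CIDR values below are merely illustrative defaults (not something this walkthrough depends on) and should not overlap with the VNet's 10.10.0.0/16 address space:
$ az aks create --name privateakscluster \
                --resource-group privateaks \
                --location westeurope \
                --node-count 1 \
                --vnet-subnet-id "/subscriptions/xxxxx-xxxxx-xxxxxx/resourceGroups/privateaks/providers/Microsoft.Network/virtualNetworks/aksvnet/subnets/akscluster" \
                --dns-name-prefix private-aks-cluster-test \
                --pod-cidr 10.244.0.0/16 \
                --service-cidr 10.0.0.0/16 \
                --docker-bridge-address 172.17.0.1/16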
The az aks create command will take a while to finish, but afterwards, we'll be presented with a shiny, new Kubernetes cluster.
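To check on progress without watching the terminal, we could poll the cluster's provisioning state (it switches to Succeeded once the cluster is ready):
$ az aks show --resource-group privateaks --name privateakscluster \
    --query provisioningState --output tsv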

Deploying an Internal Application with private IP Addresses

With our cluster running, we can start deploying our application on it. Firstly, we need to grab the credentials for it:
$ az aks get-credentials --resource-group privateaks --name privateakscluster
Merged "privateakscluster" as current context in /Users/csiebler/.kube/config
During the deployment process, Azure has created a service principal (SP) for our Kubernetes cluster. This SP is used by the cluster in the background to access and manage Azure resources on its behalf. With the credentials merged, we can now talk to the cluster via kubectl, e.g., to check its nodes:
$ kubectl get nodes
NAME                       STATUS    ROLES     AGE       VERSION
aks-nodepool1-27891563-0   Ready     agent     23m       v1.9.6
We can retrieve information about our SP by having a look at the configuration file:
$ cat ~/.azure/aksServicePrincipal.json
{
  "xxxxx-xxxxx-xxxxxx": {
    "client_secret": "xxxxxxxxxxxx",
    "service_principal": "c16424e2-c3f8-xxxx-xxxx-xxxxxxxxxx"
  }
}
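If we want to see which roles this SP currently holds, we could list its role assignments (a quick sketch, reusing the service_principal id from the file above):
$ az role assignment list \
    --assignee c16424e2-c3f8-xxxx-xxxx-xxxxxxxxxx \
    --output table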
It seems that there currently is a small bug when using this Service Principal with AKS: the Service Principal does not get properly added to the subnet into which AKS should deploy our applications. Hence, we need to manually add the SP to the subnet via the Azure Portal:
  1. Navigate to Resource groups in the Azure Portal
  2. Select the privateaks resource group
  3. Select the aksvnet VNet
  4. Select Subnets and choose akscluster subnet
  5. Click Manage Users and click the + Add button
  6. In the Add dialog, select the Owner role (Contributor isn't enough) and paste the Service Principal ID that we retrieved before (e.g., c16424e2-c3f8-xxxx-xxxx-xxxxxxxxxx)
Adding our AKS Service Principal to our Subnet
Hit Save and we’re done.
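If we'd rather script this step instead of clicking through the portal, the same role assignment could also be created via the CLI; a sketch, assuming the placeholder ids are replaced with the real values from your environment:
$ az role assignment create \
    --assignee c16424e2-c3f8-xxxx-xxxx-xxxxxxxxxx \
    --role Owner \
    --scope "/subscriptions/xxxxx-xxxxx-xxxxxx/resourceGroups/privateaks/providers/Microsoft.Network/virtualNetworks/aksvnet/subnets/akscluster"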
Next, we can create our voting-private.yaml deployment file for our application:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: azure-vote-back
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: azure-vote-back
    spec:
      containers:
      - name: azure-vote-back
        image: redis
        ports:
        - containerPort: 6379
          name: redis
---
apiVersion: v1
kind: Service
metadata:
  name: azure-vote-back
spec:
  ports:
  - port: 6379
  selector:
    app: azure-vote-back
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: azure-vote-front
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: azure-vote-front
    spec:
      containers:
      - name: azure-vote-front
        image: microsoft/azure-vote-front:v1
        ports:
        - containerPort: 80
        env:
        - name: REDIS
          value: "azure-vote-back"
---
apiVersion: v1
kind: Service
metadata:
  name: azure-vote-front
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: azure-vote-front
While this example is fairly straightforward, the important piece is the service.beta.kubernetes.io/azure-load-balancer-internal: "true" annotation, which tells AKS to provision an internal load balancer for our application.
Finally, we can deploy our application to AKS:
$ kubectl apply -f voting-private.yaml
deployment.apps "azure-vote-back" created
service "azure-vote-back" created
deployment.apps "azure-vote-front" created
service "azure-vote-front" created
Once our application is running, we can view the details:
$ kubectl get service
NAME               TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
azure-vote-back    ClusterIP      10.0.58.87   <none>        6379/TCP       28m
azure-vote-front   LoadBalancer   10.0.58.56   10.10.0.5     80:30363/TCP   28m
kubernetes         ClusterIP      10.0.0.1     <none>        443/TCP        55m
As we can see, our frontend application got an "external IP" that is actually a private address within our subnet. Success! If we have a look at Azure, we can also see that an internal load balancer has been deployed:
Our AKS deployment in Azure
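If we prefer the CLI over the portal, we could also list the load balancers in the automatically created node resource group (AKS names it MC_<resource group>_<cluster>_<location>, so MC_privateaks_privateakscluster_westeurope in our case); a quick sketch:
$ az network lb list \
    --resource-group MC_privateaks_privateakscluster_westeurope \
    --output table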
In case the application does not deploy properly, we can quickly check whether the services ran into any problems:
$ kubectl describe services
Especially if you don't add the Service Principal as an Owner of the VNet subnet, you will most likely run into the following error:
Warning CreatingLoadBalancerFailed 1m (x6 over 6m) service-controller Error creating load balancer (will retry):
failed to ensure load balancer for service default/azure-vote-front: ensure(default/azure-vote-front): lb(kubernetes)
failed to ensure host in pool: "network.InterfacesClient#CreateOrUpdate: Failure responding to request: StatusCode=403
Original Error: autorest/azure: Service returned an error. Status=403 Code=\"LinkedAuthorizationFailed\" Message=\"The client 'xxxxxxxxxxx' with object id 'xxxxxxxxxxx' has permission to perform action 'Microsoft.Network/networkInterfaces/write' on scope '/subscriptions/xxxxx-xxxxx-xxxxxx/resourceGroups/MC_privateaks_privateakscluster_westeurope/providers/Microsoft.Network/networkInterfaces/aks-nodepool1-27891563-nic-0';
however, it does not have permission to perform action 'Microsoft.Network/virtualNetworks/subnets/join/action' on the linked scope(s) '/subscriptions/xxxxx-xxxxx-xxxxxx/resourceGroups/privateaks/providers/Microsoft.Network/virtualNetworks/aksvnet/subnets/akscluster'.\""

Testing the Application

From within our subnet, we should now be able to access our application under 10.10.0.5:
Our app is running on a private IP in Azure
I just used a Windows VM running in the same subnet to test access. If you have a proper VPN or ExpressRoute connection to your on-premises network (including the necessary routing), you should also be able to access the application from your local machine.
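From any Linux machine in the VNet, a quick smoke test could be as simple as fetching the page via curl (the IP is the EXTERNAL-IP we got from kubectl get service above):
$ curl http://10.10.0.5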

Summary

Azure Kubernetes Service (AKS) is a hassle-free option to run a fully managed Kubernetes cluster on Azure. By deploying the cluster into a Virtual Network (VNet), we can deploy internal applications without exposing them to the public internet. This is especially interesting for enterprise customers who want to move existing and new applications to Kubernetes, but do not want them to be exposed to the internet.
If you have any questions, feel free to reach out to me on Twitter @clemenssiebler.
