In May 2019, Network Policies on Azure Kubernetes Service (AKS) became generally available, through either the Azure-native Network Policy plugin or the community project Calico. This user-defined network policy feature enables secure network segmentation within Kubernetes and allows cluster operators to control which pods can communicate with each other and with resources outside the cluster.

In this tutorial, we’ll practice building four different Network Policies:

  1. WEB is exposed and accessible publicly from the Internet.
  2. WEB talks to API.
  3. API talks to DB.
  4. No other exposure or communication is allowed.

For additional background, I encourage reading the article Integrating Azure CNI and Calico: A technical deep dive, where you’ll see all the key concepts explained from a networking perspective with AKS. Furthermore, here is a Kubernetes tutorial, Calico tutorial and AKS tutorial to help you get started.

Some things to keep in mind before you jump into the tutorial:

    • By default, any pod can communicate with any other pod across namespaces within a Kubernetes cluster; that’s by design. Kubernetes Network Policies let you enforce the “Just Enough Access” principle of your security posture.
    • By default, Kubernetes doesn’t come with a plugin pre-installed to apply such Network Policies. You need to install one, otherwise your Network Policies won’t have any effect.
    • With AKS, you have the choice between Azure and Calico as your Network Policy plugin. You can only define this at cluster creation; updating an existing cluster is not yet supported. The Calico Network Policy plugin works with either Kubenet or Azure CNI, but the Azure Network Policy plugin requires Azure CNI (not Kubenet). Network Policies are not yet supported for Windows nodes with AKS.
    • Both Azure and Calico Network Policy plugins are open source: Azure and Calico.

[Diagram: Network Policies rules for WEB, API, and DB]

Let’s do it!

First, we need to provision an AKS cluster with Network Policies enabled. For this demo, we will use Calico:

az aks create...  --network-policy calico

To illustrate this, we first need to deploy the following Pods and Services:

ns=yournamespace
kubectl create ns $ns
kubectl config set-context --current --namespace $ns
kubectl apply -f https://raw.githubusercontent.com/mathieu-benoit/k8s-netpol/master/db-api-web-deployments.yaml

You now have three Pods and three Services:

kubectl get pod,svc
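Throughout the rest of this tutorial, keep in mind how the Service ports map to the Pod ports, because Network Policies match Pod ports, not Service ports. The exact manifests live in the file applied above; based on the labels and ports used later in this tutorial, the db and api Services look roughly like this (a sketch, not the authoritative content of that file):

apiVersion: v1
kind: Service
metadata:
  name: db
spec:
  selector:
    app: db            # Pod label that the Network Policies will also select
  ports:
  - port: 15984        # Service port, used by 'curl http://db:15984'
    targetPort: 5984   # Pod port, the one referenced in the Network Policies
---
apiVersion: v1
kind: Service
metadata:
  name: api
spec:
  selector:
    app: api
  ports:
  - port: 8080         # Service port, used by 'curl http://api:8080'
    targetPort: 3000   # Pod port, the one referenced in the Network Policies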

We can check that WEB is publicly accessible (you may need to wait a few seconds for the public IP to be provisioned):

curl $(kubectl get svc web -o jsonpath='{.status.loadBalancer.ingress[0].ip}')

Our first test is to confirm that any pod can communicate with any other pod, and even reach outside the cluster. Let’s run a few commands that should all succeed:
kubectl run curl-$RANDOM --image=radial/busyboxplus:curl --rm -it --generator=run-pod/v1
# curl https://kubernetes.io
# curl http://db:15984
# exit

Let’s apply the first and most important Network Policy: deny all ingress and egress!

kubectl apply -f https://raw.githubusercontent.com/mathieu-benoit/k8s-netpol/master/deny-all-netpol.yaml
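The manifest applied above isn’t reproduced here, but a deny-all policy is typically just an empty podSelector (so it matches every Pod in the namespace) with both policy types listed and no rules defined. A minimal sketch (the policy name is illustrative):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all          # illustrative name; the real one is in deny-all-netpol.yaml
spec:
  podSelector: {}         # empty selector = every Pod in this namespace
  policyTypes:
  - Ingress
  - Egress
  # no ingress or egress rules are defined, so all traffic is denied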

We can check that WEB isn’t publicly accessible anymore:

curl --connect-timeout 2 $(kubectl get svc web -o jsonpath='{.status.loadBalancer.ingress[0].ip}')

Let’s also rerun the two previous tests, which should fail now:

kubectl run curl-$RANDOM --image=radial/busyboxplus:curl --rm -it --generator=run-pod/v1
# curl --connect-timeout 2 https://kubernetes.io
# curl --connect-timeout 2 http://db:15984
# exit

At this stage, nothing can communicate to or from that namespace, which is exactly what we want. Now let’s be more granular and illustrate the “Least Access” and “Just Enough Access” security principles.

First, DB is accessible only from API on port 5984 and has no outbound access to anything:

kubectl apply -f https://raw.githubusercontent.com/mathieu-benoit/k8s-netpol/master/db-netpol.yaml
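The actual policy is in the linked db-netpol.yaml; a policy with that intent would look roughly like this, assuming the db Pods carry the app=db label used elsewhere in this tutorial (the name and exact rule shape are a sketch):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-netpol         # illustrative name
spec:
  podSelector:
    matchLabels:
      app: db
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: api        # only API Pods may talk to DB
    ports:
    - port: 5984          # and only on the DB Pod port
      protocol: TCP
  # no egress rules: DB cannot initiate any outbound connection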

Let’s validate that DB doesn’t have access to anything:

kubectl run curl-$RANDOM --image=radial/busyboxplus:curl --labels app=db --rm -it --generator=run-pod/v1
# curl --connect-timeout 2 http://web:80
# curl --connect-timeout 2 https://kubernetes.io
# exit

Now, API has access only to DB on port 5984 and is accessible only from WEB on port 3000:

kubectl apply -f https://raw.githubusercontent.com/mathieu-benoit/k8s-netpol/master/api-netpol.yaml

We also need one extra step here: add a label to the kube-system Namespace so the policy’s DNS egress rule can select it (Network Policies are all about Labels):

kubectl label ns kube-system name=kube-system
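Putting the two pieces together, the intent of api-netpol.yaml is roughly the following sketch: ingress only from WEB on the API Pod port 3000, and egress only to DB on 5984 plus DNS resolution in kube-system (which is why the name=kube-system label is needed). The names and exact rule shapes are assumptions, not the authoritative file content:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-netpol        # illustrative name
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: web
    ports:
    - port: 3000
      protocol: TCP
  egress:
  # allow DNS lookups so the 'db' Service name can be resolved
  - to:
    - namespaceSelector:
        matchLabels:
          name: kube-system
      podSelector:
        matchLabels:
          k8s-app: kube-dns
    ports:
    - port: 53
      protocol: UDP
    - port: 53
      protocol: TCP
  # allow traffic to the DB Pods on their port
  - to:
    - podSelector:
        matchLabels:
          app: db
    ports:
    - port: 5984
      protocol: TCP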

Let’s validate that API has access to DB, but doesn’t have access to WEB or Internet:

kubectl run curl-$RANDOM --image=radial/busyboxplus:curl --labels app=api --rm -it --generator=run-pod/v1
# curl http://db:15984
# curl --connect-timeout 2 http://web:80
# curl --connect-timeout 2 https://kubernetes.io
# exit

Finally, WEB only has access to API on port 3000 and is only accessible from the Internet on port 80:

kubectl apply -f https://raw.githubusercontent.com/mathieu-benoit/k8s-netpol/master/web-netpol.yaml
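And web-netpol.yaml would express something like the sketch below: ingress from any source on the web Pod port, and egress only to API on 3000 plus DNS. The assumption that the web container listens on port 80 comes from the Service being reachable on port 80; the actual Pod port may differ in the real manifest:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: web-netpol        # illustrative name
spec:
  podSelector:
    matchLabels:
      app: web
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - ports:                # no 'from' clause: any source, including the LoadBalancer
    - port: 80            # assumed web Pod port
      protocol: TCP
  egress:
  # DNS, so the 'api' Service name can be resolved
  - to:
    - namespaceSelector:
        matchLabels:
          name: kube-system
      podSelector:
        matchLabels:
          k8s-app: kube-dns
    ports:
    - port: 53
      protocol: UDP
    - port: 53
      protocol: TCP
  - to:
    - podSelector:
        matchLabels:
          app: api
    ports:
    - port: 3000
      protocol: TCP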

Let’s validate that WEB has access to API, but doesn’t have access to DB or Internet:

kubectl run curl-$RANDOM --image=radial/busyboxplus:curl --labels app=web --rm -it --generator=run-pod/v1
# curl http://api:8080
# curl --connect-timeout 2 https://kubernetes.io
# curl --connect-timeout 2 http://db:15984
# exit

We can check that WEB is publicly accessible again:

curl $(kubectl get svc web -o jsonpath='{.status.loadBalancer.ingress[0].ip}')

Here we are! We have secured communications between our three Pods (WEB, API, and DB) by defining only the strict minimum requirements, nothing less and nothing more.

Some gotchas:

  • It’s all about Labels on Pods and Namespaces
  • It’s not about Services or the ports they expose; it’s about the Pods’ ports
  • You can use podSelector and namespaceSelector
  • Again, the scope is per Namespace, but via the namespaceSelector for Ingress or Egress you can reference external Namespaces. With Calico, you can use GlobalNetworkPolicy to apply rules across Namespaces
  • To reach another Pod via its Service name, you need an Egress rule that allows the DNS resolver (the pods labeled k8s-app=kube-dns in the kube-system Namespace). That’s why we added the name=kube-system label to the kube-system Namespace.
  • The Network Policy engine takes the union of all the rules: Rule1 OR Rule2 OR …
  • By default, rules apply to Ingress; as soon as you add Egress rules, you need to list Egress in the policyTypes: section (see the snippet after this list)
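To make the last two gotchas concrete, here is a hypothetical snippet: the two ingress entries are independent rules that get OR-ed together, and because an egress section exists, Egress has to appear under policyTypes:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: union-example     # hypothetical, for illustration only
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
  - Ingress
  - Egress                # required because an egress section is defined below
  ingress:
  - from:                 # rule 1: allow from web Pods ...
    - podSelector:
        matchLabels:
          app: web
  - from:                 # ... OR rule 2: allow from db Pods
    - podSelector:
        matchLabels:
          app: db
  egress:
  - to:
    - podSelector: {}     # any Pod in the same namespace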


I hope you enjoyed this blog article and the associated resources, and that you’re able to leverage this as part of your own security posture with Kubernetes.

Questions or comments? Please let me know in the comments below. Cheers!