Using ArgoCD with Azure Private AKS cluster
Recently, I ran into the need to connect ArgoCD to an Azure Private AKS cluster. The reason is straightforward: the AKS cluster was needed as a deployment target cluster within ArgoCD.
An Azure Private AKS cluster is an instance of the Azure Kubernetes Service where the API server is only exposed on an RFC 1918 IP address. This ensures that any traffic to the API server stays within private networks and is never exposed to the internet at large. For further information, such as how to set it up and access it from a client machine, see the following documentation: https://docs.microsoft.com/en-us/azure/aks/private-clusters
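For reference, a private cluster can be created with the Azure CLI using the --enable-private-cluster flag. The resource group, cluster name, and region below are just placeholders; treat this as a minimal sketch, not a complete production setup.

  # minimal sketch: create a private AKS cluster (names and region are placeholders)
  az group create --name my-rg --location westeurope
  az aks create \
    --resource-group my-rg \
    --name my-private-aks \
    --enable-private-cluster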
The challenges
Of course, it would be too easy if this just worked out of the box. I faced two main challenges; your mileage may vary.
DNS lookup to AKS API
By default, Azure provides you with a DNS hostname for your API server. This name is not resolvable via public DNS. Hence, some form of name resolution forwarding is necessary. Locally, you can use the hosts file to fake the resolution, as shown below.
echo '192.168.1.10 privatelink.<region>.azmk8s.io' >> /etc/hosts
In a Kubernetes cluster, this workaround is not feasible.
ArgoCD client TLS settings
In a production environment, using the insecure directive within the cluster definition is a big NO. You do not want a man-in-the-middle intercepting your cluster traffic. Hence, the certificate authority (CA) information needs to be introduced to Argo. This would be no problem if the CA were a commonly trusted one. Sadly, Azure AKS does not hand out such a certificate but rather a self-signed one.
The solution(s)
CoreDNS forwarding to the rescue
CoreDNS is the de facto standard for in-cluster DNS resolution as well as forwarding. The latter feature will be set up below to tackle the first problem. The official CoreDNS documentation on the forward plugin is well worth reading to understand how this works.
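To illustrate the idea, here is a minimal sketch of a CoreDNS forwarding stanza. On a vanilla Kubernetes cluster this would typically go into the CoreDNS Corefile (for example via the coredns ConfigMap in kube-system); the zone name and forwarder IP are placeholders.

  # forward the private AKS zone to an internal DNS forwarder (placeholder values)
  privatelink.asdfghjkl.tld:53 {
      errors
      cache 30
      forward . 2.2.2.2
  }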
OpenShift DNS operator
Having a Kubernetes distribution that embraces the full potential of the platform is awesome. It even bites you in the four letters if you make changes directly to managed components like the one above (CoreDNS). “Why?”, I hear you ask. Everything within OpenShift is managed by an operator that ensures the declared state. So, I needed to adjust the DNS operator settings to persist the required forwarding configuration within CoreDNS.
A great article about changing the DNS operator can be found here: https://rcarrata.com/openshift/dns-forwarding-openshift/
For simplicity, I will jot down the necessary tasks for OpenShift below.
Create a DNS forwarder (a proxy or a full-blown DNS server) within your Azure subscription and VNet. This will forward your requests to the internal DNS endpoint.
Note: This is necessary as the special DNS endpoint that Azure provides to resolve internal DNS queries is not publicly reachable. https://docs.microsoft.com/en-us/azure/private-link/private-endpoint-dns
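As a rough sketch, such a forwarder could be a small VM running dnsmasq that relays the private zone to Azure's internal resolver (168.63.129.16). The zone name below is a placeholder; treat this as an illustration, not a hardened setup.

  # /etc/dnsmasq.conf on the forwarder VM (illustrative only)
  # relay queries for the private AKS zone to Azure's internal resolver
  server=/privatelink.asdfghjkl.tld/168.63.129.16
  # listen on the VM's VNet interface so cluster nodes can reach it
  listen-address=0.0.0.0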
Create a forwarding YAML manifest (e.g. openshift-forwarding.yaml) which looks something like the following snippet:
apiVersion: operator.openshift.io/v1
kind: DNS
metadata:
  name: default
spec:
  servers:
    - name: aks-private
      zones:
        - privatelink.asdfghjkl.tld
      forwardPlugin:
        upstreams:
          - 2.2.2.2:5353 # IP of your DNS forwarder from step 1
Apply the settings from the previous step:
#> oc apply -f openshift-forwarding.yaml
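To verify the forwarding from inside the cluster, you can do a quick lookup from a throwaway pod. The pod name, image, and hostname below are just examples.

  # quick in-cluster resolution check (pod name, image and hostname are examples)
  oc run dns-check --rm -it --restart=Never --image=busybox:1.36 -- \
    nslookup myaks.privatelink.asdfghjkl.tld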
ArgoCD declarative setup
The declarative setup consists of a Secret manifest with specific labels that tell Argo that this is a target cluster definition. The fields most interesting for us are the TLS client settings. For the sake of simplicity, I have included highly sensitive data within the Kubernetes Secret. Make sure that secrets (such as the bearer token below) are never checked into source code management. Instead, I recommend using a proper secrets management strategy.
apiVersion: v1
kind: Secret
metadata:
  annotations:
    managed-by: argocd.argoproj.io
  labels:
    argocd.argoproj.io/secret-type: cluster
  name: aks-eu
type: Opaque
stringData:
  config: |
    {
      "bearerToken": "eyJhbGO...",
      "tlsClientConfig": {
        "insecure": false,
        "caData": "LS0tLS1C...=="
      }
    }
  name: aks-eu
  namespaces: demo
  server: https://hexvalue.hexvalue.privatelink.asdfghjkl.tld
Note that the insecure directive is set to false, which forces TLS certificate validation. The server certificate is validated against the CA certificate provided in the caData directive.
If you are wondering where to get the required values from, check your kubeconfig context.
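As a rough guide, assuming your kubeconfig contains a cluster entry named aks-eu (adjust the name to your context), the server URL and CA data can be pulled out as shown below. How you obtain the bearer token depends on your setup, for example a dedicated service account.

  # extract the API server URL and the base64-encoded CA from kubeconfig
  # ("aks-eu" is an assumed cluster name; adjust to yours)
  kubectl config view --raw -o jsonpath='{.clusters[?(@.name=="aks-eu")].cluster.server}'
  kubectl config view --raw -o jsonpath='{.clusters[?(@.name=="aks-eu")].cluster.certificate-authority-data}'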
Summary
Getting this all stitched up was a little tricky, as we had to change fundamental infrastructure underneath Argo to get it to fly. Once DNS forwarding and the information necessary for the Argo cluster Secret were set up, things started to work immediately.