Deploying an Azure Kubernetes cluster with Portworx

In this post we illustrate how to deploy Portworx on Kubernetes on Azure using acs-engine.

Requirements

  • Bash
  • Azure CLI 2.0; you can install it here, or use the Azure Cloud Shell in your browser (I am currently using azure-cli 2.0.23)
  • jq for parsing JSON responses in Bash
  • An Azure subscription where you are the owner, or have a Service Principal with at least the Contributor role on the subscription
  • SSH keys on your local machine
  • VSCode for file editing
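
A quick way to confirm the prerequisites are in place (a small sanity check; the last line assumes the default id_rsa key path used later in this post):

     az --version | head -n 1   # Azure CLI version
     jq --version               # jq is installed
     ls ~/.ssh/id_rsa.pub       # SSH public key exists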

Setting up the environment

  1. Download the latest version of acs-engine to your computer. At the time of writing this article, I am using v0.11.0 from their list of releases. If you are using Mac OS X, you can type the following commands in your terminal:

     mkdir portworx-acs
     cd portworx-acs
     curl -L https://github.com/Azure/acs-engine/releases/download/v0.11.0/acs-engine-v0.11.0-darwin-amd64.tar.gz | tar zx
     cp acs-engine-v0.11.0-darwin-amd64/acs-engine .
     chmod +x acs-engine
     rm -rf acs-engine-v0.11.0-darwin-amd64/
    

    In case you would like to check, the MD5 sum of the binary is: 53371680926b19e74bef391be6cf2037.
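
    On macOS you can compute it like this to compare (Linux users would use md5sum instead):

     md5 acs-engine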

  2. Generate a Service Principal for your deployment. You can use my getAzureServicePrincipal.sh script to obtain it:

     curl -O https://gist.githubusercontent.com/brusMX/bab2224dc1b0c26d3aef4799cb97c045/raw/bf05884b5aeca9ae0c455af3ce0e695ec372cccc/getAzureServicePrincipal.sh
     chmod +x getAzureServicePrincipal.sh
     ./getAzureServicePrincipal.sh
    

    You will be using AZURE_CLIENT_ID and AZURE_CLIENT_SECRET in the next step, so keep them close. Also, remember that if you create a Service Principal for a subscription that was not the default one, you will have to run the following command:

     az account set -s <<CHOSEN SUBSCRIPTION ID>>
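
    To find the subscription ID for that command, you can first list the subscriptions tied to your account:

     az account list --query "[].{name:name, id:id, default:isDefault}" -o table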
    

    Also, you can copy and paste the variables suggested by the previous script so you can use them later as input to the JSON file we are going to deploy.
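
    For example (the values below are placeholders; use the ones printed by the script):

     export AZURE_CLIENT_ID=<the AZURE_CLIENT_ID value printed by the script>
     export AZURE_CLIENT_SECRET=<the AZURE_CLIENT_SECRET value printed by the script>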

  3. Set up environment variables that will define where the cluster will be located:

    1. Choose a location (you can choose any valid location; see the quick check after this list):

       export LOC=southcentralus
      
    2. Name your Resource Group

       export RG_NAME=$(mktemp portworx-XXXXXXX)
      
    3. Generate a unique DNS name for your cluster

       export DNS_NAME=$(mktemp portworx-XXXXXXX)
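
    As a quick check, you can list the valid location names and confirm the three variables are set:

     az account list-locations --query "[].name" -o tsv
     echo "LOC=$LOC RG_NAME=$RG_NAME DNS_NAME=$DNS_NAME"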
      
  4. Create the resource group where everything will be deployed (if something goes wrong, you can always delete the whole RG and start again, as shown below)

     az group create --name "${RG_NAME}" --location $LOC
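
    If a deployment goes wrong later, deleting the resource group removes everything in it so you can start over:

     az group delete --name "${RG_NAME}" --yes --no-wait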
    
  5. Create the JSON file that will be used to deploy the cluster. If you have all the right environment variables set up, you can copy and paste the following command:

     cat <<EOT > portworx-cluster.json
     {
         "apiVersion": "vlabs",
         "properties": {
             "orchestratorProfile": {
                 "orchestratorType": "Kubernetes",
                 "orchestratorVersion": "1.7.9",
                 "kubernetesConfig": {
                     "enableRbac": true
                 }
             },
             "masterProfile": {
                 "count": 1,
                 "dnsPrefix": "`echo $DNS_NAME`",
                 "vmSize": "Standard_D2_v2"
             },
             "agentPoolProfiles": [
                 {
                     "name": "agent",
                     "count": 3,
                     "vmSize": "Standard_D2_v2",
                     "storageProfile": "ManagedDisks",
                     "diskSizesGB": [128, 128, 128, 128],
                     "availabilityProfile": "AvailabilitySet"
                 }
             ],
             "linuxProfile": {
                 "adminUsername": "`echo $USERNAME`",
                 "ssh": {
                     "publicKeys": [
                         {
                             "keyData": "`cat $HOME/.ssh/id_rsa.pub`"
                         }
                     ]
                 }
             },
             "servicePrincipalProfile": {
                 "clientId": "`echo $AZURE_CLIENT_ID`",
                 "secret": "`echo $AZURE_CLIENT_SECRET`"
             }
         }
     }
     EOT
    
  6. Make sure that the fields dnsPrefix, adminUsername, keyData, clientId and secret were populated. If not, open the file with VSCode (code portworx-cluster.json) and update it accordingly (a quick jq spot-check follows this list):

    1. Make sure to use a unique name for dnsPrefix.
    2. Paste the content of your public SSH key into keyData.
    3. Put the value of AZURE_CLIENT_ID into clientId.
    4. Update the value of secret with AZURE_CLIENT_SECRET.
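
    A quick way to spot-check the generated file is to print a few of those fields with jq (an empty or null value means the corresponding environment variable was not set):

     jq '.properties.masterProfile.dnsPrefix, .properties.linuxProfile.adminUsername, .properties.servicePrincipalProfile.clientId' portworx-cluster.json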

Deploying the cluster

Now that we have all our files in order, we can proceed to use ACS-Engine to create the artifacts of our cluster and deploy them.

  1. Create the artifacts with acs-engine. You will see a folder called _output after you run the following command:

     ./acs-engine generate portworx-cluster.json
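
    You can list the generated folder to confirm the ARM template and its parameters file are there:

     ls _output/${DNS_NAME}/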
    
  2. Deploy your cluster to Azure with the CLI

     az group deployment create --resource-group="${RG_NAME}" --template-file="_output/${DNS_NAME}/azuredeploy.json" --name="${DNS_NAME}" --parameters @_output/${DNS_NAME}/azuredeploy.parameters.json
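
    The deployment can take several minutes. While you wait, you can poll its provisioning state using the same names as above:

     az group deployment show --resource-group "${RG_NAME}" --name "${DNS_NAME}" --query properties.provisioningState -o tsv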
    

Connect to your cluster and test it out

After the cluster has been successfully deployed, we can obtain our kubeconfig file and start interacting with it.

  1. Obtain the kubeconfig file

     scp -o StrictHostKeyChecking=no $USERNAME@$DNS_NAME.$LOC.cloudapp.azure.com:/home/$USERNAME/.kube/config config-$DNS_NAME
    
  2. Place this file in your kube config directory. For this case I will keep it as an extra file so you don't have to worry about your previous k8s config files.

     mkdir -p ~/.kube
     mv config-$DNS_NAME ~/.kube
     export KUBECONFIG=~/.kube/config-$DNS_NAME
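
    To double-check that kubectl now points at the new cluster, you can inspect the active context:

     kubectl config current-context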
    
  3. Confirm that your cluster is up. You can run the following commands and make sure that everything is working the way it should:

     kubectl cluster-info
    
     Kubernetes master is running at https://k8scluster.southcentralus.cloudapp.azure.com
     Heapster is running at https://k8scluster.southcentralus.cloudapp.azure.com/api/v1/namespaces/kube-system/services/heapster/proxy
     KubeDNS is running at https://k8scluster.southcentralus.cloudapp.azure.com/api/v1/namespaces/kube-system/services/kube-dns/proxy
     kubernetes-dashboard is running at https://k8scluster.southcentralus.cloudapp.azure.com/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy
     tiller-deploy is running at https://k8scluster.southcentralus.cloudapp.azure.com/api/v1/namespaces/kube-system/services/tiller-deploy/proxy
    
     kubectl get nodes
    
     NAME                    STATUS    AGE       VERSION
     k8s-agent-25202797-0    Ready     44s       v1.7.9
     k8s-master-25202797-0   Ready     52s       v1.7.9
    

Deploying Portworx

  1. Download the Portworx spec and apply it to the cluster (in the URL below, c is the cluster name, k is the key-value store endpoint, and kbver is the Kubernetes version):

     VER=$(kubectl version --short | awk -Fv '/Server Version: /{print $3}')
     curl -o px-spec.yaml "http://install.portworx.com?c=$DNS_NAME&k=etcd://10.240.0.4:2379&kbver=$VER"
     kubectl apply -f px-spec.yaml
    
     # Monitor the portworx pods
    
     kubectl get pods -o wide -n kube-system -l name=portworx
    
    
     # Monitor Portworx cluster status
    
     PX_POD=$(kubectl get pods -l name=portworx -n kube-system -o jsonpath='{.items[0].metadata.name}')
     kubectl exec $PX_POD -n kube-system -- /opt/pwx/bin/pxctl status
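
    The pods can take a few minutes to reach the Running state. If you prefer to watch them instead of re-running the command above, the same label selector works with --watch:

     kubectl get pods -n kube-system -l name=portworx --watch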
    

Adding a new PVC