Last Updated: 2020-08-28

Helm

Helm helps you manage Kubernetes applications — Helm Charts help you define, install, and upgrade even the most complex Kubernetes application.

For a typical cloud-native application with a 3-tier architecture, the diagram below illustrates how it might be described in terms of Kubernetes objects. In this example, each tier consists of a Deployment and a Service object, and may additionally define ConfigMap or Secret objects. Each of these objects is typically defined in a separate YAML file and fed into the kubectl command line tool.
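To make this concrete, deploying such an application with plain kubectl might look like this (the directory layout is hypothetical, one folder of manifests per tier):

$ kubectl apply -f frontend/ -f backend/ -f database/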

A Helm chart encapsulates each of these YAML definitions, provides a mechanism for configuration at deploy time, and allows you to define metadata and documentation that might be useful when sharing the package. Helm can be useful in a variety of scenarios.

What you'll build

In this codelab, you're going to deploy a simple application to your Kubernetes cluster using a Helm chart.

What you'll need

Create an Account

In this step, you register for the Google Cloud Platform free trial and create a project. The free trial provides credits that you can use to complete this lab.

To register for the free trial, open the free trial registration page.

If you do not have a Gmail account, follow the steps to create one. Otherwise, log in and complete the registration form.

Read and agree to the terms of service. Click Accept and start a free trial.

Create a Project

Next, create your first project using the Google Cloud Platform Console. The project is used to complete the rest of the lab.

To create a project in the Google Cloud Platform Console, click Select a project > Create a project.

In the New Project dialog: for Project name, type whatever you like. Make a note of the Project ID in the text below the project name box; you need it later. Then click Create.

Upgrade Account (Optional)

In the upper-right corner of the console, a button will appear asking you to upgrade your account. Click Upgrade when you see it. If the Upgrade button does not appear, you may skip this step; if it appears later, click it then.

When you upgrade your account, you immediately have access to standard service quotas, which are higher than those available on the free trial.

Finalize

On the GCP Console, use the left-hand side menu to navigate to Compute Engine and ensure that there are no errors.

At the end of this lab, you may delete this project and close your billing account if desired.

Before you can use Kubernetes to deploy your application, you need a cluster of machines to deploy it to. The cluster abstracts away the details of the underlying machines.

Machines can later be added, removed, or rebooted, and containers are automatically distributed or re-distributed across whatever machines are available in the cluster. Machines within a cluster can be set to autoscale up or down to meet demand, and can be located in different zones for high availability.
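For example, on GKE you could later enable node autoscaling with a command along these lines (the flag values are illustrative; we create the devops-cluster in a later step):

$ gcloud container clusters update devops-cluster --zone europe-west1-b \
  --enable-autoscaling --min-nodes 1 --max-nodes 5 --node-pool default-pool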

Open Cloud Shell

You will do most of the work from the Google Cloud Shell, a command-line environment running in the cloud. This virtual machine is loaded with all the development tools you'll need (docker, gcloud, kubectl, and others), offers a persistent 5 GB home directory, and runs in Google Cloud, greatly enhancing network performance and authentication. Open the Google Cloud Shell by clicking on the icon in the top right of the screen:

You should see the shell prompt at the bottom of the window:

Once connected to Cloud Shell, you should see that you are already authenticated and that the project is already set to your project ID.

Run the following command in Cloud Shell to confirm that you are authenticated:

gcloud auth list

If this is the first time you are running Cloud Shell, authorize it.

You might need to run the command again after authorization. Command output:

 Credentialed Accounts
ACTIVE  ACCOUNT
*       <my_account>@<my_domain.com>

To set the active account, run:
    $ gcloud config set account `ACCOUNT`

Check that your project is set correctly:

gcloud config list project

Command output:

[core]
project = <PROJECT_ID>

If it is not, you can set it with this command:

gcloud config set project <PROJECT_ID>

Create a Cluster

Enter the following commands to create a cluster of machines.

$ export PROJECT_ID=$(gcloud config get-value project)
$ gcloud container clusters create devops-cluster --zone "europe-west1-b" \
--num-nodes 3 --machine-type=e2-medium \
--project=${PROJECT_ID} --enable-ip-alias \
--scopes=gke-default,cloud-platform

When the cluster is ready, go to the Kubernetes Engine page in the management console and you should see it.
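You can also verify it from Cloud Shell. gcloud normally fetches kubectl credentials for a newly created cluster automatically; if it didn't, run gcloud container clusters get-credentials devops-cluster --zone europe-west1-b first.

$ gcloud container clusters list
$ kubectl get nodes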


Create Example Kubernetes Manifest

Writing a Helm Chart is easier when you're starting with an existing set of Kubernetes manifests.

Create and change to the demo directory.

$ mkdir ~/demo-charts/
$ cd ~/demo-charts/

Let's create working manifests :) Create a deployment.yaml:

$ touch deployment.yaml
$ cloudshell edit deployment.yaml

And paste the following:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: example
  name: example
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
      - image: nginx:1.19.4-alpine
        imagePullPolicy: IfNotPresent
        name: nginx
        resources: {}

Same with the service.yaml. Create it:

$ touch service.yaml
$ cloudshell edit service.yaml

And paste the following:

apiVersion: v1
kind: Service
metadata:
  labels:
    app: example
  name: example
  namespace: default
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: example
  type: LoadBalancer

Deploy it into the cluster:

$ kubectl create -f ~/demo-charts/

Verify that NGINX is working (you might need to wait a minute or two):

$ NGINX_IP=$(kubectl get -o jsonpath="{.status.loadBalancer.ingress[0].ip}" svc example)
$ curl $NGINX_IP

If all goes well, you should be able to hit the provided URL and see the "Welcome to nginx!" page. We can use these manifests to bootstrap our Helm chart:

$ tree ~/demo-charts

/home/user/demo-charts
├── deployment.yaml
└── service.yaml

Before we move on we should clean up our environment:

$ kubectl delete -f ~/demo-charts

deployment.apps "example" deleted
service "example" deleted

Generate the Chart

The best way to get started with a new chart is to use the helm create command to scaffold out an example we can build on. Use this command to create a new chart named mychart in our demo directory:

$ helm create mychart

Helm will create a new directory called mychart with the structure shown below. Let's find out how it works.

$ tree mychart/

mychart/
├── charts
├── Chart.yaml
├── templates
│   ├── deployment.yaml
│   ├── _helpers.tpl
│   ├── hpa.yaml
│   ├── ingress.yaml
│   ├── NOTES.txt
│   ├── serviceaccount.yaml
│   ├── service.yaml
│   └── tests
│       └── test-connection.yaml
└── values.yaml

Helm created a number of files and directories.

As mentioned earlier, a Helm chart consists of metadata that is used to help describe what the application is, define constraints on the minimum required Kubernetes and/or Helm version and manage the version of your chart. All of this metadata lives in the Chart.yaml file. The Helm documentation describes the different fields for this file.

Edit Chart.yaml:

$ cloudshell edit mychart/Chart.yaml

And change it so that it looks like this:

apiVersion: v2
name: mychart
description: NGINX chart for Kubernetes

type: application
version: 0.1.0
appVersion: 1.19.4-alpine

Remove the currently unused *.yaml and NOTES.txt files and copy our example Kubernetes manifests to the templates folder.

$ rm ~/demo-charts/mychart/templates/*.yaml
$ rm ~/demo-charts/mychart/templates/NOTES.txt
$ cp ~/demo-charts/*.yaml ~/demo-charts/mychart/templates/

The file structure should look like this now:

$ tree mychart/

mychart/
├── charts
├── Chart.yaml
├── templates
│   ├── deployment.yaml
│   ├── _helpers.tpl
│   ├── service.yaml
│   └── tests
│       └── test-connection.yaml
└── values.yaml

Templates

The most important piece of the puzzle is the templates/ directory. This is where Helm finds the YAML definitions for your Services, Deployments, and other Kubernetes objects. We have already replaced the generated YAML files with our own. What you end up with is a working chart that can be deployed using the helm install command.

It's worth noting, however, that the directory is named templates, and Helm runs each file in this directory through a Go template rendering engine. Helm extends the template language, adding a number of utility functions for writing charts.
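To give a flavor of the template language before we use it, here are two illustrative expressions (they are not part of our chart):

{{ .Values.image.repository | default "nginx" | quote }}
{{ .Chart.Name | upper }}

The first pipes a value through the default and quote functions and would render as "nginx"; the second would render as MYCHART.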

There are a number of built-in objects we can use in our templates. Before we try them, edit the values.yaml:

$ cloudshell edit ~/demo-charts/mychart/values.yaml

And replace it with the following:

replicaCount: 1

image:
  repository: nginx
  pullPolicy: IfNotPresent

service:
  type: ClusterIP
  port: 80

resources: 
  limits:
    cpu: 100m
    memory: 128Mi
  requests:
    cpu: 100m
    memory: 128Mi

Now we can use a number of built-in objects to customize our manifests. First, open the service.yaml:

$ cloudshell edit ~/demo-charts/mychart/templates/service.yaml

As we now have some parameters inside our values.yaml file, we can use the built-in .Values object to access them in the template. The .Values object is a key element of Helm charts, used to expose configuration that can be set at the time of deployment (the defaults for this object are defined in values.yaml). The use of templating can greatly reduce boilerplate and simplify your definitions. Add the .Values.service.port and .Values.service.type variables to our template:

apiVersion: v1
kind: Service
metadata:
  labels:
    app: example
  name: example
  namespace: default
spec:
  ports:
  - port: {{ .Values.service.port }}
    protocol: TCP
    targetPort: 80
  selector:
    app: example
  type: {{ .Values.service.type }}

To see if the variables are working, we can run helm template, which runs the templating engine and generates the manifests. By default it renders every file in the templates folder; to limit it to a specific file, we can use the -s flag:

$ helm template mychart/ -s templates/service.yaml

---
# Source: mychart/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: example
  name: example
  namespace: default
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: example
  type: ClusterIP

You should see the port and type filled in with the values from values.yaml.

Next, our service has its name hard-coded to "example". If we use Helm to deploy this chart multiple times in the same namespace, we will overwrite this configuration. To make every object unique during a helm install, we can make use of partials defined in _helpers.tpl, as well as functions like replace. The Helm documentation has a deeper walkthrough of the templating language, explaining how functions, partials, and flow control can be used when developing your chart. We can include a partial called mychart.fullname, which generates a name composed of the release name and the chart name. To add it, modify the service.yaml:

apiVersion: v1
kind: Service
metadata:
  labels:
    app: example
  name: {{ include "mychart.fullname" . }}
  namespace: default
spec:
  ports:
  - port: {{ .Values.service.port }}
    protocol: TCP
    targetPort: 80
  selector:
    app: example
  type: {{ .Values.service.type }}

Once again, we have hard-coded label selectors (and labels, for that matter), which will cause problems if we install the chart multiple times. To fix this, we can use the built-in Release object, which contains information about the current release, and use its name as a label and selector (we'll do the same in the deployment later).

apiVersion: v1
kind: Service
metadata:
  labels:
    app: {{ .Release.Name }}
  name: {{ include "mychart.fullname" . }}
  namespace: default
spec:
  ports:
  - port: {{ .Values.service.port }}
    protocol: TCP
    targetPort: 80
  selector:
    app: {{ .Release.Name }}
  type: {{ .Values.service.type }}

If we run helm template again, we'll see that the variable has been replaced with the RELEASE-NAME placeholder.

$ helm template mychart/ -s templates/service.yaml

---
# Source: mychart/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: RELEASE-NAME
  name: RELEASE-NAME-mychart
  namespace: default
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: RELEASE-NAME
  type: ClusterIP

This happens because the release name is chosen (or generated) during installation and is not available to helm template. We could work around this by passing a release name to helm template, but instead let's do a dry run of helm install with debug output enabled to inspect the generated definitions:

$ helm install --dry-run --debug test ./mychart

NAME: test
(...)
COMPUTED VALUES: ...
(...)
---
# Source: mychart/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: test
  name: test-mychart
  namespace: default
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: test
  type: ClusterIP
---
(...)

Also, if a user of your chart wanted to change the default configuration, they could provide overrides directly on the command line:

$ helm install --dry-run --debug test ./mychart --set service.port=8080

(...)
---
# Source: mychart/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: test
  name: test-mychart
  namespace: default
spec:
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 80
  selector:
    app: test
  type: ClusterIP
---
(...)

For more advanced configuration, a user can specify a YAML file containing overrides with the --values option, as in the sketch below.
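For example, with a hypothetical overrides file named my-values.yaml:

service:
  type: LoadBalancer
  port: 8080

$ helm install --dry-run --debug test ./mychart --values my-values.yaml

Values in the file are merged over the chart's defaults, and --set flags take precedence over both.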

Next, let's modify our deployment to use the rest of the variables, and to showcase additional functions and pipelines. We'll also use the .Chart object, which provides chart metadata such as the name and version to your definitions.

Edit the deployment.yaml:

$ cloudshell edit ~/demo-charts/mychart/templates/deployment.yaml

And edit it to look like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: {{ .Release.Name }}
  name: {{ include "mychart.fullname" . }}
  namespace: default
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}
    spec:
      containers:
      - image: "{{ .Values.image.repository }}:{{ .Chart.AppVersion }}"
        imagePullPolicy: {{ .Values.image.pullPolicy }}
        name: {{ .Chart.Name }}
        resources:
          {{- toYaml .Values.resources | nindent 10 }}

Run the dry run again and check that the templating is working:

$ helm install --dry-run --debug test ./mychart
(...)
# Source: mychart/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: test
  name: test-mychart
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
      - image: "nginx:1.19.4-alpine"
        imagePullPolicy: IfNotPresent
        name: mychart
        resources:
          limits:
            cpu: 100m
            memory: 128Mi
          requests:
            cpu: 100m
            memory: 128Mi

Documentation

Another useful file in the templates/ directory is the NOTES.txt file. This is a templated, plaintext file that gets printed out after the chart is successfully deployed. As we'll see when we deploy our first chart, this is a useful place to briefly describe the next steps for using a chart. Since NOTES.txt is run through the template engine, you can use templating to print out working commands for obtaining an IP address, or getting a password from a Secret object.

Let's create a new NOTES.txt:

$ touch mychart/templates/NOTES.txt
$ cloudshell edit mychart/templates/NOTES.txt

And paste the following:

Get the application URL by running these commands:
{{- if contains "LoadBalancer" .Values.service.type }}
  It may take a few minutes for the LoadBalancer IP to be available.
  export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ include "mychart.fullname" . }} --template "{{"{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}"}}")
  curl http://$SERVICE_IP:{{ .Values.service.port }}
{{- else if contains "ClusterIP" .Values.service.type }}
  export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
  echo "Visit http://127.0.0.1:8080 to use your application"
  kubectl --namespace {{ .Release.Namespace }} port-forward $POD_NAME 8080:80
{{- else }}
  Please use either LoadBalancer or ClusterIP Service type.
{{- end }}
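Before installing, it's a good habit to validate the chart; helm lint checks it for malformed YAML and common issues:

$ helm lint ./mychart

If the chart is well-formed, the output should end with a line like "1 chart(s) linted, 0 chart(s) failed".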

Install the application

The chart you created in the previous step is set up to run an NGINX server exposed via a Kubernetes Service. By default, the chart will create a ClusterIP type Service, so NGINX will only be exposed internally in the cluster. To access it externally, we'll use the LoadBalancer type instead. We can also set the name of the Helm release so we can easily refer back to it. Let's go ahead and deploy our NGINX chart using the helm install command:

$ helm install example ./mychart --set service.type=LoadBalancer

NAME: example
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
Get the application URL by running these commands:
  It may take a few minutes for the LoadBalancer IP to be available.
  export SERVICE_IP=$(kubectl get svc --namespace default example-mychart --template "{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}")
  curl http://$SERVICE_IP:80

The output of helm install displays a handy summary of the state of the release, what objects were created, and the rendered NOTES.txt file explaining what to do next. Run the commands from the output to get a URL for the NGINX service and curl it (you may need to wait a minute or two for the LoadBalancer to get its IP address).

$ export SERVICE_IP=$(kubectl get svc --namespace default example-mychart --template "{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}")
$ curl http://$SERVICE_IP:80/

If all went well, you should see the HTML output of a NGINX welcome page. Congratulations! You've just deployed your very first service packaged as a Helm chart!

Let's check what resources we have currently deployed:

$ helm get manifest example | kubectl get -f -

NAME                      TYPE           CLUSTER-IP   EXTERNAL-IP     PORT(S)        AGE
service/example-mychart   LoadBalancer   10.4.x.xxx   xx.xxx.xxx.xx   80:xxxxx/TCP   2m47s

NAME                              READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/example-mychart   1/1     1            1           2m47s
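You can also ask Helm itself about the release; helm status reprints the summary (status, revision, and notes) that helm install showed:

$ helm status example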

Package

So far, we've been using the helm install command to install a local, unpacked chart. However, if you want to share your charts with your team or with customers, they will typically install the charts from a tar package. We can use helm package to create one:

$ helm package ./mychart

Helm will create a mychart-0.1.0.tgz package in our working directory, using the name and version from the metadata defined in the Chart.yaml file. A user can install from this package instead of a local directory by passing the package as the parameter to helm install.

$ helm install example2 mychart-0.1.0.tgz --set service.type=LoadBalancer
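A consumer who receives such a package can inspect it before installing it; helm show reads the metadata and default values straight from the archive:

$ helm show chart mychart-0.1.0.tgz
$ helm show values mychart-0.1.0.tgz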

Repositories

In order to make it much easier to share packages, Helm has built-in support for installing packages from an HTTP server. Helm reads a repository index hosted on the server which describes what chart packages are available and where they are located.
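The index is a YAML file (index.yaml) that lists every chart name, the available versions, and the download URL for each package. If you were hosting charts on a plain static file server, you would generate the index yourself (this command is not needed in this lab, since the ChartMuseum server below maintains the index automatically):

$ helm repo index .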

We can use ChartMuseum, an open-source Helm repository server, to run a local repository that serves our chart. It can use different storage backends (S3, GCS, etc.), but we'll use local storage as an example. First, install ChartMuseum:

$ curl -LO https://s3.amazonaws.com/chartmuseum/release/latest/bin/linux/amd64/chartmuseum
$ chmod +x ./chartmuseum
$ sudo mv ./chartmuseum /usr/local/bin

Next, run the following command to launch the server in the current shell:

$ chartmuseum --port=8080 \
  --storage="local" \
  --storage-local-rootdir="./chartstorage"

Now you can upload the chart to the repository. Open a new Cloud Shell tab:

$ cd ~/demo-charts
$ curl --data-binary "@mychart-0.1.0.tgz" http://localhost:8080/api/charts

{"saved":true}

To install the chart from the repository, we need to add the repository to Helm's repository list and update the index:

$ helm repo add local http://localhost:8080
$ helm repo update

Now you should be able to see your chart in the local repository and install it from there:

$ helm search repo local
NAME            CHART VERSION   APP VERSION     DESCRIPTION
local/mychart   0.1.0           1.19.4-alpine   NGINX chart for Kubernetes

$ helm install example3 local/mychart --set service.type=LoadBalancer

To set up a remote repository you can follow the guide in the Helm documentation.

Dependencies

As the applications that you're packaging as charts increase in complexity, you might find you need to pull in a dependency such as a database. Helm allows you to specify sub-charts that will be created as part of the same release. To define a dependency, edit the Chart.yaml file in the chart root directory:

$ cloudshell edit mychart/Chart.yaml

And add the following:

dependencies:
- name: mariadb
  version: 9.1.4
  repository: https://charts.bitnami.com/bitnami

Much like a runtime language dependency file (such as Python's requirements.txt), the dependencies list in the Chart.yaml file allows you to manage your chart's dependencies and their versions. When updating dependencies, a lockfile (Chart.lock) is generated so that subsequent fetches of dependencies use a known, working version. Run the following command to pull in the MariaDB dependency we defined:

$ helm dep update ./mychart

...
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈Happy Helming!⎈
Saving 1 charts
Downloading mariadb from repo https://charts.bitnami.com/bitnami
Deleting outdated charts

Helm has found a matching version in the bitnami repository and has fetched it into the chart's charts/ sub-directory.

$ ls ./mychart/charts

mariadb-9.1.4.tgz
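You can also ask Helm to summarize the dependency status, which compares Chart.yaml, the generated Chart.lock, and the contents of charts/; expect output along these lines:

$ helm dependency list ./mychart

NAME     VERSION  REPOSITORY                           STATUS
mariadb  9.1.4    https://charts.bitnami.com/bitnami   ok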

Now when we install the chart, we'll see that MariaDB's objects are created too. We can also pass value overrides for a dependency the same way we pass overrides for our own chart, prefixing the key with the dependency name:

$ helm install example4 ./mychart --set service.type=LoadBalancer --set mariadb.auth.rootPassword=verysecret
$ helm get manifest example4 | kubectl get -f -

Check that the password override worked:

$ echo `kubectl get secret example4-mariadb -o=jsonpath="{.data.mariadb-root-password}" | base64 -d`

Delete a Release

To list all the installed releases you can run:

$ helm ls

Now, you can delete every release by name:

for name in example example2 example3 example4
do
  helm uninstall $name
done

Delete the cluster

$ gcloud container clusters delete devops-cluster --zone europe-west1-b

When the cluster is removed, delete the demo folder.

$ rm -r ~/demo-charts

Delete the local Helm repository:

$ helm repo remove local

Finally, check if there are any persistent disks left and delete them:

$ gcloud compute disks list --format="json" | jq -r '.[].name' | \
  grep "devops-cluster" | xargs gcloud compute disks delete --zone europe-west1-b

Thank you! :)