Last Updated: 2020-06-06

Docker

Docker is a tool that allows developers, sysadmins, and others to easily deploy their applications in a sandbox (called a container) to run on the host operating system, i.e. Linux. The key benefit of Docker is that it allows users to package an application with all of its dependencies into a standardized unit for software development. Unlike virtual machines, containers have very little overhead and hence enable more efficient usage of the underlying system and its resources.

Containers

The industry standard today is to use Virtual Machines (VMs) to run software applications. VMs run applications inside a guest Operating System, which runs on virtual hardware powered by the server's host OS.

VMs are great at providing full process isolation for applications: there are very few ways a problem in the host operating system can affect the software running in the guest operating system, and vice-versa. But this isolation comes at great cost — the computational overhead spent virtualizing hardware for a guest OS to use is substantial.

Containers take a different approach: by leveraging the low-level mechanics of the host operating system, containers provide most of the isolation of virtual machines at a fraction of the computing power.

Containers offer a logical packaging mechanism in which applications can be abstracted from the environment in which they actually run. This decoupling allows container-based applications to be deployed easily and consistently, regardless of whether the target environment is a private data center, the public cloud, or even a developer's personal PC. This gives developers the ability to create predictable environments that are isolated from the rest of the applications and can be run anywhere.

From an operations standpoint, containers offer more than portability: they also give you more granular control over resources, improving the efficiency of your infrastructure and the utilization of your compute resources.
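For example, docker run lets you cap a container's memory and CPU straight from the command line (the limits below are arbitrary, for illustration only):

$ docker run --memory=256m --cpus=0.5 busybox echo "resource-capped hello"
resource-capped hello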

Terminology

Before we go further, let's clarify some terminology that is used frequently in the Docker ecosystem.

Images - The blueprints of our application, which form the basis of containers.
Containers - Created from Docker images, these run the actual application.
Docker Daemon - The background service running on the host that manages building, running and distributing Docker containers.
Docker Client - The command-line tool that allows the user to interact with the daemon.
Docker Hub - A registry of Docker images. You can think of the registry as a directory of all available Docker images.

What you'll build

In this lab you'll get hands-on experience building and deploying your own webapps on the cloud. We'll be using Google Cloud Platform to deploy a webapp on Cloud Run.

What you'll need

Prior experience in developing web applications will be helpful but is not required. As we proceed further along the lab, we'll make use of a few cloud services.

Install Docker

The getting started guide on Docker has detailed instructions for setting up Docker on Mac, Linux and Windows.

Once you are done installing Docker, test your Docker installation by running the following:

$ docker run hello-world

Hello from Docker!
This message shows that your installation appears to be working correctly.

Images

Now that we have everything set up, it's time to run some containers. In this section, we are going to run a Busybox container on our system and experiment with the docker run command.

To get started, let's run the following in our terminal:

$ docker pull busybox

The pull command fetches the busybox image from the Docker registry and saves it to our system. You can use the docker images command to see a list of all images on your system:

$ docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
busybox             latest              018c9d7b792b        3 weeks ago         1.22MB

Docker Run

Let's now run a Docker container based on this image. To do that we are going to use the almighty docker run command.

$ docker run busybox
$

But nothing happened?! Did we do something wrong? Well, no. Behind the scenes, a lot of stuff happened. When you call run, the Docker client finds the image (busybox in this case), loads up the container and then runs a command in that container. When we ran docker run busybox, we didn't provide a command, so the container booted up, ran the image's default command (sh, which exits immediately when nothing is attached to it) and then exited. Let's try fixing that:

$ docker run busybox echo "hello from busybox"
hello from busybox

In this case, the Docker client ran the echo command in our busybox container and then exited it. As you may have noticed, all of that happened pretty quickly. Imagine booting up a virtual machine, running a command and then killing it. Now you know why they say containers are fast! Ok, now it's time to see the docker ps command. The docker ps command shows you all containers that are currently running:

$ docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES

Since no containers are running, we see only the header row. Let's try a more useful variant - docker ps -a:

$ docker ps -a
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS                      PORTS               NAMES
d5726be4301e        busybox             "echo 'hello from bu..."   4 seconds ago       Exited (0) 2 seconds ago                        competent_bouman
3f8337b589df        busybox             "sh"                     4 minutes ago       Exited (0) 4 minutes ago                        quirky_carson
cfc1323e5d9a        hello-world         "/hello"                 17 minutes ago      Exited (0) 17 minutes ago                       hungry_shamir

So what we see above is a list of all containers that we ran. Do notice that the STATUS column shows that these containers exited a few minutes ago.

You're probably wondering if there is a way to run more than just one command in a container. Let's try that now:

$ docker run -it busybox sh
/ # ls
bin   dev   etc   home  proc  root  sys   tmp   usr   var
/ # uptime
 06:52:28 up 49 min,  0 users,  load average: 0.73, 0.76, 0.80

Running the run command with the -it flags attaches us to an interactive tty in the container. Now we can run as many commands in the container as we want. Take some time to run your favorite commands.

The docker run command is the one you'll most likely use most often, so it makes sense to spend some time getting comfortable with it. To find out more about run, use docker run --help to see a list of all flags it supports. As we proceed further, we'll see a few more variants of docker run.
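One variant worth trying right away is the --name flag, which assigns a container a human-friendly name of your choosing instead of an auto-generated one (like competent_bouman above):

$ docker run --name my-busybox busybox echo "hello again"
hello again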

Removing containers

We saw before that we can still see remnants of a container even after we've exited it, by running docker ps -a. If you execute docker run multiple times, it will leave stray containers behind and eat up disk space. Hence, it is advised to clean up containers once you're done with them. To do that, you can run the docker rm command. Just copy the container IDs from docker ps -a and paste them alongside the command. The following command is just an example; the container IDs will be different on your system.

$ docker rm cfc1323e5d9a 3f8337b589df
cfc1323e5d9a
3f8337b589df

On deletion, you should see the IDs echoed back to you. If you have a bunch of containers to delete in one go, copy-pasting IDs can be tedious. In that case, you can simply run:

$ docker rm $(docker ps -a -q -f status=exited)

This command deletes all containers that have a status of exited. In case you're wondering, the -q flag returns only the container IDs and -f filters output based on the conditions provided. One last thing that'll be useful is the --rm flag, which can be passed to docker run and automatically deletes the container once it has exited. For one-off docker run invocations, --rm is very useful.
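To see --rm in action, run a one-off command and then confirm that no exited container was left behind:

$ docker run --rm busybox echo "gone without a trace"
gone without a trace

A subsequent docker ps -a will show no new entry for this run.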

In recent versions of Docker, the docker container prune command can be used to achieve the same effect.

$ docker container prune
WARNING! This will remove all stopped containers.
Are you sure you want to continue? [y/N] y
Deleted Containers:
7dd2ac008db192d57bec99b403e87f724f25fb0a67e08b0cb7ff541e1f475838
d5726be4301e3f455b7bcb91a0b46471e7cc67793e068f30ef4608f9d403cd19
3f8337b589df72efa417abc083c65137dd29215a07b1e5031c0cd0c52779c8c5
cfc1323e5d9aa7efb3b2b92cb517feb52c06122c017c0b2c70f0f1fb71b1d4b7

Total reclaimed space: 15B

Lastly, you can also delete images that you no longer need by running docker rmi.
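For example, once you no longer need the busybox image (and after removing any containers based on it), you can reclaim its space with:

$ docker rmi busybox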

We've looked at images before, but in this section we'll dive deeper into what Docker images are and build our own image! Lastly, we'll also use that image to run our application locally and finally deploy it on Google Cloud! Let's get started.

Images basics

Docker images are the basis of containers. In the previous example, we pulled the Busybox image from the registry and asked the Docker client to run a container based on that image. To see the list of images that are available locally, use the docker images command. Output may vary depending on what images you might already have.

$ docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
busybox             latest              018c9d7b792b        3 weeks ago         1.22MB
bitnami/mongodb     latest              0854467d36b7        5 weeks ago         410MB
hello-world         latest              bf756fb1ae65        7 months ago        13.3kB

The above gives a list of images that you've pulled from the registry, along with any that you've created yourself (we'll shortly see how). The TAG refers to a particular snapshot of the image and the IMAGE ID is the corresponding unique identifier for that image.

For simplicity, you can think of an image registry as akin to a git repository - images can be committed with changes and have multiple versions. If you don't provide a specific version number, the client defaults to latest. For example, you can pull a specific version of the ubuntu image:

$ docker pull ubuntu:20.04

To get a new Docker image you can either get it from a registry (such as the Docker Hub) or create your own. There are tens of thousands of images available on Docker Hub. You can also search for images directly from the command line using docker search.
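For example, to look for Nginx images without leaving the terminal (the output lists image names, descriptions, star counts and whether each image is official):

$ docker search nginx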

An important distinction to be aware of when it comes to images is the difference between base and child images. Base images have no parent image; these are usually images with an OS, like ubuntu, busybox or debian. Child images build on base images and add additional functionality.

Then there are official and user images, which can be both base and child images. Official images are maintained and supported by Docker and are typically one word long (like busybox above). User images are created and shared by users like you and me.
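The naming convention reflects this distinction: official images go by a single name, while user images are prefixed with the owner's Docker Hub username. For example:

$ docker pull ubuntu
$ docker pull your_username/example

(The second command assumes an image published under your own username; we'll build and publish exactly such an image shortly.)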

Dockerfile

A Dockerfile is a simple text file that contains a list of commands that the Docker client calls while creating an image. It's a simple way to automate the image creation process. The best part is that the commands you write in a Dockerfile are almost identical to their equivalent Linux commands. This means you don't really have to learn new syntax to create your own Dockerfiles.

Since we're doing this for the first time, we'll create a Dockerfile from scratch. To start, create a new empty directory that we'll use to build the image, and change into it.

$ mkdir docker-example
$ cd docker-example

Next, create a blank file in your favorite text editor, save it in the folder created previously, and name it server.js. Copy the following code into the file:

const http = require('http');

const server = http.createServer((req, res) => {
  const ip = (req.headers['x-forwarded-for'] || '').split(',').pop()
    || req.connection.remoteAddress
    || req.socket.remoteAddress
    || req.connection.socket.remoteAddress;
  res.writeHead(200, {"Content-Type": "text/plain"});
  res.end(`Hello, ${ip}!\n`);
});

server.listen(8080);

console.log("Server running at http://127.0.0.1:8080/");

process.on('SIGINT', function() {
  process.exit();
});

It is a simple application that echoes "Hello, <ip>".
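If you have Node.js installed locally, you can optionally sanity-check the app before containerizing it (entirely optional; the container will bring its own Node.js runtime):

$ node server.js
Server running at http://127.0.0.1:8080/

Then, from another terminal, run curl http://127.0.0.1:8080/ and you should be greeted with your client address. Stop the server with Ctrl+C.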

Finally, create a new blank file and save it in the same folder as the NodeJS app by the name of Dockerfile.

We start with specifying our base image. Use the FROM keyword to do that -

FROM node:8-alpine

The next step usually is to write the commands of copying the files and installing the dependencies. First, we set a working directory and then copy all the files for our app.

# set a directory for the app
WORKDIR /usr/src/app

# copy app to the container
COPY server.js .

Now that we have the file(s) in place, the next step would usually be to install the dependencies; since this app is so simple, it has none to install.

The next thing we need to specify is the port number to expose. Since our NodeJS app listens on port 8080, that's what we'll indicate.

EXPOSE 8080

The last step is to write the command for running the application, which is simply - node ./server.js. We use the CMD command to do that -

CMD ["node", "server.js"]

The primary purpose of CMD is to tell the container which command it should run when it is started. With that, our Dockerfile is now ready. This is how it looks -

FROM node:8-alpine

# set a directory for the app
WORKDIR /usr/src/app

# copy app to the container
COPY server.js .

# define the port number the container should expose
EXPOSE 8080

# run the command
CMD ["node", "server.js"]

Now that we have our Dockerfile, we can build our image. The docker build command does the heavy-lifting of creating a Docker image from a Dockerfile.

The section below shows the output of running the build. Before you run the command yourself (don't forget the period), make sure to replace the placeholder your_username with yours. This username should be the same one you created when you registered on Docker Hub. If you haven't done that yet, please go ahead and create an account. The docker build command is quite simple - it takes an optional tag name with -t and the location of the directory containing the Dockerfile (the period means the current directory).

$ docker build -t your_username/example .
Sending build context to Docker daemon  3.072kB
Step 1/5 : FROM node:8-alpine
8-alpine: Pulling from library/node
e6b0cf9c0882: Pull complete 
93f9cf0467ca: Pull complete 
a564402f98da: Pull complete 
b68680f1d28f: Pull complete 
Digest: sha256:38f7bf07ffd72ac612ec8c829cb20ad416518dbb679768d7733c93175453f4d4
Status: Downloaded newer image for node:8-alpine
 ---> 2b8fcdc6230a
Step 2/5 : WORKDIR /usr/src/app
 ---> Running in 7cf3ce9fea22
Removing intermediate container 7cf3ce9fea22
 ---> caa4eacc9395
Step 3/5 : COPY server.js .
 ---> 3e1f6e358719
Step 4/5 : EXPOSE 8080
 ---> Running in 499dbd48a716
Removing intermediate container 499dbd48a716
 ---> 80b163503191
Step 5/5 : CMD ["node", "server.js"]
 ---> Running in f7cdf9a7dcf6
Removing intermediate container f7cdf9a7dcf6
 ---> a06f4765d655
Successfully built a06f4765d655

If you don't have the node:8-alpine image, the client will first pull that image and then create yours. If everything went well, your image should be ready! Run docker images and see if your image shows up.

The last step in this section is to run the image and see if it actually works (replacing placeholder username with yours).

$ docker run --rm -p 8080:8080 your_username/example
Server running at http://127.0.0.1:8080/

In the command we just ran, the port number after the colon (:) is the port the server listens on inside the container, and the port before the colon is the port exposed externally on the host. Head over to http://localhost:8080, where your app should be live.
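You can also test it from another terminal (the exact address echoed back will vary; requests from the host typically appear to the container to come from Docker's bridge gateway, often 172.17.0.1):

$ curl http://localhost:8080
Hello, 172.17.0.1!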

Docker push

The next thing we can do is to publish our image on a registry which can be accessed through the Internet. There are many different Docker registries you can use (you can even host your own). For now, let's use Docker Hub to publish the image.

If this is the first time you are pushing an image, the client will ask you to login. Provide the same credentials that you used for logging into Docker Hub.

$ docker login
Login with your Docker ID to push and pull images from Docker Hub. If you don't have a Docker ID, head over to https://hub.docker.com to create one.
Username: your_username
Password: 
WARNING! Your password will be stored unencrypted in /home/your_username/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded

To publish, just type the command below, remembering to replace the image tag with yours. It is important to use the format your_username/image_name so that the client knows where to publish.

$ docker push your_username/example

Once that is done, you can view your image on Docker Hub.

Now that your image is online, anyone who has Docker installed can play with your app by typing just a single command: docker run -p 8080:8080 your_username/example.

If you've spent countless hours setting up local dev environments / sharing application configuration in the past, you very well know how awesome this sounds.

We're going to use GCP Cloud Run to get our application up and running in a few clicks. We'll also see how easy it is to make our application scalable and manageable with Cloud Run!

Open CloudShell

You will do most of the work from the Google Cloud Shell, a command-line environment running in the cloud. This virtual machine is loaded with all the development tools you'll need (docker, gcloud, kubectl and others), offers a persistent 5GB home directory, and runs in Google Cloud, greatly enhancing network performance and authentication. Open the Google Cloud Shell by clicking on the icon at the top right of the screen.

You should see the shell prompt at the bottom of the window.

Once connected to Cloud Shell, you should see that you are already authenticated and that the project is already set to your project ID.

GCP Cloud Run

Cloud Run allows you to write code your way by deploying any container that listens for requests or events. Build applications in your favorite language, with your favorite dependencies and tools, and deploy them in seconds. Cloud Run abstracts away all infrastructure management by automatically scaling up and down from zero almost instantaneously — depending on traffic. Cloud Run only charges you for the exact resources you use. It's fully integrated with Cloud Code, Cloud Build, Cloud Monitoring, and Cloud Logging for an enhanced end-to-end developer experience.

Cloud Run is built upon Knative, an open standard, enabling the portability of your applications.

There are two approaches for deploying to Cloud Run: the fully managed environment, where Google runs your containers for you, and Cloud Run for Anthos, which runs them on your own GKE cluster. We'll use the fully managed environment.

For the managed environment, we first need to move our image to the private Google Container Registry. In Cloud Shell run:

$ sudo docker pull your_username/example

Now, re-tag your image:

$ sudo docker image tag your_username/example gcr.io/${GOOGLE_CLOUD_PROJECT}/example

Finally, configure access and push your image to GCR:

$ sudo gcloud auth configure-docker
$ sudo docker push gcr.io/${GOOGLE_CLOUD_PROJECT}/example

Run the following command to deploy your app:

$ gcloud run deploy mycoolapp \
  --image gcr.io/${GOOGLE_CLOUD_PROJECT}/example \
  --platform managed \
  --region europe-west2 \
  --allow-unauthenticated

Wait a few moments until the deployment is complete. When it's done, the command line displays the service URL.

Service [mycoolapp] revision [mycoolapp-00001-yib] has been deployed and is serving 100 percent of traffic.
Service URL: https://mycoolapp-HASH-nw.a.run.app

You can now visit your deployed container by opening the service URL in a web browser.

Cloud Run automatically and horizontally scales up your container image to handle the received requests, then scales down when demand decreases. You only pay for the CPU, memory, and networking consumed during request handling.

To check your service status, in Google Cloud console navigate to Compute → Cloud Run and you should see your mycoolapp service listed.

By default, a Cloud Run app will have a concurrency value of 80, meaning that each container instance will serve up to 80 requests at a time. That's a big departure from the functions as a service (FaaS) model, in which one instance handles one request at a time.

Redeploy the same container image with a concurrency value of 1 (only for testing purposes).

$ gcloud run deploy mycoolapp \
  --image gcr.io/${GOOGLE_CLOUD_PROJECT}/example \
  --platform managed \
  --region europe-west2 \
  --allow-unauthenticated \
  --concurrency 1

In Google Cloud console navigate to Compute → Cloud Run and click your mycoolapp service listed. Then, go to the Revisions tab.

You should see two revisions created. Click mycoolapp-00002 and review the details. You should see the concurrency value reduced to 1.

By default, the latest revision will be assigned 100% of the inbound traffic for a service. It's possible to use routes to allocate different percentages of traffic to different revisions in a service.

For example, we can split traffic 50/50 for our two revisions. To do that, run:

$ gcloud run services update-traffic mycoolapp \
  --to-revisions mycoolapp-00001-REVISION_SUFFIX=50,mycoolapp-00002-REVISION_SUFFIX=50 \
  --platform managed \
  --region europe-west2

If you refresh the Cloud console you will see the traffic split.
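You can also verify the allocation from Cloud Shell; the Traffic section of the output should list 50% against each revision:

$ gcloud run services describe mycoolapp \
  --platform managed \
  --region europe-west2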

Overview

Cloud Endpoints is an API management system that helps you secure, monitor, analyze, and set quotas on your APIs. After you deploy your API to Endpoints, you can use the Endpoints Developers Portal to create a developer portal, a website that users of your API can access to view documentation and interact with your API.

Endpoints uses the Extensible Service Proxy V2 (ESPv2), an Envoy-based high-performance, scalable proxy, as an API gateway. To provide API management for Cloud Run, you need to deploy a prebuilt ESPv2 container to Cloud Run.

With this setup, ESPv2 intercepts all requests to your services and performs any necessary checks before invoking the service.

Reserve a hostname

First, you need to reserve a Cloud Run hostname for the ESPv2 service so that you can refer to it in the OpenAPI document. To reserve a hostname, all we need to do is deploy a sample container to Cloud Run, which we'll replace with the actual ESPv2 container once the configuration is done.

$ gcloud run deploy endpoints \
  --image gcr.io/cloudrun/hello \
  --platform managed \
  --region europe-west2 \
  --allow-unauthenticated

You should now have two services in Cloud Run.

From the previous command's output, or by clicking the endpoints service in the web console, make note of the Cloud Run hostname of your newly deployed service.

Create an OpenAPI Document

To deploy your API you have to create an OpenAPI document based on OpenAPI Specification v2.0 that describes your backend service and any authentication requirements. You also need to add a Google-specific field that contains the URL for each service so that ESPv2 has the information it needs to invoke a service.

Create a file named coolapi.yaml and open it in the editor:

$ touch coolapi.yaml
$ cloudshell edit coolapi.yaml

Our backend service has only one endpoint that returns a plain text string, so our configuration is going to be very simple:

swagger: '2.0'
info:
  title: Super Cool API
  description: This is a very complex application
  version: 1.0.0
host: endpoints-HASH-nw.a.run.app
schemes:
  - https
produces:
  - text/plain
x-google-backend:
  address: https://mycoolapp-HASH-nw.a.run.app
  protocol: h2
paths:
  /:
    get:
      summary: Greet user with IP
      operationId: index
      responses:
        '200':
          description: A successful response
          schema:
            type: string

Deploy the Endpoints Configuration

First, upload the configuration and create a managed service.

$ gcloud endpoints services deploy coolapi.yaml

This creates a new Endpoints service with the title and name that you specified in the title and host fields of the YAML file, respectively. The service is configured according to your OpenAPI document.

When the deployment completes, a message similar to the following is displayed:

Service Configuration [2020-10-13r0] uploaded for service [endpoints-HASH-nw.a.run.app]

Go to Endpoints → Services in the Cloud console and click on your service name to check its current configuration, metrics and deployment history. If you forget your Endpoints configuration ID, you can find it in the Deployment history tab; clicking the ID shows its configuration details.
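If you prefer the command line, you can also list the configuration IDs for your service with gcloud (substituting your own Cloud Run hostname as the service name):

$ gcloud endpoints configs list \
    --service=endpoints-HASH-nw.a.run.app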

Build and deploy an ESPv2 Proxy image

The service config must be built into a new Docker image because Cloud Run is a scale-to-zero service. That means that when there is no traffic, all instances are deleted, and when traffic increases, new instances are created on demand. If the Docker image did not have the service config built in, Cloud Run would have to make two API calls to Google Service Management at every cold start. These calls count against your Google Service Management API quota.

By building the service config into the ESPv2 Docker image, you remove those two API calls and avoid impacting the quota for the Google Service Management service. This also speeds up new service instance creation and reduces latency for requests waiting on a new instance.

There is already a convenience script created to build the image. First, download it:

$ wget https://raw.githubusercontent.com/GoogleCloudPlatform/esp-v2/master/docker/serverless/gcloud_build_image
$ chmod +x gcloud_build_image

Then run it, making sure to substitute CONFIG_ID and CLOUD_RUN_HOST with yours (don't forget that the hostname should not include the protocol, i.e., endpoints-HASH-nw.a.run.app):

$ ./gcloud_build_image -s CLOUD_RUN_HOST \
    -c CONFIG_ID -p ${GOOGLE_CLOUD_PROJECT}

The script uses the gcloud command to download the service config, build the service config into a new ESPv2 image, and upload the new image to your project container registry.

The script automatically uses the latest release of ESPv2, denoted by the ESP_VERSION in the output image name.

Make note of the created image name in the command output; it should look like this:

gcr.io/PROJECT_ID/endpoints-runtime-serverless:ESP_VERSION-CLOUD_RUN_HOST-CONFIG_ID

Deploy the ESPv2 Cloud Run service with the new image you built, using the same Cloud Run service name you used when you originally reserved the hostname.

$ gcloud run deploy endpoints \
  --image gcr.io/PROJECT_ID/endpoints-runtime-serverless:ESP_VERSION-CLOUD_RUN_HOST-CONFIG_ID \
  --platform managed \
  --region europe-west2 \
  --allow-unauthenticated

...
Service URL: https://endpoints-HASH-nw.a.run.app

Open the URL in the browser and refresh the page about 10 times to generate some traffic. Then go to the Endpoints console and check your API details.

Wait ~5 minutes for the metrics to show up (you might need to refresh the page).

Next, go to the Developer Portal and click the Create Portal button.

When the portal is created, you'll be given a link to it. Click the link to explore your portal's UI (use your Google account to log in).

While Cloud Run doesn't charge when the service isn't in use, you might still be charged for storing the container image in GCR.

Delete the container images (make sure you replace the tag of the ESPv2 image):

$ gcloud container images delete gcr.io/${GOOGLE_CLOUD_PROJECT}/example

$ gcloud container images delete gcr.io/${GOOGLE_CLOUD_PROJECT}/endpoints-runtime-serverless:ESP_VERSION-CLOUD_RUN_HOST-CONFIG_ID

Delete the Endpoints Service:

$ gcloud endpoints services delete `gcloud endpoints services list --format="value(serviceName)"`

Go to the Developer Portal and click the delete button.

To delete the Cloud Run services, use this command:

$ gcloud run services delete mycoolapp \
  --platform managed \
  --region europe-west2
$ gcloud run services delete endpoints \
  --platform managed \
  --region europe-west2