Last Updated: 2023-03-13
Istio (https://istio.io/) is an open service mesh that provides a uniform way to connect, manage, and secure microservices. It supports managing traffic flows between services, enforcing access policies, and aggregating telemetry data, all without requiring changes to the microservice code.
Istio gives you traffic management, policy enforcement, and telemetry collection for your services out of the box.
In this codelab, you're going to deploy a guestbook microservice app using Kubernetes and manage it with Istio.
Before you can use Kubernetes to deploy your application, you need a cluster of machines to deploy it to. The cluster abstracts away the details of the underlying machines.
Machines can later be added, removed, or rebooted, and containers are automatically distributed or re-distributed across whatever machines are available in the cluster. Machines within a cluster can be set to autoscale up or down to meet demand, and can be located in different zones for high availability.
You will do some of the work from the AWS CloudShell, a command line environment running in the cloud. This virtual machine is loaded with the development tools you'll need (aws cli, python), offers a persistent 1 GB home directory, and runs in AWS, which greatly improves network performance and simplifies authentication. Open the CloudShell by clicking on the icon at the top right of the screen:
You should see the shell prompt open in the new tab:
Before creating a cluster, you must install and configure the following tools:
kubectl – A command line tool for working with Kubernetes clusters.
kops – A command line tool for managing Kubernetes clusters in the AWS Cloud.
To install kops, run the following:
$ curl -Lo kops https://github.com/kubernetes/kops/releases/download/$(curl -s https://api.github.com/repos/kubernetes/kops/releases/latest | grep tag_name | cut -d '"' -f 4)/kops-linux-amd64
$ chmod +x ./kops
$ sudo mv ./kops /usr/local/bin/
Confirm the kops command works:
$ kops version
To install kubectl, run the following:
$ curl -Lo kubectl https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
$ chmod +x ./kubectl
$ sudo mv ./kubectl /usr/local/bin/kubectl
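As with kops, it's worth confirming the binary works before continuing:
$ kubectl version --client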
kops needs a state store to hold the configuration for your clusters. The simplest configuration for AWS is to store it in an S3 bucket in the same account, so that's how we'll start.
Create an empty bucket, replacing YOUR_NAME with your name 😁:
$ export REGION=us-east-1
$ export STATE_BUCKET=YOUR_NAME-state-store
# Create the bucket using awscli
$ aws s3api create-bucket \
--bucket ${STATE_BUCKET} \
--region ${REGION}
If the name is taken and you receive an error, change the name and try again.
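If you want to double-check that the bucket exists, a head-bucket call (standard AWS CLI) returns silently on success and prints an error otherwise:
$ aws s3api head-bucket --bucket ${STATE_BUCKET}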
Next, rather than typing the command arguments every time, it's much easier to export the KOPS_STATE_STORE and NAME variables, pointing to the previously created bucket and to a cluster name that ends with .k8s.local, for example:
$ export NAME="mycoolcluster.k8s.local" #SHOULD END WITH .k8s.local
$ export KOPS_STATE_STORE="s3://${STATE_BUCKET}"
After that, generate a dummy SSH key for kops to use:
$ ssh-keygen -b 2048 -t rsa -f ${HOME}/.ssh/id_rsa -q -N ""
Now you are ready to create the cluster. We're going to create a production-ready, highly available cluster with 3 masters and 4 nodes:
$ kops create cluster \
--name ${NAME} \
--state ${KOPS_STATE_STORE} \
--node-count 4 \
--master-count=3 \
--zones us-east-1a \
--master-zones us-east-1a,us-east-1b,us-east-1c \
--node-size t2.large \
--master-size t2.medium \
--master-volume-size=20 \
--node-volume-size=20 \
--networking flannel
When the cluster configuration is ready, edit it:
$ kops edit cluster ${NAME}
In the editor, find the iam section at the end of the spec that looks like this:
...
spec:
...
iam:
allowContainerRegistry: true
legacy: false
...
Edit it so it looks like the next snippet:
...
spec:
...
iam:
allowContainerRegistry: true
legacy: false
permissionsBoundary: arn:aws:iam::ACCOUNT_ID_HERE:policy/CustomPowerUserBound
...
After saving the document, run the following command:
$ kops update cluster ${NAME} --yes --admin
Wait 5-10 minutes until the cluster is ready. You can check its state by periodically running validate:
$ kops validate cluster --wait 10m
....
Your cluster mycoolcluster.k8s.local is ready
Start by cloning the repository for our Guestbook application.
$ cd ~/
$ git clone https://gitlab.com/DmyMi/aws-k8s-lab
Amazon EC2 Container Registry (ECR) is a fully-managed Docker container registry that makes it easy for developers to store, manage, and deploy Docker container images. Amazon ECR eliminates the need to operate your own container repositories or worry about scaling the underlying infrastructure. Amazon ECR hosts your images in a highly available and scalable architecture, allowing you to reliably deploy containers for your applications. Integration with AWS Identity and Access Management (IAM) provides resource-level control of each repository. With Amazon ECR, there are no upfront fees or commitments. You pay only for the amount of data you store in your repositories and data transferred to the Internet.
Export the required variables:
$ export ACCOUNT_ID=`aws sts get-caller-identity --query "Account" --output text`
$ export REGION="us-east-1"
$ export REPO_PREFIX="your-name"
After that, you can run the following command to create 4 repositories for the demo project. You can copy & paste it into the console.
for name in "ui" "message" "guestbook" "kpack"
do
/usr/local/bin/aws ecr create-repository \
--repository-name "${REPO_PREFIX}"-"${name}" \
--image-scanning-configuration scanOnPush=true \
--region "${REGION}"
done
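To verify the repositories were created, you can list them (the --query expression below is just a convenience for a compact view):
$ aws ecr describe-repositories \
--region "${REGION}" \
--query "repositories[].repositoryUri" \
--output table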
To build images inside the cluster and push them to ECR, we need an ECR access token.
Use the AWS CLI to get the access token for Amazon ECR in the us-east-1 region.
$ TOKEN=`aws ecr get-login-password --region ${REGION}`
With the token available, we need to save it in the cluster for later access by our helper pods. As this token is used to access a Docker registry, we can use the special Kubernetes Secret type kubernetes.io/dockerconfigjson, which lets us store Docker credentials in a standard manner.
$ kubectl create secret docker-registry ecr-secret \
--docker-server=https://${ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com \
--docker-username=AWS \
--docker-password="${TOKEN}"
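You can sanity-check that the secret was created with the expected type:
$ kubectl get secret ecr-secret -o jsonpath='{.type}{"\n"}'
kubernetes.io/dockerconfigjson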
As Cloud Shell doesn't have enough compute power to build Docker images, we will use our cluster to do it. We will utilize kpack, which extends Kubernetes with unprivileged Kubernetes primitives to build OCI images as a platform implementation of Cloud Native Buildpacks.
kpack provides a declarative builder resource that configures a Cloud Native Buildpacks build with the desired buildpack order and operating system stack.
kpack also provides a declarative image resource that builds an OCI image and schedules rebuilds on source changes and on builder buildpack and stack updates.
To install kpack, run the following command:
$ kubectl apply -f https://github.com/pivotal/kpack/releases/download/v0.10.1/release-0.10.1.yaml
To inspect several kpack primitives in more detail, we can use kpack-cli.
$ KP_VERSION=$(curl -s https://api.github.com/repos/vmware-tanzu/kpack-cli/releases/latest | grep tag_name | cut -d '"' -f 4)
$ curl -Lo kp https://github.com/vmware-tanzu/kpack-cli/releases/download/${KP_VERSION}/kp-linux-amd64-${KP_VERSION:1}
$ chmod +x ./kp
$ sudo mv ./kp /usr/local/bin/
To be able to use kpack we need several Kubernetes resources (both basic and custom):
A ServiceAccount that references the ECR secret, for kpack to build and push OCI images.
A ClusterStore, ClusterStack, and Builder configuration for a builder image that includes the stack and buildpacks needed to build an OCI image from your application source code. The Builder configuration will write to the registry using the credentials from the secret we've created and will reference the stack and store. The builder order determines the order in which buildpacks are used in the builder.
Use the following command to create these resources:
$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
name: kp-sa
secrets:
- name: ecr-secret
imagePullSecrets:
- name: ecr-secret
---
apiVersion: kpack.io/v1alpha2
kind: ClusterStore
metadata:
name: default
spec:
sources:
- image: gcr.io/paketo-buildpacks/java
---
apiVersion: kpack.io/v1alpha2
kind: ClusterStack
metadata:
name: base
spec:
id: "io.buildpacks.stacks.bionic"
buildImage:
image: "paketobuildpacks/build:base-cnb"
runImage:
image: "paketobuildpacks/run:base-cnb"
---
apiVersion: kpack.io/v1alpha2
kind: Builder
metadata:
name: builder
spec:
serviceAccountName: kp-sa
tag: ${ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com/${REPO_PREFIX}-kpack:builder
stack:
name: base
kind: ClusterStack
store:
name: default
kind: ClusterStore
order:
- group:
- id: paketo-buildpacks/java
EOF
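Before moving on, you can check that the store, stack, and builder resources are ready; the kp command below is optional and assumes kp is on your PATH:
$ kubectl get clusterstores,clusterstacks,builders
$ kp builder list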
Create Image resources to build our images. An image resource is the specification for an OCI image that kpack should build and manage. We will have a separate image resource for each of our applications.
$ cat <<EOF | kubectl apply -f -
apiVersion: kpack.io/v1alpha2
kind: Image
metadata:
name: ui-image
spec:
tag: ${ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com/${REPO_PREFIX}-ui:v1
serviceAccountName: kp-sa
builder:
name: builder
kind: Builder
source:
git:
url: https://gitlab.com/DmyMi/aws-k8s-lab.git
revision: master
build:
env:
- name: "BP_GRADLE_BUILT_MODULE"
value: "ui"
---
apiVersion: kpack.io/v1alpha2
kind: Image
metadata:
name: message-image
spec:
tag: ${ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com/${REPO_PREFIX}-message:v1
serviceAccountName: kp-sa
builder:
name: builder
kind: Builder
source:
git:
url: https://gitlab.com/DmyMi/aws-k8s-lab.git
revision: master
build:
env:
- name: "BP_GRADLE_BUILT_MODULE"
value: "message"
---
apiVersion: kpack.io/v1alpha2
kind: Image
metadata:
name: guestbook-image
spec:
tag: ${ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com/${REPO_PREFIX}-guestbook:v1
serviceAccountName: kp-sa
builder:
name: builder
kind: Builder
source:
git:
url: https://gitlab.com/DmyMi/aws-k8s-lab.git
revision: master
build:
env:
- name: "BP_GRADLE_BUILT_MODULE"
value: "guestbook"
EOF
You can check if images are built by running the following command:
$ kubectl get images
You can watch the build logs by using kp build logs IMAGE_NAME, e.g.:
$ kp build logs ui-image
If everything went fine, you will see a list of images with their URLs and READY=True.
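Optionally, you can also confirm the built images landed in ECR, for example for the UI repository:
$ aws ecr list-images \
--repository-name "${REPO_PREFIX}-ui" \
--region "${REGION}"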
To download the Istio CLI and release, run the following:
$ cd ~/
$ export ISTIO_VERSION=1.17.1
$ curl -L https://istio.io/downloadIstio | sh -
$ sudo mv istio-${ISTIO_VERSION}/bin/istioctl /usr/bin/istioctl
We can install and manage Istio using istioctl. We will use the demo profile while enabling some additional features to showcase Istio.
First, create a configuration file:
$ cat <<EOF > ~/istio.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
profile: demo
meshConfig:
defaultConfig:
tracing:
zipkin:
address: zipkin.istio-system:9411
EOF
Then run the following command to install Istio:
$ istioctl install -f istio.yaml
You can confirm the Istio control plane services have been deployed with the following command. Wait for ~1 minute before running:
$ kubectl get all -n istio-system
A lot has been installed!
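You can also cross-check the installed control plane version against your istioctl client:
$ istioctl version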
Finally, add Zipkin for tracing, Prometheus & Grafana for monitoring, and Kiali for mesh visualization:
$ kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.17/samples/addons/extras/zipkin.yaml
$ kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.17/samples/addons/prometheus.yaml
$ kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.17/samples/addons/grafana.yaml
$ kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.17/samples/addons/kiali.yaml
Move into the Kubernetes examples directory.
$ cd ~/aws-k8s-lab/kubernetes/
You will be using the yaml files in this directory. Each file describes a resource that needs to be deployed into Kubernetes.
To set the repository names, execute the following script:
for name in "guestbook-deployment" "message-deployment" "ui-deployment"
do
/usr/bin/sed "s/AWS_ACCOUNT_ID/${ACCOUNT_ID}/g" "${name}".yaml.tpl | \
/usr/bin/sed "s/AWS_REPO_PREFIX/${REPO_PREFIX}/g" > "${name}".yaml
done
Examine the UI Deployment:
$ cat ui-deployment.yaml
Note that there is nothing but the application container specified.
In order for Istio to intercept requests, the Istio proxy must be installed as a sidecar alongside the application container. There are 2 ways to do this: manual injection with istioctl kube-inject, or automatic sidecar injection via a mutating webhook.
Use istioctl to see what manual sidecar injection would add to the deployment.
$ istioctl kube-inject -f ui-deployment.yaml | less
...
spec:
containers:
- image: gcr.io/project/ui:v1
...
- args:
...
image: docker.io/istio/proxyv2:...
...
initContainers:
- args:
...
image: docker.io/istio/proxyv2:...
...
Notice that the output has more than just the application container. Specifically, it has an additional istio-proxy container and an init container.
The init container is responsible for setting up the IP table rules that intercept incoming and outgoing connections and direct them to the Istio proxy. The istio-proxy container is the Envoy proxy itself.
Instead of manual injection, you can also use Automatic Sidecar Injection. This works by using Kubernetes's Mutating Admission Webhook mechanism to intercept new Pod creations and automatically enhance the Pod definition with Istio proxies.
$ kubectl get MutatingWebhookConfiguration istio-sidecar-injector -oyaml
This configuration points to the istiod service in the istio-system namespace, which is backed by pods managed by the istiod deployment.
$ kubectl -n istio-system get svc istiod
$ kubectl -n istio-system get deployment istiod
You can turn on Automatic Sidecar Injection at the Kubernetes namespace level by setting the label istio-injection to enabled.
$ kubectl label namespace default istio-injection=enabled
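Verify that the label has been applied:
$ kubectl get namespace default -L istio-injection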
Deploy the entire application in one shot.
$ kubectl apply -f mysql-stateful.yaml \
-f mysql-service.yaml \
-f redis-deployment.yaml \
-f redis-service.yaml \
-f guestbook-deployment.yaml \
-f guestbook-service.yaml \
-f message-deployment.yaml \
-f message-service.yaml \
-f ui-deployment.yaml \
-f ui-service.yaml
Check that all components have the Running status, and that the Ready column shows 2/2.
$ watch kubectl get pods
When you are ready, Control+C out of the watch loop.
You can see the sidecar proxy injected into the pod.
$ kubectl get pods -l app=guestbook-service
NAME READY STATUS RESTARTS AGE
guestbook-service-7ff64d7f45-tbrk8 2/2 Running 0 47s
guestbook-service-7ff64d7f45-z758h 2/2 Running 0 47s
# Pick one of the pod from your list, and describe it.
$ kubectl describe pod guestbook-service-...
You should see the initialization containers, as well as a container named istio-proxy, automatically injected into the pod.
All of the services now have an internal load balancer. In this lab, you'll expose the UI service via the Istio Ingress. Istio Ingress is not a Kubernetes Ingress controller. I.e., you won't configure Istio Ingress with Kubernetes Ingress definitions.
Find the Istio Ingress IP address.
$ kubectl get svc istio-ingressgateway -n istio-system
$ export INGRESS_IP=$(kubectl -n istio-system get svc istio-ingressgateway \
-o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
$ echo $INGRESS_IP
Connect to the Istio Ingress IP.
$ curl $INGRESS_IP
curl: (7) Failed to connect to ... port 80: Connection refused
The connection is refused because nothing is binding to this ingress.
Bind a Gateway to the Istio Ingress.
$ kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
name: guestbook-ui-gateway
spec:
selector:
istio: ingressgateway # use Istio default gateway implementation
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- "*"
EOF
An Istio Gateway describes a load balancer operating at the edge of the mesh receiving incoming or outgoing HTTP/TCP connections. The specification describes a set of ports that should be exposed, the type of protocol to use, virtual host name to listen to, etc.
Curl the Istio Ingress IP again, and observe that it now returns a 404 error.
$ curl -v http://$INGRESS_IP
It's returning a 404 error because nothing is binding to the Gateway yet.
Create a Virtual Service and bind it to the Gateway.
$ kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
name: guestbook-ui
spec:
hosts:
- "*"
gateways:
- guestbook-ui-gateway
http:
- match:
- uri:
prefix: /
route:
- destination:
host: ui
EOF
A Virtual Service defines the rules that control how requests for a service are routed within an Istio service mesh. For example, a virtual service can route requests to different versions of a service or to a completely different service than was requested. Requests can be routed based on the request source and destination, HTTP paths and header fields, and weights associated with individual service versions.
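Optionally, you can ask Istio to sanity-check the mesh configuration in the current namespace; istioctl analyze reports misconfigurations such as a Virtual Service routing to a non-existent host:
$ istioctl analyze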
Find the Ingress IP again, and open it up in the browser.
$ echo http://$INGRESS_IP
And check the UI:
Let's test how our application deals with upstream errors. Inject a fault to make the Guestbook Service reply with an HTTP 503 error 100% of the time.
$ kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
name: guestbook-service
spec:
hosts:
- guestbook-service
http:
- route:
- destination:
host: guestbook-service
fault:
abort:
percentage:
value: 100.0
httpStatus: 503
EOF
Go back to the browser and refresh the UI page. You should see that our application handles errors like a boss 😁. But posting a new message will fail with an error.
Simply delete the rule to restore traffic.
$ kubectl delete virtualservice guestbook-service
Let's try adding a 5-second delay to the Message Service:
$ kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
name: message-service
spec:
hosts:
- message-service
http:
- route:
- destination:
host: message-service
fault:
delay:
percentage:
value: 100.0
fixedDelay: 5s
EOF
Try posting a new greeting on the UI page. Observe that now it takes about 5 seconds to complete due to injected delay. Delete the configuration:
$ kubectl delete virtualservice message-service
But what if we know the service might be overloaded or otherwise faulty, and we don't want to implement custom retry logic in the application? Use the Istio proxy retry configuration, which automatically keeps the client connection open while the proxy retries the request against the real service if it fails.
$ kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
name: message-service
spec:
hosts:
- message-service
http:
- route:
- destination:
host: message-service
retries:
attempts: 3
perTryTimeout: 2s
EOF
Also, we can apply a traffic policy to control the volume of connections to our possibly overloaded service and provide a circuit breaker that will control eviction of unhealthy pods from the Istio load balancing pool.
$ kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
name: message-service
spec:
host: message-service
trafficPolicy:
connectionPool:
tcp:
maxConnections: 100
http:
maxRequestsPerConnection: 10
http1MaxPendingRequests: 1024
outlierDetection:
consecutive5xxErrors: 7
interval: 5m
baseEjectionTime: 15m
EOF
Here we have a circuit breaker implementation that tracks the status of each individual host in the upstream service. It is applicable to both HTTP and TCP services. For HTTP services, hosts that continually return 5xx errors for API calls are ejected from the pool for a pre-defined period of time. For TCP services, connection timeouts or connection failures to a given host count as errors when measuring the consecutive errors metric.
Our rule sets a connection pool size of 100 HTTP1 connections with no more than 10 requests per connection to the Message service. In addition, it sets a limit of 1024 pending HTTP1 requests and configures upstream hosts to be scanned every 5 minutes, so that any host that fails 7 consecutive times with a 5xx error code will be ejected for 15 minutes.
Let's test it. Find one of the pods and turn on misbehavior:
$ export MESSAGE_POD=$(kubectl get pods -l app="message-service" -o jsonpath='{.items[0].metadata.name}')
$ export MESSAGE_POD_IP=$(kubectl get pod ${MESSAGE_POD} -o jsonpath='{.status.podIP}')
$ kubectl run curl --image=curlimages/curl --restart=Never --command sleep infinity
$ kubectl exec curl -c curl -- curl http://${MESSAGE_POD_IP}:8080/misbehave
Next request to /hello/{name} will return a 503
Now let's note the Pod's name and follow the logs to see if the pod is actually misbehaving:
$ echo ${MESSAGE_POD}
$ kubectl logs ${MESSAGE_POD} -c message-service --follow
Go to the UI and refresh it a couple of times or post a few messages.
You should see messages in the console looking like this:
Version 1.0 processed message for Test
misbehaving!
If you compare the hostname of the pod that is misbehaving with the one sending a response in the UI, you should see they are different. Istio tried to query the misbehaving pod, and when it failed, it retried against a healthy pod and evicted the failing one from the load balancing pool!
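If you're curious, you can peek at Envoy's outlier-detection statistics from a UI pod's sidecar. This is a rough check (pilot-agent request GET stats dumps raw Envoy stats, and the counters may live on either UI replica):
$ export UI_POD=$(kubectl get pods -l app=ui -o jsonpath='{.items[0].metadata.name}')
$ kubectl exec ${UI_POD} -c istio-proxy -- pilot-agent request GET stats | grep message-service | grep outlier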
Let's return our pod to normal configuration:
$ kubectl exec curl -c curl -- curl http://${MESSAGE_POD_IP}:8080/behave
$ kubectl delete pod curl
Before configuring traffic splitting, build and push a new version to ECR using a modified revision in the image config.
$ cat <<EOF | kubectl apply -f -
apiVersion: kpack.io/v1alpha2
kind: Image
metadata:
name: ui-image-v2
spec:
tag: ${ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com/${REPO_PREFIX}-ui:v2
serviceAccountName: kp-sa
builder:
name: builder
kind: Builder
source:
git:
url: https://gitlab.com/DmyMi/aws-k8s-lab.git
revision: v2
build:
env:
- name: "BP_GRADLE_BUILT_MODULE"
value: "ui"
EOF
Wait for 2-5 minutes before continuing. You can check progress by running:
$ kp build logs ui-image-v2
Next, create UI v2 deployment:
$ cd ~/aws-k8s-lab/kubernetes
$ cat <<EOF > ui-deployment-v2.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: ui-v2
labels:
app: ui
version: "2.0"
spec:
replicas: 2
selector:
matchLabels:
app: ui
version: "2.0"
template:
metadata:
labels:
app: ui
version: "2.0"
spec:
serviceAccountName: ui-sa
containers:
- name: ui
image: ${ACCOUNT_ID}.dkr.ecr.us-east-1.amazonaws.com/${REPO_PREFIX}-ui:v2
env:
- name: MESSAGE_HOST
value: http://message-service
- name: GUESTBOOK_HOST
value: http://guestbook-service
- name: REDIS_HOST
value: redis
readinessProbe:
initialDelaySeconds: 40
httpGet:
path: /actuator/health
port: 9000
ports:
- name: http
containerPort: 8080
- name: metrics
containerPort: 9000
EOF
Install UI v2.
$ kubectl apply -f ui-deployment-v2.yaml
Wait for v2 pods to be up and running.
$ watch kubectl get pods -lapp=ui,version="2.0"
Visit the UI from the browser, and refresh a couple of times.
About 50% of the time you'll see v1 with a white background, and the other 50% of the time you'll see v2 with a golden background.
Before you can control traffic splitting between these 2 versions, you first need to define destination subsets. Each subset can have a unique pod selector based on labels.
$ kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
name: ui
spec:
host: ui
subsets:
- name: v1
labels:
version: "1.0"
- name: v2
labels:
version: "2.0"
EOF
Once you have the subsets defined, you can configure weight-based traffic split for the different subsets. First, shift all traffic to v1.
$ kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
name: guestbook-ui
spec:
hosts:
- "*"
gateways:
- guestbook-ui-gateway
http:
- match:
- uri:
prefix: /
route:
- destination:
host: ui
subset: v1
weight: 100
EOF
Go back to the browser and refresh several times. Confirm that all traffic is now going to v1.
You can update the weight as you need. Shift 80% of the traffic to v2.
$ kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
name: guestbook-ui
spec:
hosts:
- "*"
gateways:
- guestbook-ui-gateway
http:
- match:
- uri:
prefix: /
route:
- destination:
host: ui
subset: v1
weight: 20
- destination:
host: ui
subset: v2
weight: 80
EOF
Refresh the UI about 10 times :) and notice that most of the requests now land on v2.
You can also use Virtual Service to direct traffic based on the request data from the header, URI, or HTTP method. Shift all traffic from Chrome browser to v2, and other browsers to v1.
$ kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
name: guestbook-ui
spec:
hosts:
- "*"
gateways:
- guestbook-ui-gateway
http:
- match:
- uri:
prefix: /
headers:
user-agent:
regex: ".*Chrome.*"
route:
- destination:
host: ui
subset: v2
- match:
- uri:
prefix: /
route:
- destination:
host: ui
subset: v1
EOF
Try loading the UI page from a Chrome browser vs another one (Edge, Firefox or Safari).
Clean up the Destination Rule & Deployment v2, and reset the Virtual Service:
$ kubectl delete deployment ui-v2
$ kubectl delete destinationrule ui
$ kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
name: guestbook-ui
spec:
hosts:
- "*"
gateways:
- guestbook-ui-gateway
http:
- match:
- uri:
prefix: /
route:
- destination:
host: ui
EOF
Traffic mirroring, also called shadowing, is a powerful concept that allows feature teams to bring changes to production with as little risk as possible. Mirroring sends a copy of live traffic to a mirrored service. The mirrored traffic happens out of band of the critical request path for the primary service.
First, deploy a new version of the Message Service. It's going to be the same container, but with different configuration:
$ kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
name: message-service-v2
labels:
app: message-service
version: "2.0"
spec:
replicas: 1
selector:
matchLabels:
app: message-service
version: "2.0"
template:
metadata:
labels:
app: message-service
version: "2.0"
spec:
serviceAccountName: message-sa
containers:
- name: message-service
image: ${ACCOUNT_ID}.dkr.ecr.us-east-1.amazonaws.com/${REPO_PREFIX}-message:v1
env:
- name: DEMO_VERSION
value: "2.0"
resources:
requests:
cpu: 200m
memory: 128Mi
readinessProbe:
initialDelaySeconds: 40
httpGet:
path: /actuator/health
port: 9000
livenessProbe:
initialDelaySeconds: 40
httpGet:
port: 9000
path: /actuator/health
ports:
- name: http
containerPort: 8080
- name: metrics
containerPort: 9000
EOF
Next, create a destination rule for our two versions:
$ kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
name: message-service
spec:
host: message-service
subsets:
- name: v1
labels:
version: "1.0"
- name: v2
labels:
version: "2.0"
EOF
Finally, create the Virtual Service:
$ kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
name: message-service
spec:
hosts:
- message-service
http:
- match:
- uri:
prefix: /
route:
- destination:
host: message-service
subset: v1
weight: 100
mirror:
host: message-service
subset: v2
mirrorPercentage:
value: 100
EOF
This route rule sends 100% of the traffic to v1. The last part specifies that you want to mirror traffic to version v2. When traffic gets mirrored, the requests are sent to the mirrored service with their Host/Authority headers appended with -shadow. For example, message-service becomes message-service-shadow.
Also, it is important to note that these requests are mirrored as "fire and forget", which means that the responses are discarded.
You can use the mirrorPercentage field to mirror a fraction of the traffic, instead of mirroring all requests. If this field is absent, all traffic will be mirrored for compatibility with older versions.
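For example, a sketch of mirroring only half of the requests would change the tail of the Virtual Service like this (we keep 100% in this lab):
      mirror:
        host: message-service
        subset: v2
      mirrorPercentage:
        value: 50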
Go to the UI and post a few messages. Then check the logs of the Message v2 to see if the requests are being mirrored:
$ kubectl logs `kubectl get pods -l app="message-service",version="2.0" -o jsonpath='{.items[0].metadata.name}'` -c message-service
You should see similar log messages showing that we have our requests mirrored:
Version 2.0 processed message for Test
But in the UI we get a response from v1.
Egress traffic is allowed by default because the Outbound Traffic Policy mode is set to ALLOW_ANY. You can change this mode to block all egress traffic by default and allow only specific traffic through.
Reconfigure Istio installation:
$ cd ~
$ cat <<EOF > ~/istio.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
profile: demo
meshConfig:
outboundTrafficPolicy:
mode: REGISTRY_ONLY
defaultConfig:
tracing:
zipkin:
address: zipkin.istio-system:9411
EOF
Then run the following command to re-install Istio:
$ istioctl install -f istio.yaml
Wait 20-30 seconds and deploy a Shell pod into Kubernetes. Open a new Cloud Shell Terminal tab and run:
$ kubectl run -i --tty ubuntu --image=ubuntu -- bash
In the first tab, validate that the Pod has a status of Running, and the Ready column shows 2/2.
$ watch kubectl get pods
NAME READY STATUS RESTARTS AGE
...
ubuntu 2/2 Running 0 66s
...
When you are ready, Control+C out of the watch loop.
In the second tab you should have access to the terminal in the pod. Try to update packages:
root@ubuntu:/# apt update
Notice that there are lots of connection errors! This is because all of the egress traffic is now intercepted by Istio, and we configured it to block all egress traffic. You need to allow egress to *.ubuntu.com destinations. There are 2 different ways to do this: switch the outbound traffic policy back to ALLOW_ANY, or register the allowed external hosts with a Service Entry.
This lab will do the latter.
But first, let's see whether it is actually Istio blocking the traffic, or the Ubuntu servers are just down.
In the first tab run:
$ kubectl logs ubuntu -c istio-proxy --follow
Now, in the second tab (Ubuntu Shell) try running the update again.
root@ubuntu:/# apt update
Check the logs in the first tab, you should see something like:
... "GET /ubuntu/dists/focal-security/InRelease HTTP/1.1" 502 - ... "security.ubuntu.com" "-" - ... - block_all ... "GET /ubuntu/dists/focal/InRelease HTTP/1.1" 502 - ... "archive.ubuntu.com" "-" - ... - block_all ... "GET /ubuntu/dists/focal-updates/InRelease HTTP/1.1" 502 - ... "archive.ubuntu.com" "-" - ... - block_all ... "GET /ubuntu/dists/focal-backports/InRelease HTTP/1.1" 502 - ... "archive.ubuntu.com" "-" - ... - block_all
The block_all indicates that Istio's Envoy is blocking the request.
When you are ready, Control+C out of the logs.
In the first tab, configure a Service Entry to enable egress traffic to *.ubuntu.com.
$ kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
name: ubuntu
spec:
hosts:
- "*.ubuntu.com"
location: MESH_EXTERNAL
ports:
- number: 80
name: http
protocol: HTTP
EOF
In the second tab session that is currently connected to the Ubuntu pod, try to update packages again and install Curl.
root@ubuntu:/# apt update
root@ubuntu:/# apt install curl -y
Notice that this time, you are able to connect and install curl successfully.
However, whenever you try to connect to an outside destination that hasn't been allowed by a Service Entry, you'll get an error.
root@ubuntu:/# curl https://google.com
Typically, Istio examines the destination hostname using the HTTP Host header. However, it's impossible to examine the header of an HTTPS request. For HTTPS requests, Istio uses SNI to inspect the host name instead. Run the following command in the first tab:
$ kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
name: google
spec:
hosts:
- "google.com"
location: MESH_EXTERNAL
ports:
- number: 443
name: https
protocol: HTTPS
EOF
Wait a few seconds and see that the traffic can now go through in the second tab.
root@ubuntu:/# curl https://google.com
Exit from the pod
root@ubuntu:/# exit
Delete it and close the second terminal tab.
$ kubectl delete pod ubuntu
We installed the Grafana & Prometheus addons earlier. To open the Grafana dashboard, run the following:
$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
name: exposed-monitoring-lb
namespace: istio-system
spec:
ports:
- port: 80
protocol: TCP
targetPort: 3000
selector:
app.kubernetes.io/instance: grafana
app.kubernetes.io/name: grafana
type: LoadBalancer
EOF
Wait for 2-5 minutes and run the following to get the load balancer URL:
$ export LB_URL=`kubectl get svc exposed-monitoring-lb -n istio-system -o=jsonpath={.status.loadBalancer.ingress[0].hostname}`
$ echo http://${LB_URL}
Open the URL in your browser.
In the Grafana Dashboard, click Search.
In the "Search dashboard by name" field enter: Istio.
Then select the Istio Service Dashboard.
In Service, find and click ui.default.svc.cluster.local.
This will pull up the metrics specific to the UI service.
Make a few requests to the UI from your browser, and you should also see the Istio Dashboard update with the metrics (you may need to refresh Grafana).
The metrics actually come from Prometheus. Behind the scenes, Prometheus scrapes metrics from the Envoy proxies, which expose all the monitoring metrics such as latency, response status distributions, etc.
To open the Prometheus dashboard, run the following:
$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
name: exposed-monitoring-lb
namespace: istio-system
spec:
ports:
- port: 80
protocol: TCP
targetPort: 9090
selector:
app: prometheus
component: server
release: prometheus
type: LoadBalancer
EOF
Wait for 30 seconds for the load balancer to reconfigure and run the following to get the load balancer URL:
$ export LB_URL=`kubectl get svc exposed-monitoring-lb -n istio-system -o=jsonpath={.status.loadBalancer.ingress[0].hostname}`
$ echo http://${LB_URL}
Open the URL in your browser. This will take you to the Prometheus console. Query a metric, e.g., envoy_cluster_internal_upstream_rq_completed{app="ui"}
Feel free to explore Prometheus before continuing.
To open the Kiali dashboard, run the following:
$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
name: exposed-monitoring-lb
namespace: istio-system
spec:
ports:
- port: 80
protocol: TCP
targetPort: 20001
selector:
app.kubernetes.io/instance: kiali
app.kubernetes.io/name: kiali
type: LoadBalancer
EOF
Wait for 30 seconds for the load balancer to reconfigure and run the following to get the load balancer URL:
$ export LB_URL=`kubectl get svc exposed-monitoring-lb -n istio-system -o=jsonpath={.status.loadBalancer.ingress[0].hostname}`
$ echo http://${LB_URL}
Open the URL in your browser.
Click Graph, and select the default Namespace.
This will bring up the service graph that shows service to service connections (you might need to post a new message in the UI).
Explore the Kiali console and move to the next step when you are ready.
To open the Zipkin dashboard, run the following:
$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
name: exposed-monitoring-lb
namespace: istio-system
spec:
ports:
- port: 80
protocol: TCP
targetPort: 9411
selector:
app: zipkin
type: LoadBalancer
EOF
Wait for 30 seconds for the load balancer to reconfigure and run the following to get the load balancer URL:
$ export LB_URL=`kubectl get svc exposed-monitoring-lb -n istio-system -o=jsonpath={.status.loadBalancer.ingress[0].hostname}`
$ echo http://${LB_URL}
Open the URL in your browser. You might need to open a new window or use another browser.
In Discover, click + and enter ServiceName → istio-ingressgateway, then click the search button on the right. This will show you all the requests that came through the Istio Ingress.
Click into one of the traces to see more information.
When you are done exploring Zipkin, delete the service:
$ kubectl delete svc exposed-monitoring-lb -n istio-system
Congratulations! You went through the basics of Istio. Let's add Mutual TLS and relevant capabilities to your deployment.
First, reset the Virtual Service:
$ kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
name: guestbook-ui
spec:
hosts:
- "*"
gateways:
- guestbook-ui-gateway
http:
- match:
- uri:
prefix: /
route:
- destination:
host: ui
subset: v1
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
name: ui
spec:
host: ui
subsets:
- name: v1
labels:
version: "1.0"
EOF
But wait, mTLS is already enabled by default, although in permissive mode, where services are still allowed to communicate with each other in plain text.
Now let's enforce it! Enable mTLS in the default namespace.
$ kubectl apply -n default -f - <<EOF
apiVersion: "security.istio.io/v1beta1"
kind: "PeerAuthentication"
metadata:
name: "default"
spec:
mtls:
mode: STRICT
EOF
Configure the corresponding Destination Rule.
$ kubectl apply -f - <<EOF
apiVersion: "networking.istio.io/v1beta1"
kind: "DestinationRule"
metadata:
name: "default"
namespace: "default"
spec:
host: "*.default.svc.cluster.local"
trafficPolicy:
tls:
mode: ISTIO_MUTUAL
EOF
Generally, health check requests are sent by the kubelet, which does not have an Istio-issued certificate. Therefore, when mutual TLS is enabled, these health check requests would fail.
Istio solves this problem by rewriting the application PodSpec readiness/liveness probes, so that the probe request is sent to the sidecar agent. The sidecar agent then redirects the request to the application, strips the response body, and returns only the response code.
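If you're curious, you can see this rewrite on one of the already-running UI v1 pods (assuming the container is named ui, as in the v2 manifest below); the readiness probe should point at the sidecar, something like /app-health/ui/readyz on port 15020, instead of the original /actuator/health:
$ kubectl get pod -l app=ui,version="1.0" \
-o jsonpath='{.items[0].spec.containers[?(@.name=="ui")].readinessProbe}{"\n"}'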
This feature is enabled by default in all built-in Istio configuration profiles. Let's try disabling it for our UI v2 deployment to see what will happen.
Create UI v2 deployment:
$ cd ~/aws-k8s-lab/kubernetes
$ cat <<EOF > ui-deployment-v2-rewrite.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: ui-v2
labels:
app: ui
version: "2.0"
spec:
replicas: 1
selector:
matchLabels:
app: ui
version: "2.0"
template:
metadata:
labels:
app: ui
version: "2.0"
annotations:
sidecar.istio.io/rewriteAppHTTPProbers: "false"
spec:
serviceAccountName: ui-sa
containers:
- name: ui
image: ${ACCOUNT_ID}.dkr.ecr.us-east-1.amazonaws.com/${REPO_PREFIX}-ui:v2
env:
- name: MESSAGE_HOST
value: http://message-service
- name: GUESTBOOK_HOST
value: http://guestbook-service
- name: REDIS_HOST
value: redis
readinessProbe:
initialDelaySeconds: 40
httpGet:
path: /
port: 8080
ports:
- name: http
containerPort: 8080
- name: metrics
containerPort: 9000
EOF
Install UI v2.
$ kubectl apply -f ui-deployment-v2-rewrite.yaml
Wait for about 60-70 seconds for v2 pods to start. You'll notice that it is failing a health check:
$ watch kubectl get pods -lapp=ui,version="2.0"
NAME READY STATUS RESTARTS AGE
...
ui-v2-699c5b46b6-df9wg 1/2 Running 0 2m23s
If, for some reason, you don't want to enable health check rewrites, there are a number of solutions to this. For example, you could use a command (exec) probe with wget, curl, or your own executable to check against the port.
Delete the V2 deployment.
$ kubectl delete deployment ui-v2
Create a new Namespace that doesn't have Istio automatic sidecar injection.
$ kubectl create ns noistio
Run an Ubuntu Pod in the noistio namespace.
$ kubectl -n noistio run -i --tty ubuntu --image=ubuntu -- bash
Connect to Message Service from the pod that doesn't have Istio mTLS.
root@ubuntu:/# apt update && apt install curl -y
root@ubuntu:/# curl http://message-service.default.svc.cluster.local/hello/test
curl: (56) Recv failure: Connection reset by peer
Exit and delete this pod:
root@ubuntu:/# exit
$ kubectl -n noistio delete pod ubuntu
Execute curl from the Ubuntu pod in the default namespace that has Istio enabled.
$ kubectl -n default run -i --tty ubuntu --image=ubuntu -- bash
root@ubuntu:/# apt update && apt install curl -y
root@ubuntu:/# curl http://message-service.default.svc.cluster.local/hello/test
{"greeting":"Hello test from message-service-545c84c666-nqfx6 with 1.0","version":"1.0","hostname":"message-service-545c84c666-nqfx6"}
root@ubuntu:/# exit
It works in the default namespace because the Istio proxy is automatically configured with the certificates.
$ kubectl exec -ti ubuntu -c istio-proxy -- openssl s_client -showcerts -connect message-service.default:80
CONNECTED(00000005)
depth=1 O = cluster.local
verify error:num=19:self signed certificate in certificate chain
---
Certificate chain
0 s:
i:O = cluster.local
...
Let's examine these certificates!
$ mkdir certs
$ cd certs
$ kubectl exec -ti ubuntu -c istio-proxy -- openssl s_client -showcerts -connect message-service.default:80 > message-cert.txt
$ sed -n '/-----BEGIN CERTIFICATE-----/{:start /-----END CERTIFICATE-----/!{N;b start};/.*/p}' message-cert.txt > certs.pem
$ awk 'BEGIN {counter=0;} /BEGIN CERT/{counter++} { print > "proxy-cert-" counter ".pem"}' < certs.pem
Take a look at the certificate in more detail.
$ sudo yum install openssl -y
$ openssl x509 -in proxy-cert-1.pem -text
Certificate:
Data:
Version: 3 (0x2)
Serial Number:
22:7c:a8:c9:24:f3:7c:9c:ea:a9:54:53:da:e1:e7:1f
Signature Algorithm: sha256WithRSAEncryption
Issuer: O = cluster.local
...
Subject:
Subject Public Key Info:
...
Exponent: 65537 (0x10001)
X509v3 extensions:
X509v3 Key Usage: critical
Digital Signature, Key Encipherment
X509v3 Extended Key Usage:
TLS Web Server Authentication, TLS Web Client Authentication
X509v3 Basic Constraints: critical
CA:FALSE
X509v3 Subject Alternative Name: critical
URI:spiffe://cluster.local/ns/default/sa/message-sa
...
Notice the Subject Alternative Name. This name is tied to the service account associated with the pod (that we assigned in the deployment file).
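As a cross-check, istioctl can also show the workload certificates Envoy is serving, including their validity windows (run this from Cloud Shell while the Ubuntu pod still exists):
$ istioctl proxy-config secret ubuntu.default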
Delete the Ubuntu pod.
$ kubectl -n default delete pod ubuntu
Using Istio, you can easily set up access control for workloads in your mesh. First, you should configure a simple deny-all policy that rejects all requests to the workload, and then grant more access to the workload gradually and incrementally.
$ kubectl apply -f - <<EOF
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
name: deny-all
namespace: default
spec:
{}
EOF
Point your browser at the UI. You should see an "RBAC: access denied" error, that shows that the configured deny-all policy is working as intended, and Istio doesn't have any rules that allow any access to workloads in the mesh.
Run the following command to create a ui-viewer policy to allow access with GET and POST methods to the UI service. The policy does not set the from field in the rules, which means all sources are allowed, effectively allowing all users and services:
$ kubectl apply -f - <<EOF
apiVersion: "security.istio.io/v1beta1"
kind: "AuthorizationPolicy"
metadata:
name: "ui-viewer"
namespace: default
spec:
selector:
matchLabels:
app: ui
rules:
- to:
- operation:
methods: ["GET", "POST"]
EOF
Wait 10-20 seconds and refresh the UI. You should see a "no healthy upstream" error, indicating that something happened to the pods. This is because the UI can't connect to Redis. Let's fix it before moving forward. Run the following command to create the redis-tcp-policy to allow the UI service, which issues requests using the cluster.local/ns/default/sa/ui-sa service account, to access Redis over a TCP connection. We'll review what's going on next.
$ kubectl apply -f - <<EOF
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
name: redis-tcp-policy
namespace: default
spec:
selector:
matchLabels:
app: redis
action: ALLOW
rules:
- from:
- source:
principals: ["cluster.local/ns/default/sa/ui-sa"]
- to:
- operation:
ports: ["6379"]
EOF
Refresh your UI and see that the UI is working, but you get a "Guestbook Service is currently unavailable" message.
We have created service accounts for the different services. For example, for the UI we have the following service account in ui-deployment.yaml:
apiVersion: v1
kind: ServiceAccount
metadata:
name: ui-sa
labels:
account: ui
And it is connected to the pod:
apiVersion: apps/v1
kind: Deployment
metadata:
name: ui
...
spec:
...
template:
...
spec:
serviceAccountName: ui-sa
containers:
- name: ui
...
We can use it to authorize services in the mesh. First, run the following command to create the mysql-tcp-policy to allow the Guestbook service, which issues requests using the cluster.local/ns/default/sa/guestbook-sa service account, to access MySQL over a TCP connection.
$ kubectl apply -f - <<EOF
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
name: mysql-tcp-policy
namespace: default
spec:
selector:
matchLabels:
app: mysql
action: ALLOW
rules:
- from:
- source:
principals: ["cluster.local/ns/default/sa/guestbook-sa"]
- to:
- operation:
ports: ["3306"]
EOF
Run the following command to create the guestbook-viewer policy to allow the UI service, which issues requests using the cluster.local/ns/default/sa/ui-sa service account, to access the Guestbook service through GET and POST methods.
$ kubectl apply -f - <<EOF
apiVersion: "security.istio.io/v1beta1"
kind: "AuthorizationPolicy"
metadata:
name: "guestbook-viewer"
namespace: default
spec:
selector:
matchLabels:
app: guestbook-service
rules:
- from:
- source:
principals: ["cluster.local/ns/default/sa/ui-sa"]
to:
- operation:
methods: ["GET", "POST"]
EOF
Finally, run the following command to create the message-viewer policy to allow the UI service, which issues requests using the cluster.local/ns/default/sa/ui-sa service account, to access the Message service through the GET method.
$ kubectl apply -f - <<EOF
apiVersion: "security.istio.io/v1beta1"
kind: "AuthorizationPolicy"
metadata:
name: "message-viewer"
namespace: default
spec:
selector:
matchLabels:
app: message-service
rules:
- from:
- source:
principals: ["cluster.local/ns/default/sa/ui-sa"]
to:
- operation:
methods: ["GET"]
EOF
Go to your browser and check that now the application is working again.
Let's see how to set up an Istio authorization policy to enforce access based on a JSON Web Token (JWT). An Istio authorization policy supports both string-typed and list-of-string typed JWT claims.
First, we need to create a Request Authentication policy to tell Istio which JWT token issuer is supported:
$ kubectl apply -f - <<EOF
apiVersion: "security.istio.io/v1beta1"
kind: "RequestAuthentication"
metadata:
name: "ui-jwt-auth"
namespace: default
spec:
selector:
matchLabels:
app: ui
jwtRules:
- issuer: "testing@secure.istio.io"
jwksUri: "https://raw.githubusercontent.com/istio/istio/release-1.17/security/tools/jwt/samples/jwks.json"
EOF
Let's try sending a request from a Curl pod with an invalid JWT.
$ kubectl run curl --image=curlimages/curl --restart=Never --command sleep infinity
$ kubectl exec -ti curl -c curl -- curl http://ui.default.svc.cluster.local -s -o /dev/null -H "Authorization: Bearer invalidToken" -w "%{http_code}\n"
401
Now we need to enforce the authentication. The following command creates the ui-viewer authorization policy for the UI service. The policy requires all requests to have a valid JWT with requestPrincipal set to testing@secure.istio.io/testing@secure.istio.io. Istio constructs the requestPrincipal by combining the iss and sub claims of the JWT with a / separator.
$ kubectl apply -f - <<EOF
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
name: ui-viewer
namespace: default
spec:
selector:
matchLabels:
app: ui
action: ALLOW
rules:
- from:
- source:
requestPrincipals: ["testing@secure.istio.io/testing@secure.istio.io"]
EOF
Next, get the JWT that sets the iss and sub keys to the same value, testing@secure.istio.io. This causes Istio to generate the requestPrincipal attribute with the value testing@secure.istio.io/testing@secure.istio.io:
$ TOKEN=$(curl https://raw.githubusercontent.com/istio/istio/release-1.17/security/tools/jwt/samples/demo.jwt -s) && echo "$TOKEN" | cut -d '.' -f2 - | base64 --decode -
{"exp":4685989700,"foo":"bar","iat":1532389700,"iss":"testing@secure.istio.io","sub":"testing@secure.istio.io"}
Verify that a request with a valid JWT is allowed:
$ kubectl exec -ti curl -c curl -- curl http://ui.default.svc.cluster.local -s -o /dev/null -H "Authorization: Bearer $TOKEN" -w "%{http_code}\n"
200
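As a final sanity check, a request without any token should now be rejected as well; expect a 403, since no request principal can be derived and the ALLOW policy therefore doesn't match:
$ kubectl exec -ti curl -c curl -- curl http://ui.default.svc.cluster.local -s -o /dev/null -w "%{http_code}\n"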
To delete the cluster, execute the following command.
$ kops delete cluster --name $NAME --yes
When the cluster is removed, delete the bucket.
$ aws s3api delete-bucket \
--bucket ${STATE_BUCKET} \
--region ${REGION}
Delete the ECR repositories:
for name in "ui" "message" "guestbook" "kpack"
do
/usr/local/bin/aws ecr delete-repository \
--repository-name "${REPO_PREFIX}"-"${name}" \
--region "${REGION}" \
--force
done
Thank you! :)