Samples¶
To get up and running with the LiteSpeed Ingress Controller, you will need to configure it to access your applications running within the cluster. Most applications are managed with Helm charts, and the LiteSpeed Ingress Controller is no exception. However, scripts are used here so you can see each step in the process and, if you choose, perform those steps manually and avoid the complexities of Helm.
These samples install a simple backend with all of the plumbing you need to access it. The sample scripts and YAML files are distributed in a single compressed TAR file in the LiteSpeed Helm distribution directory. They require that you have access to your Kubernetes system with kubectl and that the LiteSpeed Ingress Controller is up and running.
- olsup.sh and olsdown.sh are a simple set of scripts which bring up a containerized OpenLiteSpeed as a backend, accessible through the load balancer.
- examplesup.sh and examplesdown.sh are a modified version of the samples provided with the NGINX load balancer, using the containerized echoheaders backend.
Getting the Samples¶
The easiest way to get the samples is to download the TAR file and extract it directly.
- To download the file, visit our GitHub Helm samples directory and then download the file ls-k8s-webadc-VERSION.tgz (for example: ls-k8s-webadc-0.1.18.tgz). Copy that file to a directory you'd like to extract it to. Or, if you're using wget, the command is wget https://github.com/litespeedtech/helm-chart/raw/main/helm-chart-sources/ls-k8s-webadc/samples/ls-k8s-webadc-0.1.18.tgz for the version 0.1.18 file.
- To extract the file, in a terminal, cd to the directory you wish to extract it to and untar it. For example: tar xf ls-k8s-webadc-0.1.18.tgz. This will create a ls-k8s-webadc directory beneath it with some simple sample scripts and sample .yaml files.
Running the samples¶
All of the samples must be run in an environment where you can run kubectl. You can check this by confirming that you can see all of the existing resources:
$ kubectl get all -A
This should show you a bunch of resources.
All of the samples require that you use the namespace sandbox. If this namespace does not already exist, you should create it:
$ kubectl create namespace sandbox
If you choose to use a different namespace, you will need to modify the specified NAMESPACE name in olsenv.sh.
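If you are not sure whether the namespace already exists, you can check for it first; this command reports NotFound if it has not been created yet:
$ kubectl get namespace sandbox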
olsup.sh and olsdown.sh¶
This sample installs a preconfigured, cloud-based version of OpenLiteSpeed with limited TLS support into your cluster, and can uninstall it completely. This lets you see how each step is performed. The olsup.sh script looks like this:
#!/bin/bash
DIR=$( dirname "$0" )
. "$DIR/olsenv.sh"
echo "Create the OLS sample"
echo "Create the TLS secret. In production you would use previously created certificate and key files."
openssl genrsa -out "${HOSTNAME}.key" 2048
openssl req -new -key "${HOSTNAME}.key" -out "${HOSTNAME}.csr" -subj "/CN=${HOSTNAME}"
openssl x509 -req -days 3650 -in "${HOSTNAME}.csr" -signkey "${HOSTNAME}.key" -out "${HOSTNAME}.crt"
echo "Make Kubernetes aware of your certificate"
kubectl create secret tls $HOSTNAME --cert "${HOSTNAME}.crt" --key "${HOSTNAME}.key" -n $NAMESPACE
echo "Bring up the OLS Ingress environment"
kubectl create -f examples/ols-backend.yaml -n $NAMESPACE
kubectl create -f examples/ols-backend-svc.yaml -n $NAMESPACE
kubectl create -f examples/ols-backend-ingress.yaml -n $NAMESPACE
echo "Get the IP address to access it"
kubectl get ing ols-ingress -n $NAMESPACE
#curl https://IPADDRESS/ -H 'Host: ols-ingress.com' -k
olsenv.sh¶
olsenv.sh is sourced as a variables file so you can define NAMESPACE and HOSTNAME in a common place, without having to change the script.
Your application will need to run in a specified namespace. The LiteSpeed Ingress Controller is set up in Helm to run in the ls-k8s-webadc namespace. It is good practice to have all of the components of your backend in the same namespace; in our example we chose sandbox. Note that every kubectl command includes a -n $NAMESPACE parameter so that all of the components run in the same namespace.
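A minimal olsenv.sh would simply set those two variables. The values below match the defaults used throughout this sample; the file shipped in the TAR may differ slightly:
#!/bin/bash
# Namespace all of the sample resources are created in; change this if you use a different namespace.
NAMESPACE="sandbox"
# Host name used for the certificate files, the TLS secret, and the ingress rule.
HOSTNAME="ols-ingress.com"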
Creating the certificate and key¶
In a real-world application, you would purchase a certificate and key from a well-known provider and use those in the next step. However, to provide a working sample, openssl is used to generate a key file (ols-ingress.com.key) and a self-signed certificate (ols-ingress.com.crt), which are used in the next step.
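If you would like to confirm what was generated, you can inspect the self-signed certificate with openssl (assuming the default ols-ingress.com host name):
$ openssl x509 -in ols-ingress.com.crt -noout -subject -dates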
Creating the TLS secret for your certificate and key files¶
kubectl create secret tls $HOSTNAME --cert "${HOSTNAME}.crt" --key "${HOSTNAME}.key" -n $NAMESPACE
This command creates a Kubernetes TLS secret. In the sample script, the two files created above are used directly to create the secret named ols-ingress.com in the sandbox namespace.
Creating the Deployment¶
.yaml files are used to specify operations and detailed parameters. The examples/ols-backend.yaml file creates the ols-backend deployment, which is used in subsequent steps:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ols-backend
spec:
  selector:
    matchLabels:
      app: ols-backend
  replicas: 2
  template:
    metadata:
      labels:
        app: ols-backend
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - name: ols-backend
        image: litespeedtech/ols-backend
        imagePullPolicy: Always
        livenessProbe:
          httpGet:
            path: /healthz
            port: 80
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        ports:
        - containerPort: 80
        - containerPort: 443
Some interesting parameters are:
metadata.name: ols-backend
(metadata followed by name, indented) is the name used by services or other Kubernetes resources to access the pods as a group.
replicas: 2
The number of instances (pods) of this image that will be created, each on a separate node. If the number of available nodes is less than 2, the pods will be started on a single node. You may want to use more replicas for extra scalability/redundancy, but there is additional cost from the cloud provider.
image: litespeedtech/ols-backend
This is the image on the container repository that is pulled to create the pod.
ports.containerPort: 80 and ports.containerPort: 443
The ports on the pod that can be accessed from outside.
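Once the deployment has been created, you can confirm that both replicas are running (a quick check, assuming the sandbox namespace used by the sample):
$ kubectl get deployment ols-backend -n sandbox
$ kubectl get pods -n sandbox -l app=ols-backend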
Creating the Service¶
The examples/ols-backend-svc.yaml file creates the service, which can then be exposed by an ingress. It is quite a small file:
apiVersion: v1
kind: Service
metadata:
  name: ols-backend-svc
spec:
  selector:
    app: ols-backend
  ports:
  - port: 80
    targetPort: 80
    name: ols-backend-http
  - port: 443
    targetPort: 443
    name: ols-backend-https
Some interesting parameters are:
metadata.name: ols-backend-svc
The name used by ingresses or other Kubernetes resources to access the service.
selector.app: ols-backend
The name of the existing Deployment, which must already be running.
ports.port: 80
The external port of the pod; it usually matches containerPort above. Parameters with a leading dash are repeated parameters, and ports.port: 443 is indeed repeated to indicate an additional set of ports.
ports.targetPort: 80
The port available to the application inside the pod. Again, ports.targetPort: 443 is a repeated instance of that parameter to indicate a separate port.
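After creating the service, you can confirm that it has picked up the backend pods as endpoints (again assuming the sandbox namespace):
$ kubectl get service ols-backend-svc -n sandbox
$ kubectl get endpoints ols-backend-svc -n sandbox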
Creating the Ingress¶
The Ingress is the object that is exposed to the LiteSpeed Ingress Controller. The examples/ols-backend-ingress.yaml file creates the ingress, which the LiteSpeed Ingress Controller uses to provide access from the wider internet:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ols-ingress
  annotations:
    kubernetes.io/ingress.class: litespeedtech.com/lslbd
spec:
  #ingressClassName: ls-k8s-webadc
  tls:
  - secretName: ols-ingress.com
  rules:
  - host: ols-ingress.com
    http:
      paths:
      - path: /
        pathType: ImplementationSpecific
        backend:
          service:
            name: ols-backend-svc
            port:
              number: 80
Some interesting parameters are:
metadata.name: ols-ingress
The name used by the LiteSpeed Ingress Controller or other Kubernetes resources to access the ingress.
annotations.kubernetes.io/ingress.class: litespeedtech.com/lslbd
Lets the LiteSpeed Ingress Controller know that this is an ingress it is to process. This allows you to have multiple load balancers in the same cluster. Note that you can also use spec.ingressClassName: ls-k8s-webadc as an alternative method.
spec.ingressClassName: ls-k8s-webadc
This is commented out in the example above, but it is an alternative way to specify the load balancer that is to process the request if you do not specify the annotation.
tls.secretName: ols-ingress.com
The name of the TLS secret created above.
rules.host: ols-ingress.com
The name of the domain to export.
path: /
The path exported by the ingress. The combination of rules.host and path must match for a request to be routed.
pathType: ImplementationSpecific
The LiteSpeed Ingress Controller treats all pathType specifications as ImplementationSpecific at this time, and they are routed as if they were of type Prefix with a trailing slash (whether specified or not).
service.name: ols-backend-svc
Must match the metadata.name of the Service definition, which must already exist.
port.number: 80
Must match a port number of the service. Note that even though the ingress does not define a port 443, the LiteSpeed Ingress Controller will still service HTTPS requests because a secret is defined.
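You can confirm how the ingress was interpreted, including the TLS secret and the backend it routes to, with kubectl describe (assuming the sandbox namespace):
$ kubectl describe ingress ols-ingress -n sandbox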
Testing it¶
The last two lines in the olsup.sh script are intended to help you test the load balancer's access to the backend.
kubectl get ing ols-ingress -n $NAMESPACE
This resolves to kubectl get ingress ols-ingress -n sandbox in the example and shows the external IP address assigned by Kubernetes. This may take a few manual iterations, as Kubernetes may need several minutes to assign an address.
In our environment we ran:
$ kubectl get ing ols-ingress -n sandbox
NAME CLASS HOSTS ADDRESS PORTS AGE
ols-ingress <none> ols-ingress.com 192.0.2.0 80, 443 21m
#curl https://IPADDRESS/ -H 'Host: ols-ingress.com' -k
In our environment we ran curl https://192.0.2.0/ -H 'Host: ols-ingress.com' -k, which displays the default OpenLiteSpeed banner screen. You specify -H 'Host: ols-ingress.com' so that the Host header entry is transmitted, as we are not yet able to use the domain name. You specify -k to accept a self-signed certificate.
To run using the proper domain name, see your Cloud Provider's documentation for the steps to add a DNS name using the address assigned by Kubernetes.
Taking it down with olsdown.sh¶
The olsdown.sh
script looks like this:
#!/bin/bash
DIR=$( dirname "$0" )
. "$DIR/olsenv.sh"
echo "Delete the OLS Sample"
kubectl delete ingress ols-ingress -n $NAMESPACE
kubectl delete services ols-backend-svc -n $NAMESPACE
kubectl delete deployment ols-backend -n $NAMESPACE
kubectl delete secret $HOSTNAME -n $NAMESPACE
rm "${HOSTNAME}.crt"
rm "${HOSTNAME}.csr"
rm "${HOSTNAME}.key"
The steps are simpler here:
olsenv.sh
Again, you need to source the same variables file used to create the objects.
kubectl delete ingress ols-ingress -n $NAMESPACE
Deletes the ingress.
kubectl delete services ols-backend-svc -n $NAMESPACE
Deletes the service.
kubectl delete deployment ols-backend -n $NAMESPACE
Deletes the deployment.
kubectl delete secret $HOSTNAME -n $NAMESPACE
Deletes the secret.
rm $HOSTNAME.crt, rm $HOSTNAME.csr and rm $HOSTNAME.key
Delete the certificate and key files that were created, completing the cleanup.
With olsdown.sh, some of the objects may be in the Terminating state for a short while, but they will go away quickly.
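If you want to confirm that everything was removed, you can list what remains in the namespace; the ingress and secret are not part of kubectl get all, so they are queried separately:
$ kubectl get all -n sandbox
$ kubectl get ingress,secrets -n sandbox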
examplesup.sh and examplesdown.sh¶
This sample is a modified version of the sample that comes with the NGINX load balancer, tailored for the LiteSpeed Ingress Controller. Each step and its interesting parameters are documented here.
This script has a number of differences from the OLS scripts above:
- It uses a number of kubectl commands with command line parameters rather than .yaml files. This may help make the process a bit easier to understand and shows a different method of creating backends.
- It demonstrates the Ingress method of a simple fanout, where a single domain can use separate paths to route to separate backends.
The examplesup.sh script looks like this:
#!/bin/bash
NAMESPACE="sandbox"
kubectl create deployment echoheadersx --image=k8s.gcr.io/echoserver:1.10 -n $NAMESPACE
kubectl create deployment echoheadersy --image=k8s.gcr.io/echoserver:1.10 -n $NAMESPACE
kubectl expose deployment echoheadersx --port=80 --target-port=8080 --name=echoheaders-x -n $NAMESPACE
kubectl expose deployment echoheadersy --port=80 --target-port=8080 --name=echoheaders-y -n $NAMESPACE
kubectl create -f examples/ingress.yaml -n $NAMESPACE
kubectl get ing echomap -n $NAMESPACE
Create Deployment¶
The first line of the script sets the namespace, as in the OLS scripts above. The next two lines create the deployments without using .yaml files, using only command line parameters.
kubectl create deployment echoheadersx --image=k8s.gcr.io/echoserver:1.10 -n $NAMESPACE
kubectl create deployment echoheadersy --image=k8s.gcr.io/echoserver:1.10 -n $NAMESPACE
These two lines create two separate deployments, taking the place of examples/ols-backend.yaml:
- The deployments are named echoheadersx and echoheadersy respectively, replacing the metadata.name parameters in the .yaml file.
- They use the container image k8s.gcr.io/echoserver:1.10, taking the place of the image parameter.
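kubectl create deployment generates the same kind of Deployment object that the OLS .yaml file defined. If you want to compare the two approaches, you can dump the generated manifest (assuming the sandbox namespace):
$ kubectl get deployment echoheadersx -n sandbox -o yaml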
Expose Deployment¶
Exposing the deployment performs part of the work done by the service definition in examples/ols-backend-svc.yaml.
kubectl expose deployment echoheadersx --port=80 --target-port=8080 --name=echoheaders-x -n $NAMESPACE
kubectl expose deployment echoheadersy --port=80 --target-port=8080 --name=echoheaders-y -n $NAMESPACE
- name=echoheaders-x and name=echoheaders-y respectively represent the metadata.name used in the .yaml files to name the service.
- The created deployment names echoheadersx and echoheadersy are used the way selector.app is used in examples/ols-backend-svc.yaml.
- port=80 (the external port of the pod) and target-port=8080 (the internal port of the pod) represent the port and targetPort values in the ports specification of examples/ols-backend-svc.yaml.
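For comparison, the first kubectl expose command above is roughly equivalent to creating a Service with a manifest like the following. This is only a sketch of what kubectl generates, not a file shipped with the samples:
apiVersion: v1
kind: Service
metadata:
  name: echoheaders-x
spec:
  selector:
    app: echoheadersx
  ports:
  - port: 80
    targetPort: 8080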
Creating the Ingress¶
Creating the ingress makes the service and pods available to the load balancer. Here is examples/ingress.yaml:
# An Ingress with 2 hosts and 3 or 4 endpoints
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echomap
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /foo
        pathType: ImplementationSpecific
        backend:
          service:
            name: echoheaders-x
            port:
              number: 80
      - path: /bar
        pathType: ImplementationSpecific
        backend:
          service:
            name: echoheaders-x
            port:
              number: 80
      - path: /com
        pathType: ImplementationSpecific
        backend:
          service:
            name: echoheaders-x
            port:
              number: 80
  - host: bar.baz.com
    http:
      paths:
      - path: /bar
        pathType: ImplementationSpecific
        backend:
          service:
            name: echoheaders-y
            port:
              number: 80
      - path: /bar2
        pathType: ImplementationSpecific
        backend:
          service:
            name: echoheaders-y
            port:
              number: 80
      - path: /foo/bar
        pathType: ImplementationSpecific
        backend:
          service:
            name: echoheaders-x
            port:
              number: 80
      - path: /fooby
        pathType: ImplementationSpecific
        backend:
          service:
            name: echoheaders-x
            port:
              number: 80
It is longer than examples/ols-backend-ingress.yaml because it demonstrates two concepts:
- Creation of completely separate domains: foo.bar.com and bar.baz.com, which use the same service echoheaders-x. Note that none of the paths include the root, so if something undefined is requested, such as foo.bar.com/not-defined, the load balancer itself will return a 404 Not Found error.
- Simple fanout. In the bar.baz.com domain, separate paths point to separate backend services: bar.baz.com/bar/ and bar.baz.com/bar2/ both point to service echoheaders-y, while bar.baz.com/foo/bar/ and bar.baz.com/fooby/ both point to service echoheaders-x.
Testing the example¶
This example does not create a certificate or an accessible TLS secret, so it must be run with HTTP.
In our environment we ran:
$ kubectl get ing echomap -n sandbox
NAME CLASS HOSTS ADDRESS PORTS AGE
echomap <none> foo.bar.com,bar.baz.com 143.244.212.14 80 17h
#curl http://IPADDRESS/foo/abc -H 'Host: foo.bar.com'
In our environment we ran curl http://143.244.212.14/foo/abc -H 'Host: foo.bar.com', which displays the output of the echoheaders screen. You specify -H 'Host: foo.bar.com' so that the Host header entry is transmitted, as we are not yet able to use the domain name. Note that the echoheaders output includes Hostname: echoheadersx (with some trailing characters), indicating that it is using an echoheadersx pod. We add the abc suffix because the specified path is always assumed to be a directory, and its contents are what is displayed.
#curl http://IPADDRESS/abc -H 'Host: foo.bar.com'
We ran curl http://143.244.212.14/abc -H 'Host: foo.bar.com', which returns a 404 Not Found error, as there is no definition for the root.
#curl http://IPADDRESS/bar/abc -H 'Host: bar.baz.com'
We ran curl http://143.244.212.14/bar/abc -H 'Host: bar.baz.com', which displays Hostname: echoheadersy.
#curl http://IPADDRESS/foo/bar/abc -H 'Host: bar.baz.com'
We ran curl http://143.244.212.14/foo/bar/abc -H 'Host: bar.baz.com', which displays Hostname: echoheadersx.
To run using the proper domain name, see your Cloud Provider's documentation for the steps to add a DNS name using the address assigned by Kubernetes.
Taking it down with examplesdown.sh¶
Run examplesdown.sh to take down the example:
#!/bin/bash
NAMESPACE="sandbox"
kubectl delete ingress echomap -n $NAMESPACE
kubectl delete services echoheaders-x -n $NAMESPACE
kubectl delete services echoheaders-y -n $NAMESPACE
kubectl delete deployment echoheadersx -n $NAMESPACE
kubectl delete deployment echoheadersy -n $NAMESPACE
This basically reverses the steps used in bringing up the example:
- Delete the ingress with kubectl delete ingress echomap -n $NAMESPACE
- Delete the two service definitions with kubectl delete services echoheaders-x -n $NAMESPACE and kubectl delete services echoheaders-y -n $NAMESPACE
- Delete the two deployments with kubectl delete deployment echoheadersx -n $NAMESPACE and kubectl delete deployment echoheadersy -n $NAMESPACE
Running LiteSpeed Ingress Controller without Helm¶
Two scripts are provided to bring the LiteSpeed Ingress Controller up and down without Helm. lsup.sh brings up the controller, lsdown.sh takes it down.
lsup.sh¶
The load balancer is configured to run in the namespace ls-k8s-webadc. If this namespace does not already exist, you should create it:
$ kubectl create namespace ls-k8s-webadc
The lsup.sh script looks like this:
#!/bin/bash
echo "Bring up the LiteSpeed Ingress controller without using Helm"
kubectl create -f examples/default/service-account.yaml -n ls-k8s-webadc
kubectl create -f examples/default/rc-default.yaml -n ls-k8s-webadc
kubectl expose deployment ls-k8s-webadc --type=LoadBalancer --name=ls-k8s-webadc -n ls-k8s-webadc
It creates the ServiceAccount, the ClusterRole, and the ClusterRoleBinding, which are all necessary for a load balancer, using service-account.yaml. Other than the namespace, if you wish to use a different one, this .yaml file should not be modified.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ls-k8s-webadc
  namespace: ls-k8s-webadc
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: ls-k8s-webadc
rules:
- apiGroups: [""]
  resources: ["nodes", "pods", "endpoints", "configmaps", "secrets", "services"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["networking.k8s.io"]
  resources: ["ingresses", "ingressclasses"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["networking.k8s.io"]
  resources: ["ingresses/status"]
  verbs: ["update"]
- apiGroups: ["discovery.k8s.io"]
  resources: ["endpointslices"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["events"]
  verbs: ["create", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: ls-k8s-webadc
subjects:
- kind: ServiceAccount
  name: ls-k8s-webadc
  namespace: ls-k8s-webadc
roleRef:
  kind: ClusterRole
  name: ls-k8s-webadc
  apiGroup: rbac.authorization.k8s.io
The second kubectl create command creates the Deployment with rc-default.yaml. In this .yaml file you can change a number of options, including:
- The exposed ports, particularly the containerPort values specified in containers.ports. Note that port 7090 is required, as it is used in communications between the Kubernetes environment and the LiteSpeed load balancer itself, and should not be changed.
- The parameters passed to the LiteSpeed Ingress Controller in containers.args.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ls-k8s-webadc
  namespace: ls-k8s-webadc
  labels:
    k8s-app: ls-k8s-webadc
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: ls-k8s-webadc
  template:
    metadata:
      labels:
        k8s-app: ls-k8s-webadc
      name: ls-k8s-webadc
    spec:
      serviceAccountName: ls-k8s-webadc
      terminationGracePeriodSeconds: 60
      hostNetwork: true
      containers:
      - image: litespeedtech/ls-k8-staging
        name: ls-k8s-webadc
        imagePullPolicy: Always
        livenessProbe:
          httpGet:
            path: /healthz
            # when changing this port, also specify it using --healthz-port in ls-k8s-webadc args.
            port: 11972
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 30
        # use downward API
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        ports:
        - containerPort: 80
          hostPort: 80
        - containerPort: 443
          hostPort: 443
        - containerPort: 7090
          hostPort: 7090
        args:
        - /ls-k8s-up.sh
        - --healthz-port=11972
        - --allow-internal-ip=true
        - --lslb-wait-timeout=1200
        - --lslb-enable-ocsp-stapling=true
The last line exposes the load balancer and activates it.
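Once the service has been exposed, you can check that the controller pod is running and that the LoadBalancer service has been assigned an external IP; the address may take a few minutes to appear with some cloud providers:
$ kubectl get pods -n ls-k8s-webadc
$ kubectl get service ls-k8s-webadc -n ls-k8s-webadc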
lsdown.sh¶
The lsdown.sh script deletes the resources created in lsup.sh, which takes down the controller.
#!/bin/bash
kubectl delete services ls-k8s-webadc -n ls-k8s-webadc
kubectl delete serviceaccount ls-k8s-webadc -n ls-k8s-webadc
kubectl delete clusterrole ls-k8s-webadc -n ls-k8s-webadc
kubectl delete clusterrolebinding ls-k8s-webadc -n ls-k8s-webadc
kubectl delete deployments ls-k8s-webadc -n ls-k8s-webadc
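As with the other samples, you can confirm that the controller's resources are gone; after the script completes, these commands should show that nothing remains:
$ kubectl get all -n ls-k8s-webadc
$ kubectl get clusterrole,clusterrolebinding ls-k8s-webadc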