【kubernetes101】Note1. How to access the application in k8s cluster?

Prerequisite

Create a dedicated namespace to avoid conflicts with other workloads. The new namespace is defined in the file namespace-dev.json:

{
  "kind": "Namespace",
  "apiVersion": "v1",
  "metadata": {
    "name": "note1",
    "labels": {
      "name": "note1"
    }
  }
}

Apply the manifest to your Kubernetes cluster:

$ kubectl apply -f namespace-dev.json 
namespace/note1 created

Verify the new namespace with kubectl:

$ kubectl get namespaces --show-labels
NAME          STATUS   AGE     LABELS
default       Active   2d13h   <none>
note1         Active   22s     name=note1
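Optionally, to avoid passing `-n note1` on every subsequent command, you can make note1 the default namespace for the current kubectl context. This is a convenience step, not required for the rest of this note:

```shell
# Make note1 the default namespace for the current context
kubectl config set-context --current --namespace=note1

# Confirm the change
kubectl config view --minify | grep namespace:
```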

Basic Deployment on K8S

In this note, the discussion is based on a basic “django-tensorflow-inference” image, which listens on container port 8080.

apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: tf-inference
  namespace: note1
spec:
  selector:
    matchLabels:
      app: tf-inference
  replicas: 2 # tells deployment to run 2 pods matching the template
  template:
    metadata:
      labels:
        app: tf-inference
    spec:
      containers:
      - name: tf-inference
        image: gcr.io/capstoneproject-216018/p12starter:latest
        ports:
        - containerPort: 8080
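Assuming the manifest above is saved as deployment.yaml (the filename here is an assumption; substitute whatever name you used), apply it into the note1 namespace:

```shell
# Create the Deployment in the note1 namespace
kubectl apply -f deployment.yaml -n note1
```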

The Deployment launches two pods in a ReplicaSet. After the pods are created, verify the resources with kubectl:

$ kubectl get all -n note1
NAME                                READY   STATUS    RESTARTS   AGE
pod/tf-inference-7c85d979b5-l8nct   1/1     Running   0          12s
pod/tf-inference-7c85d979b5-rlb46   1/1     Running   0          12s

NAME                           DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/tf-inference   2         2         2            2           12s

NAME                                      DESIRED   CURRENT   READY   AGE
replicaset.apps/tf-inference-7c85d979b5   2         2         2       12s

In the next step, we expose the deployment both inside and outside the cluster, so that users can access the application running in Kubernetes.

Port Forwarding

Port forwarding forwards requests from a local port to a container port. It is important to note that by default the forwarded port is bound to 127.0.0.1, not 0.0.0.0, so the application is not reachable from the external network this way.

$ kubectl port-forward deployment.apps/tf-inference 30100:8080 -n note1
Forwarding from 127.0.0.1:30100 -> 8080
Forwarding from [::1]:30100 -> 8080

Connections made to local port 30100 are forwarded to port 8080 of a pod running the tf-inference server. With this connection in place, you can use your local workstation to debug the application running in the pod.

$ curl 'http://127.0.0.1:30100/predict?images=https://s3.amazonaws.com/glikson-public/DLL/data/inf5.tgz'
{"inference": [["cat.pexels-photo-209037.jpeg", 0.4113174378871918], ["cat.pexels-photo-257532.jpeg", 0.632137656211853], ["cat.pexels-photo-259803.jpeg", 0.0010500261560082436], ["cat.pexels-photo-315582.jpeg", 0.0034616694319993258], ["cat.pexels-photo-326875.jpeg", 0.022260863333940506], ["dog.pexels-photo-247522.jpeg", 0.41857561469078064], ["dog.pexels-photo-257540.jpeg", 0.9962271451950073], ["dog.pexels-photo-356378.jpeg", 0.9811270833015442], ["dog.pexels-photo-374825.jpeg", 0.9923521876335144], ["dog.pexels-photo-374906.jpeg", 0.8952429294586182]]}
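If you do want to accept port-forwarded connections from other hosts, recent kubectl versions support an `--address` flag. Binding to 0.0.0.0 exposes the forwarded port on all interfaces of the machine running kubectl, so use it with care:

```shell
# Bind the forwarded port on all interfaces instead of only 127.0.0.1
kubectl port-forward deployment.apps/tf-inference 30100:8080 -n note1 --address 0.0.0.0
```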

One additional note: the local machine can be any client able to run kubectl against the cluster; it does not have to be the k8s master or a worker node.

Node Port

A NodePort Service is a Kubernetes Service object that external clients can use to access an application running in a cluster: it opens a static port on every node, so the application is reachable at <NodeIP>:<NodePort>. The Service also load-balances across the application’s running instances, here the two pods.
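A minimal NodePort Service for the Deployment above might look like the following sketch. The service name and the nodePort value are assumptions; nodePort must fall in the cluster’s node-port range (30000–32767 by default), and may be omitted to let Kubernetes pick one automatically:

```
apiVersion: v1
kind: Service
metadata:
  name: tf-inference        # assumed name, matching the Deployment
  namespace: note1
spec:
  type: NodePort
  selector:
    app: tf-inference       # matches the pod label from the Deployment template
  ports:
  - port: 8080              # Service port inside the cluster
    targetPort: 8080        # containerPort of the pods
    nodePort: 30100         # static port opened on every node (assumed value)
```

After applying this manifest, the application should be reachable from outside the cluster at http://<NodeIP>:30100.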


