I tried out the hands-on labs for IBM Cloud's managed Kubernetes service, IKS (IBM Cloud Kubernetes Service).
- Prerequisites
- Lab 1) Deploying an application to a Kubernetes cluster
- Lab 2) Scaling, updating, and rolling back an application
- Lab 3) Using manifest files and connecting to a Redis database container
- Lab 4) Deploying an application with a Helm chart
- Lab 5) Connecting a container application to the Watson API
- Lab 6) Understanding Helm through a sample chart
https://github.com/ibm-cloud-labs/iks-handson
Prerequisites
- ibmcloud, kubectl, helm (v3), git
- Retrieve the connection details for the Kubernetes cluster
$ ibmcloud login -a https://cloud.ibm.com
$ ibmcloud ks cluster config --cluster mycluster
OK
The configuration for mycluster was downloaded successfully.

Added context for mycluster to the current kubeconfig file.
You can now execute 'kubectl' commands against your cluster. For example, run 'kubectl get nodes'.
$ kubectl get node
NAME           STATUS   ROLES    AGE   VERSION
10.47.87.173   Ready    <none>   26m   v1.16.8+IKS
Lab 1) Deploying an application to a Kubernetes cluster
Deploying guestbook
$ kubectl run guestbook --image=ibmcom/guestbook:v1
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/guestbook created
$ kubectl get po
NAME                         READY   STATUS    RESTARTS   AGE
guestbook-59ff9b666c-czsrx   1/1     Running   0          58s
$ kubectl get all
NAME                             READY   STATUS    RESTARTS   AGE
pod/guestbook-59ff9b666c-czsrx   1/1     Running   0          94s

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   172.21.0.1   <none>        443/TCP   41m

NAME                        READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/guestbook   1/1     1            1           95s

NAME                                   DESIRED   CURRENT   READY   AGE
replicaset.apps/guestbook-59ff9b666c   1         1         1       95s
Exposing the application
$ kubectl expose deployment guestbook --type="NodePort" --port=3000
service/guestbook exposed
$ kubectl get service guestbook
NAME        TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
guestbook   NodePort   172.21.135.114   <none>        3000:31970/TCP   2m37s
$ ibmcloud ks workers --cluster mycluster
OK
ID                                                     Public IP         Private IP     Flavor   State    Status   Zone    Version
kube-bq0p493d02qbahnio0d0-mycluster-default-00000043   173.193.107.194   10.47.87.173   free     normal   Ready    hou02   1.16.8_1526
Cleanup
$ kubectl delete deploy guestbook
deployment.apps "guestbook" deleted
$ kubectl delete service guestbook
service "guestbook" deleted
$ kubectl get po
No resources found in default namespace.
Lab 2) Scaling, updating, and rolling back an application
Scaling by specifying a replica count
$ kubectl scale --replicas=10 deployment guestbook
deployment.apps/guestbook scaled
$ kubectl rollout status deployment guestbook
deployment "guestbook" successfully rolled out
$ kubectl get po
NAME                         READY   STATUS    RESTARTS   AGE
guestbook-59ff9b666c-49rjz   1/1     Running   0          2m49s
guestbook-59ff9b666c-68br6   1/1     Running   0          6m54s
guestbook-59ff9b666c-8s9hz   1/1     Running   0          2m49s
guestbook-59ff9b666c-dlj84   1/1     Running   0          2m49s
guestbook-59ff9b666c-h4jbp   1/1     Running   0          2m49s
guestbook-59ff9b666c-nfdm4   1/1     Running   0          2m49s
guestbook-59ff9b666c-r77vc   1/1     Running   0          2m49s
guestbook-59ff9b666c-tps8g   1/1     Running   0          2m49s
guestbook-59ff9b666c-wp47j   1/1     Running   0          2m49s
guestbook-59ff9b666c-wth6x   1/1     Running   0          2m49s
- After `kubectl scale`, confirm that `.spec.replicas` in the Deployment definition has increased
- `.spec.revisionHistoryLimit` sets how many old ReplicaSets are kept around (for rollback)
apiVersion: apps/v1
kind: Deployment
metadata:
  generation: 3
  labels:
    run: guestbook
  name: guestbook
  namespace: default
(snip)
spec:
  progressDeadlineSeconds: 600
  replicas: 1
↓
apiVersion: apps/v1
kind: Deployment
metadata:
  generation: 4
  labels:
    run: guestbook
  name: guestbook
  namespace: default
(snip)
spec:
  progressDeadlineSeconds: 600
  replicas: 10
  revisionHistoryLimit: 10
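As a rough mental model (a sketch, not the actual controller logic), `revisionHistoryLimit` pruning keeps only the newest N old ReplicaSets:

```python
# Toy sketch of revisionHistoryLimit-style pruning: the controller
# garbage-collects old ReplicaSets beyond the newest `limit` entries.
def prune_history(replicasets, limit):
    """replicasets is ordered oldest -> newest; keep only the newest `limit`."""
    return replicasets[-limit:] if limit > 0 else []

history = [f"guestbook-rev{i}" for i in range(1, 15)]  # 14 old revisions
kept = prune_history(history, 10)
print(len(kept))  # the 4 oldest revisions would be dropped
```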
Update and rollback
Updating to v2
$ kubectl set image deployment/guestbook guestbook=ibmcom/guestbook:v2
deployment.apps/guestbook image updated
Rolling back to v1
$ kubectl rollout undo deployment guestbook
deployment.apps/guestbook rolled back
$ kubectl get rs -l run=guestbook
NAME                   DESIRED   CURRENT   READY   AGE
guestbook-55c86ccd98   0         0         0       27m
guestbook-59ff9b666c   10        10        10      48m
Cleanup
$ kubectl delete deploy guestbook
deployment.apps "guestbook" deleted
$ kubectl delete service guestbook
service "guestbook" deleted
$ kubectl get po
No resources found in default namespace.
Lab 3) Using manifest files and connecting to a Redis database container
- Deploying an application declaratively (using manifest files)
- Connecting multiple containers (adding Redis containers)
Preparation
$ git clone https://github.com/cloud-handson/guestbook.git
Deploying and scaling with manifests
Deploying the application
$ kubectl apply -f guestbook-deployment.yaml
deployment.apps/guestbook-v1 created
$ kubectl get po -l app=guestbook
NAME                           READY   STATUS    RESTARTS   AGE
guestbook-v1-98dd9c654-5fnl6   1/1     Running   0          66s
guestbook-v1-98dd9c654-9jhvk   1/1     Running   0          66s
guestbook-v1-98dd9c654-cj26b   1/1     Running   0          66s
- Labels are specified under `.spec.template.metadata.labels`
apiVersion: apps/v1
kind: Deployment
metadata:
  name: guestbook-v1
  labels:
    app: guestbook
    version: "1.0"
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
  template:
    metadata:
      labels:
        app: guestbook
        version: "1.0"
    spec:
      containers:
      - name: guestbook
        image: ibmcom/guestbook:v1
        ports:
        - name: http-server
          containerPort: 3000
Scaling the application
Change `.spec.replicas` in `guestbook-deployment.yaml` from 3 to 5
spec:
  replicas: 5
  selector:
    matchLabels:
      app: guestbook
$ kubectl apply -f guestbook-deployment.yaml
deployment.apps/guestbook-v1 configured
$ kubectl get po -l app=guestbook
NAME                           READY   STATUS    RESTARTS   AGE
guestbook-v1-98dd9c654-5fnl6   1/1     Running   0          4m24s
guestbook-v1-98dd9c654-9jhvk   1/1     Running   0          4m24s
guestbook-v1-98dd9c654-cj26b   1/1     Running   0          4m24s
guestbook-v1-98dd9c654-shw5v   1/1     Running   0          3s
guestbook-v1-98dd9c654-wkrnd   1/1     Running   0          3s
- As the number of manifest files grows, so do complexity and manual effort; tools such as Helm, Kustomize, and Ksonnet exist to address this
Deploying the Service
$ kubectl apply -f guestbook-service.yaml
service/guestbook created
$ kubectl get service
NAME        TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
guestbook   NodePort   172.21.246.84   <none>        3000:30090/TCP   11s
apiVersion: v1
kind: Service
metadata:
  name: guestbook
  labels:
    app: guestbook
spec:
  ports:
  - port: 3000
    targetPort: http-server
  selector:
    app: guestbook
  type: NodePort
- Routes traffic arriving at the Service's port 3000 to the application's `http-server` port
Adding Redis containers
Deploying the Redis master
$ kubectl apply -f redis-master-deployment.yaml
deployment.apps/redis-master created
$ kubectl get po -l app=redis,role=master
NAME                            READY   STATUS    RESTARTS   AGE
redis-master-68857cd57c-6bb2l   1/1     Running   0          34s
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: master
  template:
    metadata:
      labels:
        app: redis
        role: master
    spec:
      containers:
      - name: redis-master
        image: redis:2.8.23
        ports:
        - name: redis-server
          containerPort: 6379
- Verifying Redis works
$ kubectl exec -it redis-master-68857cd57c-6bb2l redis-cli
127.0.0.1:6379> ping
PONG
127.0.0.1:6379> exit
Exposing a Service so the guestbook application can connect to the Redis master
$ kubectl apply -f redis-master-service.yaml
service/redis-master created
apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
spec:
  ports:
  - port: 6379
    targetPort: redis-server
  selector:
    app: redis
    role: master
- Routes traffic arriving at the Service's port 6379 to the application's `redis-server` port
- Restarting the deployment
$ kubectl delete deploy guestbook-v1
deployment.apps "guestbook-v1" deleted
$ kubectl apply -f guestbook-deployment.yaml
deployment.apps/guestbook-v1 created
Verifying that data is persisted by Redis
- Open the app in both a normal Chrome window and an incognito window, and confirm that entries made in one show up in the other
Adding redis-slave so reads and writes are split across multiple Redis instances
$ kubectl apply -f redis-slave-deployment.yaml
deployment.apps/redis-slave created
$ kubectl get po -l app=redis,role=slave
NAME                           READY   STATUS    RESTARTS   AGE
redis-slave-595fb9b54c-59br4   1/1     Running   0          34s
redis-slave-595fb9b54c-96nsb   1/1     Running   0          34s
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis
      role: slave
  template:
    metadata:
      labels:
        app: redis
        role: slave
    spec:
      containers:
      - name: redis-slave
        image: ibmcom/guestbook-redis-slave:v2
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
        ports:
        - name: redis-server
          containerPort: 6379
- Verifying redis-slave works
$ kubectl exec -it redis-slave-595fb9b54c-59br4 redis-cli
127.0.0.1:6379> keys *
1) "guestbook"
127.0.0.1:6379> lrange guestbook 0 10
1) "test"
2) "test2"
3) "test3"
127.0.0.1:6379> info replication
# Replication
role:slave
master_host:redis-master
master_port:6379
master_link_status:up
master_last_io_seconds_ago:7
master_sync_in_progress:0
slave_repl_offset:295
slave_priority:100
slave_read_only:1
connected_slaves:0
master_repl_offset:0
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0
127.0.0.1:6379> config get slave-read-only
1) "slave-read-only"
2) "yes"
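The `info replication` output shown above is a simple `key:value` text format, so replication state can also be checked programmatically. A small parsing sketch (redis-py's `Redis.info()` does this for you; this only illustrates the format):

```python
# Parse Redis INFO text into a dict so fields like role or
# master_link_status can be asserted on in scripts.
def parse_redis_info(raw: str) -> dict:
    info = {}
    for line in raw.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):  # skip blanks and section headers
            continue
        key, _, value = line.partition(":")
        info[key] = value
    return info

sample = """# Replication
role:slave
master_host:redis-master
master_link_status:up"""
print(parse_redis_info(sample)["role"])  # -> slave
```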
- Exposing redis-slave as a Service
$ kubectl apply -f redis-slave-service.yaml
service/redis-slave created
$ kubectl get service
NAME           TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
redis-master   ClusterIP   172.21.9.231   <none>        6379/TCP   40m
redis-slave    ClusterIP   172.21.96.71   <none>        6379/TCP   10m
・・・
apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
spec:
  ports:
  - port: 6379
    targetPort: redis-server
  selector:
    app: redis
    role: slave
Cleanup
$ kubectl delete -f guestbook-deployment.yaml
deployment.apps "guestbook-v1" deleted
$ kubectl delete -f guestbook-service.yaml
service "guestbook" deleted
$ kubectl delete -f redis-slave-service.yaml
service "redis-slave" deleted
$ kubectl delete -f redis-slave-deployment.yaml
deployment.apps "redis-slave" deleted
$ kubectl delete -f redis-master-service.yaml
service "redis-master" deleted
$ kubectl delete -f redis-master-deployment.yaml
deployment.apps "redis-master" deleted
Lab 4) Deploying an application with a Helm chart
- Deploy JPetStore using a Helm chart
$ git clone https://github.com/ibm-cloud-labs/jpetstore-kubernetes-compact.git --depth 1
$ cd jpetstore-kubernetes-compact/helm
$ tree .
.
├── mmssearch
│   ├── Chart.yaml
│   ├── README.md
│   ├── ics-values.yaml
│   ├── templates
│   │   ├── NOTES.txt
│   │   ├── _helpers.tpl
│   │   ├── deployment.yaml
│   │   └── service.yaml
│   ├── values-icp.yaml
│   └── values.yaml
└── modernpets
    ├── Chart.yaml
    ├── templates
    │   ├── NOTES.txt
    │   ├── _helpers.tpl
    │   ├── deployment.yaml
    │   └── service.yaml
    ├── values-icp.yaml
    └── values.yaml
$ helm install jpetstore ./modernpets/
NAME: jpetstore
LAST DEPLOYED: Mon Mar 30 18:46:34 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Get Cluster Public IP Adress:
  $ ibmcloud ks workers mycluster
$ helm list
NAME        NAMESPACE   REVISION   UPDATED                                 STATUS     CHART              APP VERSION
jpetstore   default     1          2020-03-30 18:46:34.361794 +0900 JST    deployed   modernpets-0.1.5   1.0
$ kubectl get po
NAME                                                 READY   STATUS    RESTARTS   AGE
jpetstore-modernpets-jpetstoredb-6dbcc7d87c-ctgr6    1/1     Running   0          70s
jpetstore-modernpets-jpetstoreweb-75fdd595f9-5lmtp   1/1     Running   0          70s
jpetstore-modernpets-jpetstoreweb-75fdd595f9-c4ccr   1/1     Running   0          70s
$ kubectl get all
NAME                                                     READY   STATUS    RESTARTS   AGE
pod/jpetstore-modernpets-jpetstoredb-6dbcc7d87c-ctgr6    1/1     Running   0          105s
pod/jpetstore-modernpets-jpetstoreweb-75fdd595f9-5lmtp   1/1     Running   0          105s
pod/jpetstore-modernpets-jpetstoreweb-75fdd595f9-c4ccr   1/1     Running   0          105s

NAME                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
service/db           ClusterIP   172.21.115.116   <none>        3306/TCP       105s
service/kubernetes   ClusterIP   172.21.0.1       <none>        443/TCP        3h9m
service/web          NodePort    172.21.240.21    <none>        80:31000/TCP   105s

NAME                                                READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/jpetstore-modernpets-jpetstoredb    1/1     1            1           105s
deployment.apps/jpetstore-modernpets-jpetstoreweb   2/2     2            2           105s

NAME                                                           DESIRED   CURRENT   READY   AGE
replicaset.apps/jpetstore-modernpets-jpetstoredb-6dbcc7d87c    1         1         1       105s
replicaset.apps/jpetstore-modernpets-jpetstoreweb-75fdd595f9   2         2         2       105s
Lab 5) Connecting a container application to the Watson API
Creating a Visual Recognition service
Creating the API key as a Secret
- Copy `mms-secrets.json.template` to create `mms-secrets.json`
- Fill in the API key of the Visual Recognition service
$ cd jpetstore-kubernetes-compact/mmssearch
$ cp mms-secrets.json.template mms-secrets.json
$ kubectl create secret generic mms-secret --from-file=mms-secrets=./mms-secrets.json
- The contents of the JSON file are Base64-encoded and stored in the generated Secret
- Anything that should not live in a Git repository, such as API keys for external services or user IDs and passwords, is separated out into Secrets
$ kubectl get secret mms-secret -o yaml
apiVersion: v1
data:
  mms-secrets: ewogICJ3YXRzb24iOgogIHsKICAgICJ1cmwiOiAiaHR0cHM6Ly9nYXRld2F5LndhdHNvbnBsYXRmb3JtLm5ldC92aXN1YWwtcmVjb2duaXRpb24vYXBpIiwKICAgICJub3RlIjogIkl0IG1heSB0YWtlIHVwIHRvIDUgbWludXRlcyBmb3IgdGhpcyBrZXkgdG8gYmVjb21lIGFjdGl2ZSIsCiAgICAiYXBpX2tleSI6ICJ5cGtQNDhScUhfci1NMTdPRm4xV0p5bEtzVXZnNnc1RFFwOXBlMy1fMWRfSSIgCiAgfQp9Cg==
kind: Secret
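The Secret's `data` field is just the Base64-encoded file contents. A minimal sketch of the round trip, using a dummy key rather than a real one (`kubectl create secret --from-file` does the encoding for you):

```python
# Base64 round trip for a Secret value: encode on the way in,
# decode on the way out (what a pod sees after the mount).
import base64
import json

secret_json = '{"watson": {"url": "https://gateway.watsonplatform.net/visual-recognition/api", "api_key": "dummy-key"}}'
encoded = base64.b64encode(secret_json.encode()).decode()   # what ends up in .data
decoded = json.loads(base64.b64decode(encoded))             # what the application reads back
print(decoded["watson"]["api_key"])  # -> dummy-key
```

Note that Base64 is an encoding, not encryption: anyone who can read the Secret object can recover the key.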
Deploying MMSSearch
$ cd jpetstore-kubernetes-compact/helm
$ helm install mmssearch ./mmssearch/
NAME: mmssearch
LAST DEPLOYED: Tue Mar 31 00:09:24 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Get Cluster Public IP Adress:
  $ ibmcloud ks workers mycluster
$ kubectl get po -l app=mmssearch-mmssearch
NAME                                   READY   STATUS    RESTARTS   AGE
mmssearch-mmssearch-6766d58c9b-r658t   1/1     Running   0          57s
$ kubectl get all
NAME                                                     READY   STATUS    RESTARTS   AGE
pod/jpetstore-modernpets-jpetstoredb-6dbcc7d87c-ctgr6    1/1     Running   0          5h27m
pod/jpetstore-modernpets-jpetstoreweb-75fdd595f9-5lmtp   1/1     Running   0          5h27m
pod/jpetstore-modernpets-jpetstoreweb-75fdd595f9-c4ccr   1/1     Running   0          5h27m
pod/mmssearch-mmssearch-6766d58c9b-r658t                 1/1     Running   0          4m19s

NAME                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
service/db           ClusterIP   172.21.115.116   <none>        3306/TCP         5h27m
service/kubernetes   ClusterIP   172.21.0.1       <none>        443/TCP          8h
service/mmssearch    NodePort    172.21.243.32    <none>        8080:32000/TCP   4m20s
service/web          NodePort    172.21.240.21    <none>        80:31000/TCP     5h27m

NAME                                                READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/jpetstore-modernpets-jpetstoredb    1/1     1            1           5h27m
deployment.apps/jpetstore-modernpets-jpetstoreweb   2/2     2            2           5h27m
deployment.apps/mmssearch-mmssearch                 1/1     1            1           4m20s

NAME                                                           DESIRED   CURRENT   READY   AGE
replicaset.apps/jpetstore-modernpets-jpetstoredb-6dbcc7d87c    1         1         1       5h27m
replicaset.apps/jpetstore-modernpets-jpetstoreweb-75fdd595f9   2         2         2       5h27m
replicaset.apps/mmssearch-mmssearch-6766d58c9b                 1         1         1       4m20s
- Confirmed that it recognizes a picture of a cat found on the web
How to read Secrets
- Reference them as environment variables
- Mount them as a Volume (the approach used here)
(snip)
spec:
  volumeMounts:
  - name: service-secrets
    mountPath: "/etc/secrets"
    readOnly: true
  volumes:
  - name: service-secrets
    secret:
      secretName: mms-secret
      items:
      - key: mms-secrets
        path: mms-secrets.json
Cleanup
$ helm uninstall jpetstore
release "jpetstore" uninstalled
$ helm uninstall mmssearch
release "mmssearch" uninstalled
Lab 6) Understanding Helm through a sample chart
- This lab targets Helm v2 (v2.14 or later), even though everything up to Lab 5 used v3...
Creating a chart
$ helm-old-20200330 create mychart
Creating mychart
$ tree mychart/
mychart/
├── Chart.yaml            # YAML with information about the chart
├── charts                # directory for charts this chart depends on
├── templates             # directory for manifest templates
│   ├── NOTES.txt         # OPTIONAL: plain text describing how to use the chart
│   ├── _helpers.tpl      #
│   ├── deployment.yaml   # YAML for creating the Deployment
│   ├── ingress.yaml      # YAML for configuring Ingress
│   ├── service.yaml      # YAML for creating the Service
│   ├── serviceaccount.yaml   # not in the hands-on instructions; probably a version difference
│   └── tests
│       └── test-connection.yaml
└── values.yaml           # YAML with the default values for this chart

3 directories, 9 files
deployment.yaml
- `{{ .Values.<variable> }}` placeholders are filled in with the default values from values.yaml
- For example, `.Values.replicaCount` resolves to `replicaCount: 1` in values.yaml
$ cat mychart/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "mychart.fullname" . }}
  labels:
{{ include "mychart.labels" . | indent 4 }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app.kubernetes.io/name: {{ include "mychart.name" . }}
      app.kubernetes.io/instance: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app.kubernetes.io/name: {{ include "mychart.name" . }}
        app.kubernetes.io/instance: {{ .Release.Name }}
    spec:
    {{- with .Values.imagePullSecrets }}
      imagePullSecrets:
        {{- toYaml . | nindent 8 }}
    {{- end }}
      serviceAccountName: {{ template "mychart.serviceAccountName" . }}
      securityContext:
        {{- toYaml .Values.podSecurityContext | nindent 8 }}
      containers:
        - name: {{ .Chart.Name }}
          securityContext:
            {{- toYaml .Values.securityContext | nindent 12 }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /
              port: http
          readinessProbe:
            httpGet:
              path: /
              port: http
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
      {{- with .Values.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}
    {{- with .Values.affinity }}
      affinity:
        {{- toYaml . | nindent 8 }}
    {{- end }}
    {{- with .Values.tolerations }}
      tolerations:
        {{- toYaml . | nindent 8 }}
    {{- end }}
$ cat mychart/values.yaml
# Default values for mychart.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

replicaCount: 1

image:
  repository: nginx
  tag: stable
  pullPolicy: IfNotPresent

imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""

serviceAccount:
  # Specifies whether a service account should be created
  create: true
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  name:

podSecurityContext: {}
  # fsGroup: 2000

securityContext: {}
  # capabilities:
  #   drop:
  #   - ALL
  # readOnlyRootFilesystem: true
  # runAsNonRoot: true
  # runAsUser: 1000

service:
  type: ClusterIP
  port: 80

ingress:
  enabled: false
  annotations: {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  hosts:
    - host: chart-example.local
      paths: []
  tls: []
  #  - secretName: chart-example-tls
  #    hosts:
  #      - chart-example.local

resources: {}
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  # limits:
  #   cpu: 100m
  #   memory: 128Mi
  # requests:
  #   cpu: 100m
  #   memory: 128Mi

nodeSelector: {}

tolerations: []

affinity: {}
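A toy sketch of the substitution described above: look up each `{{ .Values.* }}` path in the values and splice it into the manifest text. Real Helm uses Go templates with far more features (includes, pipelines, conditionals); this only illustrates the idea.

```python
# Mini value-substitution sketch: resolve dotted .Values paths
# against a values dict and replace the placeholders in the template.
import re

values = {"replicaCount": 1, "image": {"repository": "nginx", "tag": "stable"}}

def render(template: str, values: dict) -> str:
    def lookup(match):
        node = values
        for part in match.group(1).split("."):  # walk e.g. image.repository
            node = node[part]
        return str(node)
    return re.sub(r"\{\{\s*\.Values\.([\w.]+)\s*\}\}", lookup, template)

manifest = 'replicas: {{ .Values.replicaCount }}\nimage: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"'
print(render(manifest, values))
```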
Installing the sample chart
$ helm install sample ./mychart
NAME: sample
LAST DEPLOYED: Tue Mar 31 00:42:18 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
1. Get the application URL by running these commands:
  export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=mychart,app.kubernetes.io/instance=sample" -o jsonpath="{.items[0].metadata.name}")
  echo "Visit http://127.0.0.1:8080 to use your application"
  kubectl port-forward $POD_NAME 8080:80
$ helm ls
NAME     NAMESPACE   REVISION   UPDATED                                  STATUS     CHART           APP VERSION
sample   default     1          2020-03-31 00:42:18.2451642 +0900 JST    deployed   mychart-0.1.0   1.0
$ kubectl get po
NAME                              READY   STATUS    RESTARTS   AGE
sample-mychart-847b4f49df-lv54l   1/1     Running   0          46s
$ kubectl get all
NAME                                  READY   STATUS    RESTARTS   AGE
pod/sample-mychart-847b4f49df-lv54l   1/1     Running   0          6m24s

NAME                     TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
service/kubernetes       ClusterIP   172.21.0.1      <none>        443/TCP   9h
service/sample-mychart   ClusterIP   172.21.247.90   <none>        80/TCP    6m24s

NAME                             READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/sample-mychart   1/1     1            1           6m24s

NAME                                        DESIRED   CURRENT   READY   AGE
replicaset.apps/sample-mychart-847b4f49df   1         1         1       6m25s
- Port-forward; the app can then be reached at http://localhost:8080.
$ kubectl port-forward sample-mychart-847b4f49df-lv54l 8080:80
Forwarding from 127.0.0.1:8080 -> 80
Forwarding from [::1]:8080 -> 80
Changing the configuration
$ vi mychart/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: {{ include "mychart.fullname" . }}
  labels:
{{ include "mychart.labels" . | indent 4 }}
spec:
  type: {{ .Values.service.type }}
  ports:
    - port: {{ .Values.service.port }}
      targetPort: http
      protocol: TCP
      name: http
      {{- if .Values.service.nodePort }}  # added line
      nodePort: {{ .Values.service.nodePort }}  # added line
      {{- end}}  # added line
  selector:
    app.kubernetes.io/name: {{ include "mychart.name" . }}
    app.kubernetes.io/instance: {{ .Release.Name }}
- Linting the chart templates
$ helm lint ./mychart/
==> Linting ./mychart/
[INFO] Chart.yaml: icon is recommended

1 chart(s) linted, 0 chart(s) failed
- Creating an updated values file (value-new.yaml)
$ cp -p mychart/values.yaml mychart/value-new.yaml
$ vi mychart/value-new.yaml
service:
  type: NodePort
  port: 80
  nodePort: 30001
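When `helm upgrade -f` is given a values file like this, Helm merges it over the chart's default values, with the override winning for scalars. A rough sketch of that merge behavior:

```python
# Sketch of values-file merging: maps are merged recursively,
# scalar values from the override file replace the defaults.
def deep_merge(base: dict, override: dict) -> dict:
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)  # recurse into sub-maps
        else:
            merged[key] = value  # override wins
    return merged

defaults = {"service": {"type": "ClusterIP", "port": 80}}
override = {"service": {"type": "NodePort", "nodePort": 30001}}
print(deep_merge(defaults, override))
```

Note how `port: 80` from the defaults survives even though the override file never mentions it.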
- Upgrading the Helm release
$ helm upgrade -f mychart/value-new.yaml sample ./mychart/
Release "sample" has been upgraded. Happy Helming!
NAME: sample
LAST DEPLOYED: Sat Apr 4 19:57:25 2020
NAMESPACE: default
STATUS: deployed
REVISION: 2
NOTES:
1. Get the application URL by running these commands:
  export NODE_PORT=$(kubectl get --namespace default -o jsonpath="{.spec.ports[0].nodePort}" services sample-mychart)
  export NODE_IP=$(kubectl get nodes --namespace default -o jsonpath="{.items[0].status.addresses[0].address}")
Adding a resource
- Add `app.name` to the values file
$ vi mychart/value-new.yaml
app:
  name: IKS-san
- Create `templates/index-configmap.yaml`
apiVersion: v1
kind: ConfigMap
metadata:
  name: index-config
data:
  index-config: index.html
  index.html: |
    <!DOCTYPE html>
    <html>
    <head>
    <title>Welcome to nginx!</title>
    <style>
        body {
            width: 35em;
            margin: 0 auto;
            font-family: Tahoma, Verdana, Arial, sans-serif;
        }
    </style>
    </head>
    <body>
    <h1>Welcome to __NAME__</h1>
    <p>If you see this page, the nginx web server is successfully installed and
    working. Further configuration is required.</p>

    <p>For online documentation and support please refer to
    <a href="http://nginx.org/">nginx.org</a>.<br/>
    Commercial support is available at
    <a href="http://nginx.com/">nginx.com</a>.</p>

    <p><em>Thank you for using nginx.</em></p>
    </body>
    </html>
- Edit `templates/deployment.yaml`
(snip)
spec:
  (snip)
  template:
    spec:
      (snip)
      volumes:
        - name: index-config
          configMap:
            name: index-config
        - name: config-volume
          emptyDir: {}
      initContainers:
        - name: init-myservice
          image: busybox
          command: ['sh', '-c', 'cat /etc/config-template/index.html | sed "s/__NAME__/{{ .Values.app.name }}/" > /etc/config/index.html']
          volumeMounts:
            - name: config-volume
              mountPath: /etc/config
            - name: index-config
              mountPath: /etc/config-template/index.html
              readOnly: true
              subPath: index.html
      containers:
        - name: {{ .Chart.Name }}
          securityContext:
            {{- toYaml .Values.securityContext | nindent 12 }}
          image: "{{ .Values.image.repository }}:{{ .Chart.AppVersion }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          volumeMounts:
            - name: config-volume
              mountPath: /usr/share/nginx/html/
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /
              port: http
          readinessProbe:
            httpGet:
              path: /
              port: http
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
(snip)
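The init container rewrites `__NAME__` with `.Values.app.name` using `sed` before nginx serves the page. The same transformation, sketched in Python:

```python
# Equivalent of the init container's: sed "s/__NAME__/<app_name>/"
# (sed without /g replaces only the first occurrence on each line,
# which str.replace with count=1 mirrors for this single-line template).
def render_index(template_html: str, app_name: str) -> str:
    return template_html.replace("__NAME__", app_name, 1)

html = "<h1>Welcome to __NAME__</h1>"
print(render_index(html, "IKS-san"))  # -> <h1>Welcome to IKS-san</h1>
```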
- Upgrading the Helm release
$ helm upgrade -f mychart/value-new.yaml sample ./mychart/
Release "sample" has been upgraded. Happy Helming!
NAME: sample
LAST DEPLOYED: Sun Apr 5 02:39:58 2020
NAMESPACE: default
STATUS: deployed
REVISION: 4
NOTES:
1. Get the application URL by running these commands:
  export NODE_PORT=$(kubectl get --namespace default -o jsonpath="{.spec.ports[0].nodePort}" services sample-mychart)
  export NODE_IP=$(kubectl get nodes --namespace default -o jsonpath="{.items[0].status.addresses[0].address}")
  echo http://$NODE_IP:$NODE_PORT
I couldn't fully keep up with how Helm works under the hood, so I plan to study that next.