The reason you need to check this validity period: on versions 1.17 and below, there is a bug where renewing the certificates renews everything else but does not renew kubelet.conf, so it is a good habit to always check it.

 

Checking the kubelet.conf certificate validity period

Hmm.. first off, there are two things to check..
[root@minikube kubernetes]# pwd
/etc/kubernetes
[root@minikube kubernetes]# cat kubelet.conf

- name: system:node:minikube
  user:
    client-certificate: /var/lib/kubelet/pki/kubelet-client-current.pem
    client-key: /var/lib/kubelet/pki/kubelet-client-current.pem

This section is sometimes a base64-encoded blob and sometimes file paths like the above. If it points to files it is probably minikube; otherwise it will be base64-encoded certificate data.

First, the encoded case:
echo -n "base64-encoded data" | base64 -d > test.txt
openssl x509 -in test.txt -text -noout

Then check the validity dates.
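The decode-and-inspect steps can be rehearsed end to end with a throwaway self-signed certificate standing in for the base64 blob embedded in kubelet.conf (a minimal sketch; the paths are temp files, not the real kubeconfig):

```shell
#!/bin/sh
# Sketch: generate a throwaway cert, round-trip it through base64 the way a
# kubeconfig embeds it, and read the validity dates back with openssl.
set -e
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=system:node:demo" \
  -keyout "$tmp/key.pem" -out "$tmp/cert.pem" 2>/dev/null
# kubeconfigs store the PEM as a single line of base64:
b64=$(base64 < "$tmp/cert.pem" | tr -d '\n')
printf '%s' "$b64" | base64 -d > "$tmp/test.txt"
dates=$(openssl x509 -in "$tmp/test.txt" -noout -dates)
echo "$dates"
rm -rf "$tmp"
```

For the file-path case, the dates can be read directly: `openssl x509 -in /var/lib/kubelet/pki/kubelet-client-current.pem -noout -dates`.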

 

 

 cd /etc/kubernetes

kubeadm alpha kubeconfig user --org system:nodes --client-name system:node:$(hostname) > kubelet.conf

systemctl restart kubelet

 

========== References =============

# On master - See https://kubernetes.io/docs/setup/certificates/#all-certificates

# Generate the new certificates - you may have to deal with AWS - see above re extra certificate SANs
sudo kubeadm alpha certs renew apiserver
sudo kubeadm alpha certs renew apiserver-etcd-client
sudo kubeadm alpha certs renew apiserver-kubelet-client
sudo kubeadm alpha certs renew front-proxy-client

# Generate new kube-configs with embedded certificates - Again you may need extra AWS specific content - see above
sudo kubeadm alpha kubeconfig user --org system:masters --client-name kubernetes-admin  > admin.conf
sudo kubeadm alpha kubeconfig user --client-name system:kube-controller-manager > controller-manager.conf
sudo kubeadm alpha kubeconfig user --org system:nodes --client-name system:node:$(hostname) > kubelet.conf
sudo kubeadm alpha kubeconfig user --client-name system:kube-scheduler > scheduler.conf

# chown and chmod so they match existing files
sudo chown root:root {admin,controller-manager,kubelet,scheduler}.conf
sudo chmod 600 {admin,controller-manager,kubelet,scheduler}.conf

# Move to replace existing kubeconfigs
sudo mv admin.conf /etc/kubernetes/
sudo mv controller-manager.conf /etc/kubernetes/
sudo mv kubelet.conf /etc/kubernetes/
sudo mv scheduler.conf /etc/kubernetes/

# Restart the master components
sudo kill -s SIGHUP $(pidof kube-apiserver)
sudo kill -s SIGHUP $(pidof kube-controller-manager)
sudo kill -s SIGHUP $(pidof kube-scheduler)

# Verify master component certificates - should all be 1 year in the future
# Cert from api-server
echo -n | openssl s_client -connect localhost:6443 2>&1 | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' | openssl x509 -text -noout | grep Not
# Cert from controller manager
echo -n | openssl s_client -connect localhost:10257 2>&1 | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' | openssl x509 -text -noout | grep Not
# Cert from scheduler
echo -n | openssl s_client -connect localhost:10259 2>&1 | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' | openssl x509 -text -noout | grep Not

# Generate kubelet.conf
sudo kubeadm alpha kubeconfig user --org system:nodes --client-name system:node:$(hostname) > kubelet.conf
sudo chown root:root kubelet.conf
sudo chmod 600 kubelet.conf

# Drain
kubectl drain --ignore-daemonsets $(hostname)
# Stop kubelet
sudo systemctl stop kubelet
# Delete files
sudo rm /var/lib/kubelet/pki/*
# Copy file
sudo mv kubelet.conf /etc/kubernetes/
# Restart
sudo systemctl start kubelet
# Uncordon
kubectl uncordon $(hostname)

# Check kubelet
echo -n | openssl s_client -connect localhost:10250 2>&1 | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' | openssl x509 -text -noout | grep Not


The etcd containers that keep dying

To fix this problem, you need to use the etcdctl command to remove the excess keyspace data from the etcd cluster members and defragment the database so its size comes back within the quota. However, because the etcd containers were dying every 2–3 minutes, it was impossible to get the work done properly.

The reason the containers keep dying and restarting is that a livenessProbe is configured on them: when the etcd container is not behaving normally, the health check is treated as failed and the container gets restarted over and over. To stop this, first remove the livenessProbe setting from the etcd pod. Since etcd is one of the core components of Kubernetes, its pod definition lives in the /etc/kubernetes/manifests/ directory; find it and edit it.

 
# /etc/kubernetes/manifests/etcd.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: etcd
    tier: control-plane
  name: etcd
  namespace: kube-system
spec:
  containers:
  - command:
    - etcd
    - --advertise-client-urls=https://192.168.0.220:2379
    - --cert-file=/etc/kubernetes/pki/etcd/server.crt
    - --client-cert-auth=true
    - --data-dir=/var/lib/etcd
    - --election-timeout=5000
    - --heartbeat-interval=250
    - --initial-advertise-peer-urls=https://192.168.0.220:2380
    - --initial-cluster=k8s-master1=https://192.168.0.220:2380
    - --key-file=/etc/kubernetes/pki/etcd/server.key
    - --listen-client-urls=https://127.0.0.1:2379,https://192.168.0.220:2379
    - --listen-metrics-urls=http://127.0.0.1:2381
    - --listen-peer-urls=https://192.168.0.220:2380
    - --name=k8s-master1
    - --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt
    - --peer-client-cert-auth=true
    - --peer-key-file=/etc/kubernetes/pki/etcd/peer.key
    - --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
    - --snapshot-count=10000
    - --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
    image: k8s.gcr.io/etcd:3.3.15-0
    imagePullPolicy: IfNotPresent
    # Commented out so the container stops being killed and restarted.
    # livenessProbe:
    #   failureThreshold: 8
    #   httpGet:
    #     host: 127.0.0.1
    #     path: /health
    #     port: 2381
    #     scheme: HTTP
    #   initialDelaySeconds: 15
    #   timeoutSeconds: 15
    name: etcd
    resources: {}
    volumeMounts:
    - mountPath: /var/lib/etcd
      name: etcd-data
    - mountPath: /etc/kubernetes/pki/etcd
      name: etcd-certs
  hostNetwork: true
  priorityClassName: system-cluster-critical
  volumes:
  - hostPath:
      path: /etc/kubernetes/pki/etcd
      type: DirectoryOrCreate
    name: etcd-certs
  - hostPath:
      path: /var/lib/etcd
      type: DirectoryOrCreate
    name: etcd-data
status: {}

Once you apply this comment-out to the yaml file at this path on every master node, the etcd containers stop dying.

The etcdctl command

On a Kubernetes cluster installed with konvoy, the most proper way to use etcdctl is to exec into the etcd container and run it there, but I was lazy and decided to just find the binary and run it from outside the container. (Naturally, this must be done on a master node where the etcd container is running.)

 
#bash
find / -type f -name etcdctl 2>/dev/null

# example output
[root@k8s-master1 manifests]# find / -type f -name etcdctl 2>/dev/null
/run/containerd/io.containerd.runtime.v1.linux/k8s.io/4fc80ceb99dfc0dca39e726d95104f5e424c53e618fd71d201b9b8b9c75a6d5d/rootfs/usr/local/bin/etcdctl
/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/9/fs/usr/local/bin/etcdctl

Pick either one and set up an alias for it. When creating the alias, include the certificates etcdctl needs to talk to the cluster.

 
#bash
alias etcdctl="\
ETCDCTL_API=3 \
/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/9/fs/usr/local/bin/etcdctl \
--cacert='/etc/kubernetes/pki/etcd/ca.crt' \
--cert='/etc/kubernetes/pki/etcd/server.crt' \
--key='/etc/kubernetes/pki/etcd/server.key' "

Test

 
#bash
etcdctl member list

Fixing the problem

First, check the list of active alarms and the current cluster status.

 
#bash
etcdctl alarm list
etcdctl -w table endpoint status --cluster

Now put etcd on a diet. To remove all the old revisions except the current state, first grab the current revision value.

 
#bash
c_revision=$(etcdctl endpoint status --write-out="json" | egrep -o '"revision":[0-9]*' | egrep -o '[0-9].*')
echo ${c_revision}
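The egrep extraction can be sanity-checked against a canned status JSON (a stub standing in for the real `etcdctl endpoint status --write-out=json` output):

```shell
#!/bin/sh
# Stub JSON shaped like etcdctl's status output; the pipeline below is the
# same one used above to capture c_revision.
json='[{"Endpoint":"https://192.168.0.220:2379","Status":{"header":{"revision":41713,"raft_term":602}}}]'
c_revision=$(echo "$json" | egrep -o '"revision":[0-9]*' | egrep -o '[0-9].*')
echo "$c_revision"   # 41713
```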

Compact away the old revisions.

 
#bash
etcdctl --endpoints=$(etcdctl member list | cut -d, -f5 | sed -e 's/ //g' | paste -sd ',') compact $c_revision
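How the `--endpoints` list gets assembled can be seen on canned `member list` output (stub data; on a real cluster the lines come from etcdctl itself):

```shell
#!/bin/sh
# Stub `etcdctl member list` output (fields: ID, status, name, peer URL,
# client URL); field 5 is cut out, spaces stripped, lines joined with commas.
members='1806ccfb80e73faf, started, k8s-master2, https://192.168.0.221:2380, https://192.168.0.221:2379
edabb0b65fe02a4c, started, k8s-master1, https://192.168.0.220:2380, https://192.168.0.220:2379'
eps=$(echo "$members" | cut -d, -f5 | sed -e 's/ //g' | paste -sd ',')
echo "$eps"   # https://192.168.0.221:2379,https://192.168.0.220:2379
```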

Defragment. In my case, this is the step where the size dropped dramatically.

 
#bash
etcdctl --endpoints=$(etcdctl member list | cut -d, -f5 | sed -e 's/ //g' | paste -sd ',') defrag

Check the cluster status.

 
#bash
etcdctl -w table endpoint status --cluster

# output
+----------------------------+------------------+---------+---------+-----------+-----------+------------+
|          ENDPOINT          |        ID        | VERSION | DB SIZE | IS LEADER | RAFT TERM | RAFT INDEX |
+----------------------------+------------------+---------+---------+-----------+-----------+------------+
| https://192.168.0.221:2379 | 1806ccfb80e73faf |  3.3.15 |  7.8 MB |     false |       602 |   66877835 |
| https://192.168.0.222:2379 | e7c82e12168d0897 |  3.3.15 |  7.8 MB |     false |       602 |   66877835 |
| https://192.168.0.220:2379 | edabb0b65fe02a4c |  3.3.15 |  7.8 MB |      true |       602 |   66877835 |
+----------------------------+------------------+---------+---------+-----------+-----------+------------+

Disarm the alarm and verify.

 
#bash
etcdctl alarm disarm
etcdctl alarm list



[root@k8s-master1 manifests]# find / -type f -name etcdctl 2>/dev/null
/run/containerd/io.containerd.runtime.v1.linux/k8s.io/4fc80ceb99dfc0dca39e726d95104f5e424c53e618fd71d201b9b8b9c75a6d5d/rootfs/usr/local/bin/etcdctl
/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/9/fs/usr/local/bin/etcdctl

Copy etcdctl to under /usr/bin/


#!/bin/bash

alias etcdctl3='ETCDCTL_API=3 etcdctl --cacert=/var/lib/minikube/certs/etcd/ca.crt --cert=/var/lib/minikube/certs/etcd/server.crt --key=/var/lib/minikube/certs/etcd/server.key'

[root@minikube home]# etcdctl3 -w table endpoint status --cluster
+-----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|          ENDPOINT           |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+-----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| https://192.168.45.100:2379 | d5b1b2d93f592f08 |   3.5.0 |  1.6 MB |      true |      false |         4 |      46687 |              46687 |        |
+-----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+


## Disarm alarms ##
etcdctl3 --endpoints="https://${endpoint}:2379" alarm disarm
etcdctl3 --endpoints="https://${endpoint}:2379" alarm list

To run it on each server individually:
etcdctl3 alarm disarm
etcdctl3 alarm list


########### Compaction ####

etcdctl3 endpoint status --write-out="json" | egrep -o '"revision":[0-9]*'| egrep -o '[0-9].*'
"revision":41713

etcdctl3 compact 41713
#######################

## Defragmenting ####
== Entire cluster at once ===
[root@minikube home]# etcdctl3 defrag --cluster    # the entire cluster at once
Finished defragmenting etcd member[https://192.168.45.100:2379]


[root@minikube home]# etcdctl3 -w table endpoint status --cluster
+-----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|          ENDPOINT           |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+-----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| https://192.168.45.100:2379 | d5b1b2d93f592f08 |   3.5.0 |  856 kB |      true |      false |         4 |      48305 |              48305 |        |
+-----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+

=== Defragmenting the etcd servers one at a time: check the impact on the service ==


etcdctl3 -w table endpoint status --cluster 

Identify the leader. Defragment all the members except the leader first, and do the leader last.


etcdctl3 defrag --endpoints="https://${endpoint}:2379"
etcdctl3 --endpoints="https://192.168.45.100:2379" --write-out=table endpoint status
Check that the etcd pod on that server comes back up normally
-- repeat --
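The leader-last order described above can be sketched as a small loop. etcdctl3 is stubbed here with a shell function so the ordering logic runs anywhere; on a real cluster the stub would be dropped and the alias used instead:

```shell
#!/bin/sh
# Stub standing in for the real etcdctl3 alias: fakes `endpoint status`
# (simple output: endpoint, ID, version, db size, is-leader, ...) and defrag.
etcdctl3() {
  case "$*" in
    *"endpoint status"*)
      printf '%s\n' \
        'https://192.168.0.221:2379, 1806ccfb80e73faf, 3.3.15, 7.8 MB, false, 602, 66877835' \
        'https://192.168.0.222:2379, e7c82e12168d0897, 3.3.15, 7.8 MB, false, 602, 66877835' \
        'https://192.168.0.220:2379, edabb0b65fe02a4c, 3.3.15, 7.8 MB, true, 602, 66877835' ;;
    defrag*)
      echo "Finished defragmenting etcd member[${2#--endpoints=}]" ;;
  esac
}

status=$(etcdctl3 endpoint status --cluster)
followers=$(echo "$status" | awk -F', ' '$5 == "false" {print $1}')
leader=$(echo "$status" | awk -F', ' '$5 == "true" {print $1}')
for ep in $followers $leader; do   # leader goes last
  etcdctl3 defrag "--endpoints=$ep"
done
```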



# set a very small 16MB quota
# in etcd.yaml, put in the computed value:
--quota-backend-bytes=$((16*1024*1024))
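Since shell arithmetic is not expanded inside a static pod manifest, the value has to be computed first and the literal number pasted into etcd.yaml. A quick check of the math:

```shell
# 16 MB demo quota and the 8 GB maximum, as literal byte counts:
echo $((16*1024*1024))       # 16777216
echo $((8*1024*1024*1024))   # 8589934592
```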


# fill keyspace
$ while [ 1 ]; do dd if=/dev/urandom bs=1024 count=1024  | etcdctl3 put key  || break; done
...
Error:  rpc error: code = 8 desc = etcdserver: mvcc: database space exceeded
# confirm quota space is exceeded

[root@minikube manifests]# etcdctl3 -w table endpoint status --cluster
+-----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------------------------------+
|          ENDPOINT           |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX |             ERRORS             |
+-----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------------------------------+
| https://192.168.45.100:2379 | d5b1b2d93f592f08 |   3.5.0 |   16 MB |      true |      false |         5 |      50310 |              50310 |  memberID:15398285247096893192 |
|                             |                  |         |         |           |            |           |            |                    |                 alarm:NOSPACE  |
+-----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------------------------------+
[root@minikube manifests]# etcdctl3 alarm list
memberID:15398285247096893192 alarm:NOSPACE

[root@minikube manifests]# etcdctl3 alarm disarm
memberID:15398285247096893192 alarm:NOSPACE

[root@minikube manifests]# etcdctl3 alarm list

## Additions to etcd.yaml ###

============ Compaction ============
# keep one hour of history
--auto-compaction-retention=1


====== Increasing the quota ==================
Example: 8G. The default is 2G and the maximum is 8G.

--quota-backend-bytes=8589934592 


###################  Defragmenting with a Job : based on a 2G etcd   ########

[root@minikube manifests]#  etcdctl3 -w table endpoint status --cluster
+-----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|          ENDPOINT           |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+-----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| https://192.168.45.100:2379 | d5b1b2d93f592f08 |   3.5.0 |  408 MB |      true |      false |         2 |       1144 |               1144 |        |
+-----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
kubectl label node minikube etcd="true"

[root@minikube home]#  etcdctl3 --endpoints="https://192.168.45.100:2379" --write-out=table endpoint status;
+-----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|          ENDPOINT           |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+-----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| https://192.168.45.100:2379 | d5b1b2d93f592f08 |   3.5.0 |  176 MB |      true |      false |         2 |       2819 |               2819 |        |
+-----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+

 kubectl apply -f minikube-etcd-defrag-job.yaml




[root@minikube home]# etcdctl3 --endpoints="https://192.168.45.100:2379" --write-out=table endpoint status;
+-----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|          ENDPOINT           |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+-----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| https://192.168.45.100:2379 | d5b1b2d93f592f08 |   3.5.0 |  2.0 MB |      true |      false |         2 |       3456 |               3456 |        |
+-----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+

## Viewing snapshot file info ##

$ etcdctl snapshot save backup.db
$ etcdctl --write-out=table snapshot status backup.db
+----------+----------+------------+------------+
|   HASH   | REVISION | TOTAL KEYS | TOTAL SIZE |
+----------+----------+------------+------------+
| fe01cf57 |       10 |          7 | 2.1 MB     |
+----------+----------+------------+------------+


## Defragmenting the db file directly ##
  - --data-dir=/var/lib/minikube/etcd


[root@minikube snap]# etcdctl3 --endpoints="https://192.168.45.100:2379" --write-out=table endpoint status;
+-----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|          ENDPOINT           |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+-----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| https://192.168.45.100:2379 | d5b1b2d93f592f08 |   3.5.0 |  5.2 MB |      true |      false |         2 |      23546 |              23546 |        |
+-----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
[root@minikube snap]# etcdctl3 defrag  /var/lib/minikube/etcd
Finished defragmenting etcd member[127.0.0.1:2379]
[root@minikube snap]# etcdctl3 --endpoints="https://192.168.45.100:2379" --write-out=table endpoint status;
+-----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|          ENDPOINT           |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+-----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| https://192.168.45.100:2379 | d5b1b2d93f592f08 |   3.5.0 |  1.8 MB |      true |      false |         2 |      23564 |              23564 |        |
+-----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
[root@minikube snap]# ls -al
total 3792
drwx------ 2 root root     108 Nov 21 15:26 .
drwx------ 4 root root      29 Nov 21 08:29 ..
-rw-r--r-- 1 root root    7148 Nov 21 11:15 0000000000000002-0000000000002711.snap
-rw-r--r-- 1 root root    7148 Nov 21 14:20 0000000000000002-0000000000004e22.snap
-rw------- 1 root root 1843200 Nov 21 15:27 db
[root@minikube snap]#


## What is this? ##
 ./etcdctl3 check datascale --load="s" --auto-compact=true --auto-defrag=true

apiVersion: v1
kind: Namespace
metadata:
  name: metallb-system
  labels:
    app: metallb

 


 

 

[root@09506-minikube mnt]# docker ps |grep rancher
be91d46dd125   rancher/rancher-webhook                    "webhook"                8 minutes ago    Up 8 minutes                                                                               k8s_rancher-webhook_rancher-webhook-7f84b74ddb-qs6dt_cattle-system_e0fc1ee1-3f4a-4743-b28b-c8494286c9e7_0
7f0bdb28e5d7   k8s.gcr.io/pause:3.4.1                     "/pause"                 8 minutes ago    Up 8 minutes                                                                               k8s_POD_rancher-webhook-7f84b74ddb-qs6dt_cattle-system_e0fc1ee1-3f4a-4743-b28b-c8494286c9e7_0
7eceb0f7c466   rancher/gitjob                             "gitjob --tekton-ima…"   8 minutes ago    Up 8 minutes                                                                               k8s_gitjob_gitjob-5778966b7c-z5wzx_cattle-fleet-system_a7c81e7e-44ab-44cf-b4cb-f551768febe3_0
52af74d28f7f   rancher/fleet                              "fleetcontroller"        8 minutes ago    Up 8 minutes                                                                               k8s_fleet-controller_fleet-controller-974d9cc9f-vggbm_cattle-fleet-system_69a77bdd-7650-436a-b3a1-742831a0ba3c_0
b5639950b5df   08c9693b4357                               "entrypoint.sh --htt…"   9 minutes ago    Up 9 minutes                                                                               k8s_rancher_rancher-76cc8c9498-x22m6_cattle-system_60039463-1ea6-4028-9d3c-9342e5faac06_1
0735625e27f4   08c9693b4357                               "entrypoint.sh --htt…"   11 minutes ago   Up 11 minutes                                                                              k8s_rancher_rancher-76cc8c9498-f62bl_cattle-system_fb3ad37a-12e5-4ab3-b5d6-17390e18da61_0
8535f003f780   08c9693b4357                               "entrypoint.sh --htt…"   11 minutes ago   Up 11 minutes                                                                              k8s_rancher_rancher-76cc8c9498-zd5vl_cattle-system_1ba9d8a5-3c4f-491d-8641-7b895fa2b5bc_0
072602245d94   k8s.gcr.io/pause:3.4.1                     "/pause"                 11 minutes ago   Up 11 minutes                                                                              k8s_POD_rancher-76cc8c9498-zd5vl_cattle-system_1ba9d8a5-3c4f-491d-8641-7b895fa2b5bc_0
373297fa8fea   k8s.gcr.io/pause:3.4.1                     "/pause"                 11 minutes ago   Up 11 minutes                                                                              k8s_POD_rancher-76cc8c9498-f62bl_cattle-system_fb3ad37a-12e5-4ab3-b5d6-17390e18da61_0
123a5ca47ef9   k8s.gcr.io/pause:3.4.1                     "/pause"                 11 minutes ago   Up 11 minutes                                                                              k8s_POD_rancher-76cc8c9498-x22m6_cattle-system_60039463-1ea6-4028-9d3c-9342e5faac06_0
[root@09506-minikube mnt]# docker exec b5639950b5df reset-password
W0902 14:06:52.191966     238 client_config.go:615] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
New password for default admin user (user-z4t7r):
NlWz99HVRJgFUS1eypl4

sudo yum update
sudo yum install yum-utils device-mapper-persistent-data lvm2

 sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

 

sudo yum install docker-ce

sudo systemctl start docker

sudo systemctl enable docker

curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64 && chmod +x minikube 

 

mv minikube /usr/bin

 

minikube start --vm-driver=none 

If an error like the one below appears:

  Exiting due to GUEST_MISSING_CONNTRACK: Sorry, Kubernetes 1.21.2 requires conntrack to be installed in root's path

 

yum install conntrack

 

minikube start --vm-driver=none 

 

vi ~/.bashrc

 

# .bashrc

# User specific aliases and functions

alias rm='rm -i'
alias cp='cp -i'
alias mv='mv -i'
alias kubectl='minikube kubectl -- '
# Source global definitions
if [ -f /etc/bashrc ]; then
        . /etc/bashrc
fi

 

 

bash

 

kubectl get pods

 

 

minikube addons list

minikube addons enable metrics-server

If you happen to hit an error like the one below:

Exiting due to MK_ADDON_ENABLE: run callbacks: metric-server is not a valid addon

it means the addon name was mistyped (the valid name is metrics-server), so look carefully and try again.

It is best to copy the name exactly as it appears in the `minikube addons list` output.

 

 

minikube addons enable ingress

minikube addons enable metallb

 

$ curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3

$ chmod 700 get_helm.sh

$ ./get_helm.sh

 

kubectl apply -f https://github.com/jetstack/cert-manager/releases/latest/download/cert-manager.yaml

 

Note: if a Docker permission (privilege) error comes up:

sudo groupadd docker

sudo usermod -aG docker $USER

 

- Re-Login or Restart the Server
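After logging back in, a generic way to confirm that the group change took effect (not from the original post) is to list the current user's groups and look for docker:

```shell
# 'docker' should appear in this list once the new login session picks up
# the usermod change.
id -nG
```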

 

 

apiVersion: v1
kind: Namespace
metadata:
  name: code-server
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: code-server
  namespace: code-server
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: code-server.your_domain
    http:
      paths:
      - backend:
          serviceName: code-server
          servicePort: 80
---
apiVersion: v1
kind: Service
metadata:
 name: code-server
 namespace: code-server
spec:
 ports:
 - port: 80
   targetPort: 8080
 selector:
   app: code-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: code-server
  name: code-server
  namespace: code-server
spec:
  selector:
    matchLabels:
      app: code-server
  replicas: 1
  template:
    metadata:
      labels:
        app: code-server
    spec:
      containers:
      - image: codercom/code-server:latest
        imagePullPolicy: Always
        name: code-server
        env:
        - name: PASSWORD
          value: "your_password"


Almost a month now.
Drinking almost every day.

Why. Why?

What is it that makes me torment myself like this?
If I keep living like this, I doubt I'll live out my natural lifespan.



These days it's drinking every day.
If I looked for excuses I could find more than a hundred.
My body must reach for drink to survive, and to forget.
To survive.
Earning another man's money was never going to be easy.
Even so, fighting each other tooth and nail inside it doesn't seem right.

The thought that if I left this well I'd become a hero, a part of the world.
That thought. I respect it.
But the outside of the well I snuck a look at wasn't all that happy, so I got scared in advance,
and today I make up my mind:
to start by planting flower seeds in this small well ...





I hate who I am today, and I hate who I was yesterday even more. If you gathered up all my

resolutions to change, the pile would probably stand taller than the 63 Building.

Put it down and just live the way I've been living.
Don't force myself to change.

Just change 0.1% a day.

A little more positive, a smile still a little awkward,
but starting today, let's begin, however small.

