ssh-keygen -f remote-key


vi Dockerfile

FROM centos:7

RUN yum -y install openssh-server

# On CentOS 7 the password can also be set with:
#   echo "1234" | passwd remote_user --stdin
RUN useradd remote_user && \
    echo "remote_user:1234" | chpasswd && \
    mkdir /home/remote_user/.ssh && \
    chmod 700 /home/remote_user/.ssh

COPY remote-key.pub /home/remote_user/.ssh/authorized_keys

RUN chown remote_user:remote_user -R /home/remote_user/.ssh && \
    chmod 600 /home/remote_user/.ssh/authorized_keys

RUN /usr/sbin/sshd-keygen
CMD ["/usr/sbin/sshd", "-D"]
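Assuming the key pair was generated as above and the image is built from this Dockerfile, connecting might look like this (the image and container names are illustrative):

```shell
# Build the image and start a container from it
docker build -t remote-host .
docker run -d --name remote-host-c remote-host

# Find the container's IP, then log in with the private key
docker inspect -f '{{.NetworkSettings.IPAddress}}' remote-host-c
ssh -i remote-key remote_user@<container-ip>
```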


kubectl run static-busybox --restart=Never --image=busybox --dry-run -o yaml --command -- sleep 1000


Without kube-apiserver

On the worker node, simply create a pod.yaml (the name doesn't matter, any YAML file) under /etc/kubernetes/manifests.

If the manifests directory does not exist, create it.
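A minimal manifest to drop into that directory might look like this (the pod name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: static-web    # illustrative name; the kubelet appends the node name
spec:
  containers:
  - name: web
    image: nginx      # illustrative image
```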

Looking at kubelet.service (systemctl status kubelet), either of these options points to the path:
--pod-manifest-path=/etc/kubernetes/manifests
--config=kubeconfig.yaml  <== the path is set inside this file

Contents of kubeconfig.yaml:
staticPodPath: /etc/kubernetes/manifests

docker ps   <= check that the container was created
kubectl get pods   <= confirm the pod exists

The pod cannot be deleted with kubectl; you have to go to that node and delete the file inside manifests.

Usually the master node's /etc/kubernetes/manifests contains
controller-manager.yaml, kube-apiserver.yaml, etcd.yaml,
which is how those components are kept running at all times (they cannot be deleted).

Static pods are created by the kubelet.
DaemonSets are created by the kube-apiserver (DaemonSet controller).

If a static pod is running on node01 and you want to delete it:

kubectl get nodes -o wide
ssh node01

cat /var/lib/kubelet/config.yaml

Run the cat command above to find the manifest path.

In that file you will find
staticPodPath: ~
Go to that path and delete the manifest file.





We have deployed a number of pods. They are labelled with tier, env, and bu. How many pods exist in the dev environment?

Use selectors to filter the output:

kubectl get pods --selector env=dev

How many pods are in the finance business unit (bu)?

kubectl get pods --selector bu=finance

How many objects are in the 'prod' environment, including Pods, ReplicaSets and any other objects?

kubectl get all --selector env=prod

Identify the pod which is 'prod', part of the 'finance' BU and is a 'frontend' tier:

kubectl get all --selector env=prod,bu=finance,tier=frontend
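These selectors match labels set in pod metadata; a pod that the last query would return might be defined like this (the pod name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: frontend-app    # illustrative name
  labels:
    env: prod
    bu: finance
    tier: frontend
spec:
  containers:
  - name: app
    image: nginx        # illustrative image
```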


With a NoSchedule taint, a pod without a matching toleration cannot be scheduled onto that node;

with a NoExecute effect, the pod is placed on a node where the taint is not applied
(and pods already running without a toleration are evicted).

Check taints:

 $ kubectl describe node node01 | grep -i taint

Create a taint:

key: spray    value: mortein    effect: NoSchedule

master $ kubectl taint node node01 spray=mortein:NoSchedule
node/node01 tainted

Remove a taint:

master $ kubectl taint node node01 spray-
node/node01 untainted

master $ kubectl describe node master | grep -i taint
Taints:             node-role.kubernetes.io/master:NoSchedule

Remove the master taint:

kubectl taint nodes master node-role.kubernetes.io/master:NoSchedule-
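A pod that tolerates the spray=mortein taint above could declare it like this (the pod name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mosquito-safe    # illustrative name
spec:
  containers:
  - name: app
    image: nginx         # illustrative image
  tolerations:
  - key: spray
    operator: Equal
    value: mortein
    effect: NoSchedule
```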


etcd installation - brief document

Run it on a separate server.

Attachment: Kubernetes-CKA-0900-Install-v1.4.pdf (2.55 MB)


ICBM - intercontinental ballistic missile
(also) IoT, Cloud, Big Data, Mobile

톺아보다 (topaboda) - to search thoroughly, combing through every corner

The Joy of Knowledge (지식의 기쁨), EBS, 2019-08-20

Mix the three primary colors of pigment and you get black.

Mix the three primary colors of light and you get white.

Not integration, but convergence.

But mix a red balloon and a blue balloon and they just burst.

Absorptive convergence - Nietzsche's transformations of the spirit: camel, lion, child.
Camel - one who carries knowledge on his back - a negative sense.
Lion - roars with a critical awareness of that knowledge.
Child - asks the world questions with boundless curiosity and, with that innocence, makes countless new attempts, positively creating new values.
Nietzsche considered the child the most important stage.

A shift from logical to emotional thinking;
an era of leading by drawing out other people's empathy.

Future competencies: the 4Cs.
Learn broadly, but dig deep where needed.
T-shaped and pi-shaped people.
We need an education that lets people put forward their own opinions.

격물치지 (gyeokmul chiji) - attaining knowledge by investigating things

'Imagination' (想像 in Chinese characters) derives from the elephant:
(in China)
people guessed what an elephant looked like from the bones of dead elephants -
they 'pictured' the elephant.

Learn the classics so you won't struggle (in Korean, a pun on 고전: 'classics' / 'to struggle').
Son Goku's headband - a wearable device.
His Flying Nimbus cloud (근두운) - the internet.
Tracking Son Goku's location - the cloud.
Analects, Book 15 - to verify.

What matters is not how much you have learned and know, but cultivating the ability to see the whole through a single thing.
'To the one who would build a ship, first show the sea.' - Saint-Exupéry
We need the attitude of discovering the new in the familiar.
Questions require structure and logic.


template:
  ...
      volumeMounts:
      - mountPath: /etc/localtime
        name: timezone-config
    volumes:
    - hostPath:
        path: /usr/share/zoneinfo/Asia/Seoul
      name: timezone-config

Article on Setting up Basic Authentication
Setup basic authentication on kubernetes
Note: This is not recommended in a production environment. This is only for learning purposes.
Follow the below instructions to configure basic authentication in a kubeadm setup.

Create a file with user details locally at /tmp/users/user-details.csv

# User File Contents
password123,user1,u0001
password123,user2,u0002
password123,user3,u0003
password123,user4,u0004
password123,user5,u0005


Edit the kube-apiserver static pod configured by kubeadm to pass in the user details. The file is located at /etc/kubernetes/manifests/kube-apiserver.yaml



apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-apiserver
    <content-hidden>
    image: k8s.gcr.io/kube-apiserver-amd64:v1.11.3
    name: kube-apiserver
    volumeMounts:
    - mountPath: /tmp/users
      name: usr-details
      readOnly: true
  volumes:
  - hostPath:
      path: /tmp/users
      type: DirectoryOrCreate
    name: usr-details


Modify the kube-apiserver startup options to include the basic-auth file



apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-apiserver
    - --authorization-mode=Node,RBAC
    <content-hidden>
    - --basic-auth-file=/tmp/users/user-details.csv
Create the necessary roles and role bindings for these users:



---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""] # "" indicates the core API group
  resources: ["pods"]
  verbs: ["get", "watch", "list"]

---
# This role binding allows "user1" to read pods in the "default" namespace.
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: read-pods
  namespace: default
subjects:
- kind: User
  name: user1 # Name is case sensitive
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role # this must be Role or ClusterRole
  name: pod-reader # this must match the name of the Role or ClusterRole you wish to bind to
  apiGroup: rbac.authorization.k8s.io
Once created, you may authenticate to the kube-apiserver using the user's credentials:

curl -v -k https://localhost:6443/api/v1/pods -u "user1:password123"


Resource configuration backup
kubectl get all --all-namespaces -o yaml > all-deploy-service.yaml

etcd backup & restore
When restoring, the cluster token and data directory must be different from the original ones.

# 1. Get etcdctl utility if it's not already present.

Reference: https://github.com/etcd-io/etcd/releases

```
ETCD_VER=v3.3.13

# choose either URL
GOOGLE_URL=https://storage.googleapis.com/etcd
GITHUB_URL=https://github.com/etcd-io/etcd/releases/download
DOWNLOAD_URL=${GOOGLE_URL}

rm -f /tmp/etcd-${ETCD_VER}-linux-amd64.tar.gz
rm -rf /tmp/etcd-download-test && mkdir -p /tmp/etcd-download-test

curl -L ${DOWNLOAD_URL}/${ETCD_VER}/etcd-${ETCD_VER}-linux-amd64.tar.gz -o /tmp/etcd-${ETCD_VER}-linux-amd64.tar.gz
tar xzvf /tmp/etcd-${ETCD_VER}-linux-amd64.tar.gz -C /tmp/etcd-download-test --strip-components=1
rm -f /tmp/etcd-${ETCD_VER}-linux-amd64.tar.gz

/tmp/etcd-download-test/etcd --version
ETCDCTL_API=3 /tmp/etcd-download-test/etcdctl version

mv /tmp/etcd-download-test/etcdctl /usr/bin
```

# 2. Backup
For minikube, refer to /etc/kubernetes/manifests/kube-apiserver.yaml

```
ETCDCTL_API=3 etcdctl --endpoints=https://[127.0.0.1]:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt \
--cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key \
snapshot save /tmp/snapshot-pre-boot.db
```

# -----------------------------
# Disaster Happens
# -----------------------------

# 3. Restore ETCD Snapshot to a new folder

```
ETCDCTL_API=3 etcdctl --endpoints=https://[127.0.0.1]:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt \
--name=master \
--cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key \
--data-dir /var/lib/etcd-from-backup \
--initial-cluster=master=https://127.0.0.1:2380 \
--initial-cluster-token etcd-cluster-1 \
--initial-advertise-peer-urls=https://127.0.0.1:2380 \
snapshot restore /tmp/snapshot-pre-boot.db
```

# 4. Modify /etc/kubernetes/manifests/etcd.yaml

Update the ETCD pod to use the new data directory and cluster token by modifying the pod definition file at `/etc/kubernetes/manifests/etcd.yaml`. When this file is updated, the ETCD pod is automatically re-created, as this is a static pod placed under the `/etc/kubernetes/manifests` directory.

Update --data-dir to use new target location

```
--data-dir=/var/lib/etcd-from-backup
```

Update new initial-cluster-token to specify new cluster

```
--initial-cluster-token=etcd-cluster-1
```

Update volumes and volume mounts to point to new path

```
volumeMounts:
- mountPath: /var/lib/etcd-from-backup
name: etcd-data
- mountPath: /etc/kubernetes/pki/etcd
name: etcd-certs
hostNetwork: true
priorityClassName: system-cluster-critical
volumes:
- hostPath:
path: /var/lib/etcd-from-backup
type: DirectoryOrCreate
name: etcd-data
- hostPath:
path: /etc/kubernetes/pki/etcd
type: DirectoryOrCreate
name: etcd-certs
```

> Note: You don't really need to update data directory and volumeMounts.mountPath path above. You could simply just update the hostPath.path in the volumes section to point to the new directory. But if you are not working with a kubeadm deployed cluster, then you might have to update the data directory. That's why I left it as is.

If etcd is not deployed as a pod, verify the snapshot after saving it:
ETCDCTL_API=3 etcdctl snapshot status snapshot.db

service kube-apiserver stop

After all modifications are complete:

systemctl daemon-reload
service etcd restart
service kube-apiserver start

