Deleting Pods that failed or completed, as seen in kubectl get pods

kubectl get jobs

kubectl delete job <job-name>
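The commands above remove the Job object itself. If you only want to clear out the finished or failed Pods that show up in kubectl get pods, a field selector should also work on reasonably recent kubectl versions (a sketch; pick the phases you actually want to purge):

kubectl delete pods --field-selector=status.phase==Succeeded
kubectl delete pods --field-selector=status.phase==Failed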


https://supergiant.io/blog/introduction-to-istio-service-mesh-for-kubernetes/

 

Introduction to Istio Service Mesh for Kubernetes

In this blog post, we describe the architecture of Istio Service Mesh and how to deploy it to your Kubernetes cluster


Required images

docker.io/istio/proxyv2:1.0.5
docker.io/istio/galley:1.0.5
docker.io/istio/pilot:1.0.5
docker.io/istio/mixer:1.0.5
docker.io/istio/sidecar_injector:1.0.5
docker.io/jaegertracing/all-in-one:1.5
docker.io/prom/prometheus:v2.3.1
docker.io/istio/servicegraph:1.0.5
docker.io/istio/citadel:1.0.5
grafana/grafana:5.2.3
quay.io/coreos/hyperkube:v1.7.8_coreos.0

What Is a Service Mesh?

A service mesh is a configurable infrastructure and network layer for microservices applications that enables efficient interaction between them and integrates all the functionality described above. A service mesh is normally implemented through a proxy instance, called a sidecar, that is added to each service instance. Sidecars do not affect the application code and abstract the service mesh functionality away from the microservices. This allows developers to concentrate on developing and maintaining the application, while OPs can manage the service mesh in the distributed environment.

The most popular service meshes are Linkerd, Envoy, and Istio. In this tutorial, we’ll discuss Istio Service Mesh launched by Google, IBM, and Lyft in 2017. Its architecture and features are discussed below.

Istio Service Mesh

Istio is a platform-independent service mesh that can run in a variety of environments including cloud, on-premise, Mesos, Kubernetes, and Consul. The platform allows creating a network of microservices with service-to-service authentication, monitoring, load balancing, traffic routing, and many other service mesh features described above. You can create the Istio service mesh for your microservices application by adding a special sidecar proxy that intercepts all network calls between your microservices and subjects them to Istio checks and user-defined traffic rules.

Two basic components of the Istio architecture include Data Plane and Control Plane (see the image below).

Data Plane

The data plane is based on a set of intelligent Envoy proxies deployed as sidecars inside the Pods managed by the relevant Service. Istio leverages Envoy features such as dynamic service discovery, load balancing, TLS termination, circuit breakers, HTTP/2 and gRPC proxying, health checks, staged rollouts with percentage-based traffic splits, fault injection, and telemetry.

Control Plane

The control plane configures and manages Envoy proxies to route traffic to microservices. It is also used to configure Mixers. These general purpose policy and telemetry hubs can enforce access control and usage policies and collect metrics from proxies to guide their decisions. Mixers use request-level attributes extracted by proxies to create and manage their policy decisions.

In general, the Control Plane functionality involves the following:

  • Automatic load balancing for HTTP, WebSocket, TCP traffic, and gRPC.
  • Fine-grained control of traffic behavior including rich routing rules, circuit breakers, retries, failovers, and fault injection. Istio makes it easy to set up A/B testing, canary rollouts, and staged rollouts with percentage-based traffic splits.
  • A policy layer with support for access controls, rate limits, and quotas.
  • Creation of metrics, logs, and traces for all Ingress and Egress traffic within your cluster. Istio’s custom dashboard provides valuable insights into the performance and health of your services.
  • Strong identity-based authentication and authorization, and encryption of service communication at scale to secure your applications.

Source: Istio Documentation

In addition to Mixers discussed above, this functionality is implemented by the following Istio components:

  • Pilot. As the core component used for traffic management in Istio, Pilot configures and manages traffic routing and service discovery for the Envoy sidecars, and it ensures resiliency through failure-recovery features such as timeouts, retries, and circuit breakers.
  • Citadel. This is a security component of Istio that offers strong service-to-service and user authentication and ships with built-in identity and credentials management. Citadel can also be used to encrypt traffic in the Istio service mesh.
  • Galley. This is the component responsible for validating user-created Istio API configuration on behalf of the other Istio Control Plane components.

All these features make Istio a powerful infrastructure layer for microservices running in your Kubernetes cluster. Istio’s traffic management, security, access control, and failover management capabilities make it an indispensable component of modern cloud-native applications.

In what follows, we’ll guide you through installing Istio and its components in the local Minikube cluster. By the end of this tutorial, you’ll have Istio installed and configured on your infrastructure and understand how to use basic traffic routing capabilities of this service mesh. Let’s get started!

Tutorial

In order to test the examples presented below, the following prerequisites are required:

  • A running Kubernetes cluster. See Supergiant documentation for more information about deploying a Kubernetes cluster with Supergiant. As an alternative, you can install a single-node Kubernetes cluster on a local system using Minikube.
  • kubectl command line tool installed and configured to communicate with the cluster. See how to install kubectl here.

Note: all examples in this tutorial assume that you are using a Minikube cluster deployed on a local machine. To run Istio locally, you’ll need Minikube version 0.28.0 or later.

Step 1: Prepare Minikube for Istio

In order to install Istio’s control plane add-ons and other applications for telemetry, Istio documentation recommends starting Minikube with 8192 MB  of memory and 4 CPUs :

For Kubernetes version > 1.9, you should start Minikube with the following configuration (see the code below).

Note: Please replace --vm-driver=your_vm_driver_choice with your preferred VM driver option for Minikube (e.g., virtualbox). For available VM drivers for Minikube, consult the official Minikube documentation.

 

minikube start --memory=8192 --cpus=4 --kubernetes-version=v1.9.4 \

    --extra-config=controller-manager.cluster-signing-cert-file="/var/lib/localkube/certs/ca.crt" \

    --extra-config=controller-manager.cluster-signing-key-file="/var/lib/localkube/certs/ca.key" \

    --extra-config=apiserver.admission-control="NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota" \

    --vm-driver=`your_vm_driver_choice`

 

Upon running this command, Minikube will be configured for Istio installation.

After starting Minikube, you might also consider removing the taints applied to the Minikube node because they can affect the scheduling of Istio Pods. To remove the taint, run the following command:

kubectl taint node minikube node-role.kubernetes.io/master:NoSchedule-

 

 

Step 2: Install Istio on your Minikube Cluster

The first thing you need to do to install Istio on Minikube is to get the latest release of Istio containing various CRDs, YAML manifests, and Istio command line tools. To get the latest release of Istio for MacOS and Linux, run the following command:

 

curl -L https://git.io/getLatestIstio | sh -

Next, you need to go to the Istio package directory. For example, if the downloaded package is named istio-1.0.5, run:

cd istio-1.0.5

If you run the ls command inside the package directory, you’ll see the following assets:

  • Installation .yaml  files for Kubernetes in the install/  directory.
  • Sample applications in the samples/  folder. We’ll use one of these applications later to demonstrate Istio traffic management features.
  • The bin/  directory with the istioctl  client binary. We’re going to use this binary to manually inject Envoy as a sidecar proxy and to create routing rules and policies.
  • The istio.VERSION  configuration file.

To use the istioctl  client, you have to add its path to the PATH environment variable. On macOS and Linux, run:

 

export PATH=$PWD/bin:$PATH

 

The installation directory stores Istio’s Custom Resource Definitions used to install Istio. CRDs allow extending the Kubernetes API with your custom resources. Istio extensively uses CRDs to create its own API on top of Kubernetes. We can install Istio’s CRDs by running the following command:

 

kubectl apply -f install/kubernetes/helm/istio/templates/crds.yaml

 

customresourcedefinition.apiextensions.k8s.io "virtualservices.networking.istio.io" created

customresourcedefinition.apiextensions.k8s.io "destinationrules.networking.istio.io" created

customresourcedefinition.apiextensions.k8s.io "serviceentries.networking.istio.io" created

customresourcedefinition.apiextensions.k8s.io "gateways.networking.istio.io" created

customresourcedefinition.apiextensions.k8s.io "envoyfilters.networking.istio.io" created

customresourcedefinition.apiextensions.k8s.io "policies.authentication.istio.io" created

customresourcedefinition.apiextensions.k8s.io "meshpolicies.authentication.istio.io" created

customresourcedefinition.apiextensions.k8s.io "httpapispecbindings.config.istio.io" created

customresourcedefinition.apiextensions.k8s.io "httpapispecs.config.istio.io" created

....

Within a few seconds, Istio’s CRDs will be committed to the kube-apiserver. Let’s move on!
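If you want to double-check that the CRDs were actually registered, listing them is a quick sanity check (not part of the original article):

kubectl get customresourcedefinitions | grep istio.io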

 

Step 3: Install Istio’s Core Components

There are several options for installing Istio’s core components described in the Istio’s Quick Guide for Kubernetes. We will install Istio with default mutual TLS authentication between sidecars, which is enough for demonstration purposes. In the production environment, however, you should opt for installing Istio using the Helm chart, which allows for more control and customization of Istio in your Kubernetes cluster.

To install Istio with the default mutual TLS authentication between sidecars, run

 

kubectl apply -f install/kubernetes/istio-demo-auth.yaml

 

Awesome! You have Istio and its core components like Pilot, Citadel, and Envoy installed in your Kubernetes cluster. We are ready to use Istio to create a service mesh!
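Before moving on, it is worth confirming that the control plane components (Pilot, Mixer, Citadel, Galley, the ingress gateway, and the telemetry add-ons) actually came up. Assuming the default istio-system namespace used by istio-demo-auth.yaml:

kubectl get pods -n istio-system

kubectl get svc -n istio-system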

Step 4: Deploy Bookinfo Application

Now that Istio and its core components are installed, we will demonstrate how the service mesh works using the Bookinfo sample application from the package’s /samples  folder. The Bookinfo application displays book information similarly to catalog entries in online bookstores. Each book entry features a book description, book meta details (ISBN, page count), and a few book reviews (with or without ratings).

Bookinfo is a typical example of a microservices application that screams to be managed with Istio.

Why is this so? The app is broken into four separate microservices (productpage, details, reviews, and ratings), each written in a different programming language. The “productpage” microservice is written in Python, the “details” microservice in Ruby, the “reviews” microservice in Java, and the “ratings” microservice in Node.js. In addition, there are three versions of the reviews microservice. The versions differ in how they display ratings and whether they call the ratings service. Managing this heterogeneity definitely requires a service mesh that can connect loosely coupled microservices together and route traffic between different versions of the same microservice.

To apply service mesh capabilities to the Bookinfo app, we don’t need to change anything in its code. All we need to do is to enable the Istio environment by injecting Envoy sidecars alongside each microservice described above. Once injected, each Envoy sidecar will intercept incoming and outgoing traffic to the microservices and provide hooks needed for the Istio Control Plane (see the blog’s intro) to enable traffic routing, load balancing, telemetry, and access control for this application.

Before deploying the Bookinfo app, let’s first look at the contents of the bookinfo.yaml file that contains all the manifests:

 

 

# Copyright 2017 Istio Authors

#

#   Licensed under the Apache License, Version 2.0 (the "License");

#   you may not use this file except in compliance with the License.

#   You may obtain a copy of the License at

#

#       http://www.apache.org/licenses/LICENSE-2.0

#

#   Unless required by applicable law or agreed to in writing, software

#   distributed under the License is distributed on an "AS IS" BASIS,

#   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

#   See the License for the specific language governing permissions and

#   limitations under the License.

 

##################################################################################################

# Details service

##################################################################################################

apiVersion: v1

kind: Service

metadata:

  name: details

  labels:

    app: details

spec:

  ports:

  - port: 9080

    name: http

  selector:

    app: details

---

apiVersion: extensions/v1beta1

kind: Deployment

metadata:

  name: details-v1

spec:

  replicas: 1

  template:

    metadata:

      labels:

        app: details

        version: v1

    spec:

      containers:

      - name: details

        image: istio/examples-bookinfo-details-v1:1.8.0

        imagePullPolicy: IfNotPresent

        ports:

        - containerPort: 9080

---

##################################################################################################

# Ratings service

##################################################################################################

apiVersion: v1

kind: Service

metadata:

  name: ratings

  labels:

    app: ratings

spec:

  ports:

  - port: 9080

    name: http

  selector:

    app: ratings

---

apiVersion: extensions/v1beta1

kind: Deployment

metadata:

  name: ratings-v1

spec:

  replicas: 1

  template:

    metadata:

      labels:

        app: ratings

        version: v1

    spec:

      containers:

      - name: ratings

        image: istio/examples-bookinfo-ratings-v1:1.8.0

        imagePullPolicy: IfNotPresent

        ports:

        - containerPort: 9080

---

##################################################################################################

# Reviews service

##################################################################################################

apiVersion: v1

kind: Service

metadata:

  name: reviews

  labels:

    app: reviews

spec:

  ports:

  - port: 9080

    name: http

  selector:

    app: reviews

---

apiVersion: extensions/v1beta1

kind: Deployment

metadata:

  name: reviews-v1

spec:

  replicas: 1

  template:

    metadata:

      labels:

        app: reviews

        version: v1

    spec:

      containers:

      - name: reviews

        image: istio/examples-bookinfo-reviews-v1:1.8.0

        imagePullPolicy: IfNotPresent

        ports:

        - containerPort: 9080

---

apiVersion: extensions/v1beta1

kind: Deployment

metadata:

  name: reviews-v2

spec:

  replicas: 1

  template:

    metadata:

      labels:

        app: reviews

        version: v2

    spec:

      containers:

      - name: reviews

        image: istio/examples-bookinfo-reviews-v2:1.8.0

        imagePullPolicy: IfNotPresent

        ports:

        - containerPort: 9080

---

apiVersion: extensions/v1beta1

kind: Deployment

metadata:

  name: reviews-v3

spec:

  replicas: 1

  template:

    metadata:

      labels:

        app: reviews

        version: v3

    spec:

      containers:

      - name: reviews

        image: istio/examples-bookinfo-reviews-v3:1.8.0

        imagePullPolicy: IfNotPresent

        ports:

        - containerPort: 9080

---

##################################################################################################

# Productpage services

##################################################################################################

apiVersion: v1

kind: Service

metadata:

  name: productpage

  labels:

    app: productpage

spec:

  ports:

  - port: 9080

    name: http

  selector:

    app: productpage

---

apiVersion: extensions/v1beta1

kind: Deployment

metadata:

  name: productpage-v1

spec:

  replicas: 1

  template:

    metadata:

      labels:

        app: productpage

        version: v1

    spec:

      containers:

      - name: productpage

        image: istio/examples-bookinfo-productpage-v1:1.8.0

        imagePullPolicy: IfNotPresent

        ports:

        - containerPort: 9080

---

The manifests above contain one Service for each microservice of the app and one Deployment per version, including all three versions of the reviews microservice. To install the Bookinfo app, we will use manual sidecar injection, which adds an Envoy sidecar exposing Istio capabilities to each microservice of the app. We use the istioctl kube-inject command to modify the bookinfo.yaml file before creating the Deployments:

 

$ kubectl apply -f <(istioctl kube-inject -f samples/bookinfo/platform/kube/bookinfo.yaml)

service "details" created

deployment.extensions "details-v1" created

service "ratings" created

deployment.extensions "ratings-v1" created

service "reviews" created

deployment.extensions "reviews-v1" created

deployment.extensions "reviews-v2" created

deployment.extensions "reviews-v3" created

service "productpage" created

 

Alternatively, you can deploy the app by enabling automatic sidecar injection. In this case, you should label the default namespace with istio-injection=enabled:

 

kubectl label namespace default istio-injection=enabled

 

And then simply deploy the app using kubectl :

 

kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml

 

Note: Automatic sidecar injection requires Kubernetes 1.9 or later.
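To confirm that the namespace label was applied, you can list namespaces with the label shown as a column (a quick check, not from the original article):

kubectl get namespace -L istio-injection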

Both commands will launch four microservices and start all three versions of the reviews service. In a realistic scenario, you would need to deploy new versions of a microservice over time instead of deploying them simultaneously.

Now, let’s see if the Bookinfo Services were successfully created:

 

kubectl get services

NAME          TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE

details       ClusterIP   10.108.25.70     <none>        9080/TCP   1h

productpage   ClusterIP   10.108.40.126    <none>        9080/TCP   1h

ratings       ClusterIP   10.110.139.206   <none>        9080/TCP   1h

reviews       ClusterIP   10.108.208.78    <none>        9080/TCP   1h

 

and if the Bookinfo Pods are running:

kubectl get pods

NAME                              READY     STATUS    RESTARTS   AGE

 

details-v1-7cbb4f55dd-rgf4q       2/2       Running   4          2h

productpage-v1-68f984bc98-kkzsw   2/2       Running   4          2h

ratings-v1-b6797d7dd-sdkxn        2/2       Running   4          2h

reviews-v1-7fd69f69f-fzc2z        2/2       Running   4          2h

reviews-v2-f7b45b5c6-cq42r        2/2       Running   4          2h

reviews-v3-77cfc9cdfc-wdj6d       2/2       Running   4          2h

Cool! The Bookinfo app was successfully deployed. Now, let’s configure the Istio Gateway to make the app available from outside of your cluster.

Step 5: Enable Istio Gateway

An Istio Gateway configures a load balancer for HTTP/TCP traffic at the edge of the service mesh and enables Ingress traffic for an application. Essentially, we need an Istio Gateway to make our applications accessible from outside of the Kubernetes cluster. After enabling the gateway, users can also use standard Istio rules to control HTTP(S) and TCP traffic entering a Gateway by binding a VirtualService to it.

We can define the Ingress gateway for the Bookinfo application using the sample gateway configuration located in samples/bookinfo/networking/bookinfo-gateway.yaml. The file contains the following manifests for the Gateway and VirtualService:

 

apiVersion: networking.istio.io/v1alpha3

kind: Gateway

metadata:

  name: bookinfo-gateway

spec:

  selector:

    istio: ingressgateway # use istio default controller

  servers:

  - port:

      number: 80

      name: http

      protocol: HTTP

    hosts:

    - "*"

---

apiVersion: networking.istio.io/v1alpha3

kind: VirtualService

metadata:

  name: bookinfo

spec:

  hosts:

  - "*"

  gateways:

  - bookinfo-gateway

  http:

  - match:

    - uri:

        exact: /productpage

    - uri:

        exact: /login

    - uri:

        exact: /logout

    - uri:

        prefix: /api/v1/products

    route:

    - destination:

        host: productpage

        port:

          number: 9080

The Gateway manifest simply creates an Istio gateway for all incoming HTTP traffic for all hosts. To make the Gateway work for our Bookinfo application, we also bind a VirtualService with a list of all microservices routes to the Gateway.

In essence, a VirtualService is Istio’s abstraction that defines a set of rules controlling how requests for a given microservice are routed within an Istio service mesh. We can use virtual services to route requests to different versions of the same microservice or to a completely different microservice than was requested. We bind a VirtualService to a given Gateway by specifying the gateway’s name in the gateways field of the configuration (see the manifest above). Now that you understand how Gateways and VirtualServices work, let’s enable them by running the following command:

$ kubectl apply -f samples/bookinfo/networking/bookinfo-gateway.yaml

gateway.networking.istio.io "bookinfo-gateway" created

virtualservice.networking.istio.io "bookinfo" created

 

Confirm that the gateway has been created by running the following command:

$ kubectl get gateway

 

NAME               AGE

bookinfo-gateway   22s

 

Step 6: Set the INGRESS_HOST and INGRESS_PORT Variables for Accessing the Gateway

The next step is setting the INGRESS_HOST  and INGRESS_PORT  variables for accessing the gateway. First, you need to determine if your cluster is running in an environment with external load balancers. To check this, run the following command:

 

$ kubectl get svc istio-ingressgateway -n istio-system

 

NAME                   TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                                                                                                                   AGE

istio-ingressgateway   LoadBalancer   10.102.130.14   <pending>     8

 

If the EXTERNAL-IP  value is <pending> or <none> , the environment does not provide an external load balancer for the Ingress gateway. This is what we expect when running this tutorial on Minikube. In this case, you can access the Istio Gateway using the Service’s NodePort.

If you don’t have external load balancers, set the Ingress ports running the following command:

 

$ export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')

$ export SECURE_INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="https")].nodePort}')

 

Setting the ingress IP depends on the cluster provider. For Minikube, we use:

export INGRESS_HOST=$(minikube ip)
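The next steps refer to a GATEWAY_URL variable; presumably it is just the ingress host and port combined, along the lines of:

export GATEWAY_URL=$INGRESS_HOST:$INGRESS_PORT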

 

Let’s see if the environmental variable was created:

 

printenv GATEWAY_URL

192.168.99.100:31380

 

If you have an environment with external load balancers, you should follow the instructions here.

Awesome! Let’s now confirm that the Bookinfo application is running:

$ curl -o /dev/null -s -w "%{http_code}\n" http://${GATEWAY_URL}/productpage

200

Because we used the Istio Gateway and the VirtualService bound to it, you can also access the Bookinfo application in your browser by visiting http://$GATEWAY_URL/productpage.

 

Try refreshing the product page several times, and you’ll notice that different versions of the reviews microservice are displayed: one version has no stars, and the others have stars in different colors (red and black).

Step 7: Set Default Destination Rules

The first thing we need to do to implement version routing with Istio is to define subsets in destination rules.

Subsets are actually different versions of the application binary. These can be different API versions of the app or iterative changes to the same service deployed in different environments (staging, prod, dev, etc.). Subsets can be used for various scenarios such as A/B testing and canary rollouts. The choice of version to display can be decided based on headers, URL, and weights assigned to each version (see our blog about traffic splitting in Traefik for more information about traffic weights).
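This tutorial only routes by version and by request header, but as an illustration of the weight-based splits mentioned above, a canary-style VirtualService might look roughly like the sketch below (illustrative only, not one of the steps in this walkthrough):

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 90   # 90% of requests keep hitting the current version
    - destination:
        host: reviews
        subset: v3
      weight: 10   # 10% is shifted to the canary version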

In turn, a destination refers to the network-addressable service to which the request/connection is sent after a routing rule is processed. The service in the service registry (e.g., Kubernetes Services, Consul services) to which the traffic is routed should be referenced in the destination.host field.

In what follows, we create default destination rules for the Bookinfo services. The destination rules manifest looks as follows:

 

apiVersion: networking.istio.io/v1alpha3

kind: DestinationRule

metadata:

  name: productpage

spec:

  host: productpage

  trafficPolicy:

    tls:

      mode: ISTIO_MUTUAL

  subsets:

  - name: v1

    labels:

      version: v1

---

apiVersion: networking.istio.io/v1alpha3

kind: DestinationRule

metadata:

  name: reviews

spec:

  host: reviews

  trafficPolicy:

    tls:

      mode: ISTIO_MUTUAL

  subsets:

  - name: v1

    labels:

      version: v1

  - name: v2

    labels:

      version: v2

  - name: v3

    labels:

      version: v3

---

apiVersion: networking.istio.io/v1alpha3

kind: DestinationRule

metadata:

  name: ratings

spec:

  host: ratings

  trafficPolicy:

    tls:

      mode: ISTIO_MUTUAL

  subsets:

  - name: v1

    labels:

      version: v1

  - name: v2

    labels:

      version: v2

  - name: v2-mysql

    labels:

      version: v2-mysql

  - name: v2-mysql-vm

    labels:

      version: v2-mysql-vm

---

apiVersion: networking.istio.io/v1alpha3

kind: DestinationRule

metadata:

  name: details

spec:

  host: details

  trafficPolicy:

    tls:

      mode: ISTIO_MUTUAL

  subsets:

  - name: v1

    labels:

      version: v1

  - name: v2

    labels:

      version: v2

---

Since we deployed Istio with default mutual TLS authentication, we need to execute the following command:

$ kubectl apply -f samples/bookinfo/networking/destination-rule-all-mtls.yaml

destinationrule.networking.istio.io "productpage" created

destinationrule.networking.istio.io "reviews" created

destinationrule.networking.istio.io "ratings" created

destinationrule.networking.istio.io "details" created

 

You’ll need to wait a few seconds for the destination rules to be enabled. Then, you can display the rules with the following command:

 

$ kubectl get destinationrules -o yaml

 

Step 8: Implement Request Routing

Now, we are ready to change the default round-robin behavior for traffic routing. In this example, we will first route all traffic to v1 (version 1) of each microservice. Then, we will route traffic based on the value of an HTTP request header.

In order to route to only one version, we need to apply new VirtualServices that set the default version for the microservices. In this example, the virtual services will route to v1 of all the microservices in the application. The manifest looks as follows:

 

apiVersion: networking.istio.io/v1alpha3

kind: VirtualService

metadata:

  name: productpage

spec:

  hosts:

  - productpage

  http:

  - route:

    - destination:

        host: productpage

        subset: v1

---

apiVersion: networking.istio.io/v1alpha3

kind: VirtualService

metadata:

  name: reviews

spec:

  hosts:

  - reviews

  http:

  - route:

    - destination:

        host: reviews

        subset: v1

---

apiVersion: networking.istio.io/v1alpha3

kind: VirtualService

metadata:

  name: ratings

spec:

  hosts:

  - ratings

  http:

  - route:

    - destination:

        host: ratings

        subset: v1

---

apiVersion: networking.istio.io/v1alpha3

kind: VirtualService

metadata:

  name: details

spec:

  hosts:

  - details

  http:

  - route:

    - destination:

        host: details

        subset: v1

---

As you can see, we have four VirtualServices, one for each microservice: details, ratings, reviews, and productpage. Each virtual service routes traffic to v1 of its microservice. This is specified in the destination property, which points to a specific subset defined in the destination rules we created above. For example:

- destination:

        host: details

        subset: v1

 

To apply the VirtualServices, run the following command and wait a few seconds:

$ kubectl apply -f samples/bookinfo/networking/virtual-service-all-v1.yaml

virtualservice.networking.istio.io "productpage" created

virtualservice.networking.istio.io "reviews" created

virtualservice.networking.istio.io "ratings" created

virtualservice.networking.istio.io "details" created

Now, you can display the defined routes with the following command

 

kubectl get virtualservices -o yaml

You can also see the corresponding subset definitions by running:

 

$ kubectl get destinationrules -o yaml

Awesome! We have configured Istio to route to v1 of the Bookinfo microservices. You can test the configuration by refreshing the /productpage several times. No matter how many times you refresh, the same version of the reviews microservice, with no rating stars, is displayed. This implies that all traffic is routed to reviews:v1, which does not call the star rating service.

Step 9: Enable User-Based Routing

In the next example, we will implement user-based traffic routing: requests from a specific user will be routed to a specific service version. For example, all traffic from the user john will be routed to reviews:v2, and all traffic from the user mary will be routed to reviews:v3.

We will implement this functionality by adding a custom end-user  header to all outbound HTTP requests to the reviews service.

Let’s take a look at the virtual service manifest to understand how this works:

apiVersion: networking.istio.io/v1alpha3

kind: VirtualService

metadata:

  name: reviews

spec:

  hosts:

    - reviews

  http:

  - match:

    - headers:

        end-user:

          exact: john

    route:

    - destination:

        host: reviews

        subset: v2

  - match:

    - headers:

        end-user:

          exact: mary

    route:

    - destination:

        host: reviews

        subset: v3  

  - route:

    - destination:

        host: reviews

        subset: v1

 

This manifest checks the request headers: if the end-user header matches “john,” the request is routed to reviews:v2. If it matches “mary,” the request is routed to reviews:v3. In all other cases, requests are routed to v1.

Enable these rules by running:

 

$ kubectl apply -f samples/bookinfo/networking/virtual-service-reviews-test-v2.yaml

virtualservice.networking.istio.io "reviews" configured

 

Let’s confirm that the rules have been applied. First, log in to the Bookinfo app as john  (use random password) and refresh the browser. You’ll see that black stars ratings appear next to each review no matter how often you refresh the browser. Next, log in as mary . Now, if you refresh the page, you’ll see red star rating displayed by the v2 of the reviews microservice. Finally, if you log in as any other user or don’t log in at all, the v1 of the reviews microservice with no ratings will be displayed. It’s as simple as that! We have successfully configured Istio to route traffic based on user identity.

 

Step 10: Clean Up

Now that the tutorial is over, let’s clean up after ourselves:

Delete Bookinfo App:

Remove the application virtual services:

 

$ kubectl delete -f samples/bookinfo/networking/virtual-service-all-v1.yaml

Delete the routing rules and terminate the application pods:

$ samples/bookinfo/platform/kube/cleanup.sh

 

If you wish, you can also delete Istio from the cluster.

If you installed Istio with istio-demo.yaml  run:

$ kubectl delete -f install/kubernetes/istio-demo.yaml

 

If you installed Istio with istio-demo-auth.yaml run:

$ kubectl delete -f install/kubernetes/istio-demo-auth.yaml

 

You can also delete all Istio CRDs if needed:

$ kubectl delete -f install/kubernetes/helm/istio/templates/crds.yaml -n istio-system

 

 

 

 

 


To see the file size of your containers, you can use the -s argument of docker ps:

docker ps -s


List the size of a container:

du -d 2 -h /var/lib/docker/devicemapper | grep `docker inspect -f "{{.Id}}" <container_name>`
List the sizes of a container's volumes:

docker inspect -f "{{.Volumes}}" <container_name> | sed 's/map\[//' | sed 's/]//' | tr ' ' '\n' | sed 's/.*://' | xargs sudo du -d 1 -h
Edit: List all running containers' sizes and volumes:

for d in `docker ps -q`; do
d_name=`docker inspect -f {{.Name}} $d`
echo "========================================================="
echo "$d_name ($d) container size:"
sudo du -d 2 -h /var/lib/docker/devicemapper | grep `docker inspect -f "{{.Id}}" $d`
echo "$d_name ($d) volumes:"
docker inspect -f "{{.Volumes}}" $d | sed 's/map\[//' | sed 's/]//' | tr ' ' '\n' | sed 's/.*://' | xargs sudo du -d 1 -h
done


## Shortening commands ##
alias k=kubectl
echo "alias k=kubectl" >> ~/.bashrc
source <(kubectl completion bash| sed s/kubectl/k/g)

k version --short

If an error like the one above appears when running npm install -g gulp,
try npm config set strict-ssl false and run the install again.

Afterwards, restore the setting:

npm config set strict-ssl true

How To Change Log Rate Limiting In Linux
Posted by Jarrod on March 23, 2016
By default in Linux there are a few different mechanisms in place that may rate limit logging. These are primarily the systemd journal and rsyslog rate limits that are in place by default.

Here we cover modifying or removing rate limiting for logging.


Why Rate Limiting?

Rate limitations on logging are in place to prevent logging from using excessive levels of system resources. To log an event, it needs to be written to disk which uses system resources. If there are too many of these events coming in that need to be recorded to disk they can overwhelm a system and cause more important services to respond slowly or fail.

For this reason it is generally not recommended to completely disable rate limiting, but to tweak it as required. At the same time we do not want to drop important messages that may be required to generate a critical alert, so a balance needs to be found.

Systemd Journal Rate Limiting

How do we know if the journal limits are actually causing us to drop log messages? Generally you will see similar messages in the log files as below.

Jan 9 09:18:07 server1 journal: Suppressed 7124 messages from /system.slice/named.service
In this particular case we have a DNS server running Bind which is logging all DNS queries. 7124 messages were suppressed and dropped (not logged) because they were coming in too fast in this example.

By default systemd allows 1,000 messages within a 30 second period.

The limits are controlled in the /etc/systemd/journald.conf file.

RateLimitInterval=30s
RateLimitBurst=1000
If more messages than the amount specified in RateLimitBurst are received within the time defined by RateLimitInterval, all further messages within the interval are dropped until the interval is over.

You can modify these values as you see fit, or completely disable systemd journal rate limiting by setting both to 0.
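For example, a journald.conf that keeps rate limiting enabled but raises the ceiling might look like this (the numbers are illustrative, not a recommendation):

[Journal]
RateLimitInterval=30s
RateLimitBurst=5000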

If you make any changes to /etc/systemd/journald.conf you will need to restart the systemd-journald service to apply the changes.

systemctl restart systemd-journald

Rsyslog Rate Limiting

The systemd journal limit is hit before any default rsyslog limits as its default limits are smaller. By default rsyslog will accept 20,000 messages within a 10 minute period.

Therefore if you increase the rate limiting of the systemd journal logging as shown above you may then start to receive similar messages in your syslog logs as shown below.

....
Jan 9 22:42:35 server1 rsyslogd-2177: imjournal: begin to drop messages due to rate-limiting
Jan 9 22:51:26 server1 rsyslogd-2177: imjournal: 143847 messages lost due to rate-limiting
...
The first message states that messages will be dropped as the limit has been reached, and once the interval is over (after 10 minutes by default) the amount of messages that were lost due to rate limiting will then be logged.

The limits are controlled in the /etc/rsyslog.conf file.

$ModLoad imjournal
$imjournalRatelimitInterval 600
$imjournalRatelimitBurst 20000
For further information see the imjournal rsyslog documentation.

Again you can modify these values as you like, and they can be completely disabled by setting both to 0.
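For example, raising the imjournal burst while keeping the default 10 minute window could look like this in /etc/rsyslog.conf (illustrative values):

$ModLoad imjournal
$imjournalRatelimitInterval 600
$imjournalRatelimitBurst 50000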

If you make any changes to the /etc/rsyslog.conf file you will need to restart the rsyslog service to apply the changes.

systemctl restart rsyslog

Summary

As shown we can check our log files to find out if logs are being dropped due to either systemd journal or syslog rate limits. The systemd journal default rate limit is much lower than the syslog default rate limit so it will be triggered first. Once you increase the rate limiting on the systemd journal logging you may then start to experience additional rate limiting by syslog, which can then also be increased if required.


docker run -it -d --name prometheus -p 9090:9090  -v /home/prometheus/prometheus.yml:/etc/prometheus/prometheus.yml -v /home/prometheus:/prometheus dockertest2.io:12000/prometheus:latest --config.file=/etc/prometheus/prometheus.yml --web.listen-address="0.0.0.0:9090" --web.enable-lifecycle




docker run -it -d --name prometheus -p 9090:9090  -v /home/prometheus/prometheus.yml:/etc/prometheus/prometheus.yml -v /home/prometheus:/prometheus --link cadvisor:cadvisor dockertest2.io:12000/prometheus:latest --config.file=/etc/prometheus/prometheus.yml --web.listen-address="0.0.0.0:9090" --web.enable-lifecycle




---------- prometheus reload ----------------

root@workstation:/home/edu3/prometheus-2.3.2.linux-amd64# curl -X POST http://localhost:9090/-/reload

level=info ts=2018-07-13T05:25:49.600841981Z caller=main.go:603 msg="Loading configuration file" filename=prometheus.yml

level=info ts=2018-07-13T05:25:49.601703886Z caller=main.go:629 msg="Completed loading of configuration file" filename=prometheus.yml

--------------------------------------------------------------

prometheus.yml

# my global config
global:
  scrape_interval:     15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

  # Attach these labels to any time series or alerts when communicating with
  # external systems (federation, remote storage, Alertmanager).
  external_labels:
      monitor: 'codelab-monitor'

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  # - "first.rules"
  # - "second.rules"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'

    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.

    static_configs:
      - targets: ['localhost:9090']

  - job_name: 'docker'
         # metrics_path defaults to '/metrics'
         # scheme defaults to 'http'.

    static_configs:
      - targets: ['localhost:9323']
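After editing prometheus.yml and triggering the reload shown above, you can confirm that the scrape targets were picked up through the HTTP API (a quick check, not part of the original notes):

curl -s http://localhost:9090/api/v1/targets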


[For reference]

docker run -d -p 42047:9090 --name=prometheus -v /home/test/prometheus.yml:/etc/prometheus/prometheus.yml --link cadvisor:cadvisor prom/prometheus -config.file=/etc/prometheus/prometheus.yml -log.level=debug -storage.local.path=/prometheus -storage.local.memory-chunks=10000




[root@hadoopm KUBE]# more deployment-definition2.yml

apiVersion: apps/v1beta1

kind: Deployment

metadata:

  name: myapp-deployment

  labels:

    app: myapp

    type: front-end

spec:

  template:

    metadata:

      name: myapp-pod

      labels:

        app: myapp


    spec:

      containers:

        - name: nginx-container

          image: dockertest2.io:12000/nginx:latest


  replicas: 3

  selector:

    matchLabels:

      app: myapp




Algorithm: Random

SessionAffinity: Yes


With Minikube, get the node IP using minikube ip

and connect to <that IP>:30008.


If the connection does not work, try removing type: front-end from service-definition.yml. The problem occurs when the Service selector does not match the Pod labels, so no matching Pod can be found.
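The note above refers to a service-definition.yml that is not reproduced here. As a rough sketch of what such a file presumably looks like (the Service name, ports, and selector below are assumptions, apart from the NodePort 30008 mentioned above):

apiVersion: v1
kind: Service
metadata:
  name: myapp-service
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30008
  selector:
    app: myapp
    type: front-end   # remove this line if the Pods do not carry the label, as described above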








When you run kubectl create or kubectl apply -f <filename> for a Deployment, an error message like the one below is printed (note: apiVersion: apps/v1):



[root@hadoopm KUBE]# kubectl apply -f deployment-definition.yml

Error from server (BadRequest): error when creating "deployment-definition.yml": Deployment in version "v1" cannot be handled as a Deployment: no kind "Deployment" is registered for version "apps/v1"



In this case, first check the k8s version:

[root@hadoopm KUBE]# kubectl version

Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.0", GitCommit:"d3ada0119e776222f11ec7945e6d
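You can also list which API groups and versions the cluster actually supports (not in the original notes):

kubectl api-versions | grep apps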


I'm not sure this is the right way to do it, but anyway:

If the K8s version is below 1.9 (i.e. 1.8 or 1.7), change the manifest to apiVersion: apps/v1beta1. On 1.9 or later, apps/v1 does not produce this error.

Then it works.


-----------------deployment-definition.yml ------------------------------

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deployment
  labels:
    app: myapp
    type: front-end
spec:
  template:
    metadata:
      name: myapp-pod
      labels:
        app: myapp
        type: front-end
    spec:
      containers:
      - name: nginx-container
        image: dockertest2.io:12000/nginx:latest
  replicas: 3
  selector:
    matchLabels:
      type: front-end

[root@hadoopm KUBE]# kubectl get deployments

NAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE

myapp-deployment   3         3         3            3           6m

nginx              1         1         1            1           5d


[root@hadoopm KUBE]# kubectl get replicaset

NAME                          DESIRED   CURRENT   READY     AGE

myapp-deployment-3571195553   3         3         3         6m

nginx-2496978322              1         1         1         5d


[root@hadoopm KUBE]# kubectl get pods

NAME                                READY     STATUS    RESTARTS   AGE

myapp-deployment-3571195553-27hgj   1/1       Running   0          7m

myapp-deployment-3571195553-7hzvr   1/1       Running   0          7m

myapp-deployment-3571195553-gw4xn   1/1       Running   0          7m


