introduction

This commit is contained in:
huanqing.shao
2020-07-16 22:54:57 +08:00
parent 12bc841add
commit ba31fc7dda
8 changed files with 2236 additions and 1 deletion


@ -411,6 +411,13 @@ let sidebar = {
]
},
'k8s-advanced/gc',
{
title: '自动伸缩',
collapsable: true,
children: [
'k8s-advanced/hpa/hpa',
]
},
{
title: '安全',
collapsable: true,

Binary file not shown (image, 66 KiB)


@ -0,0 +1,37 @@
---
# vssueId: 66
layout: LearningLayout
description: Kubernetes_Horizontal_Pod_Autoscaler
meta:
  - name: keywords
    content: Kubernetes tutorial,Kubernetes HPA,Kubernetes autoscaling,Horizontal Pod Autoscaler
---
# Autoscaling
<AdSenseTitle/>
This article is translated from [Horizontal Pod Autoscaler](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/) on the Kubernetes website.
The Horizontal Pod Autoscaler automatically adjusts the number of Pods in a controller (Replication Controller, Deployment, ReplicaSet, or StatefulSet) based on observed CPU utilization (or on certain application-provided [custom metrics](https://git.k8s.io/community/contributors/design-proposals/instrumentation/custom-metrics-api.md)). The Horizontal Pod Autoscaler cannot be applied to objects that do not scale, such as DaemonSets.
The Horizontal Pod Autoscaler is implemented as a Kubernetes API object together with a [controller](/learning/k8s-bg/architecture/controller.html). The API object defines the controller's behavior; the controller periodically adjusts the number of Pod replicas (the `replicas` field) in the Deployment (or Replication Controller / ReplicaSet / StatefulSet) so that it matches the average CPU utilization the user specified in the API object.
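As a sketch of such an API object (the target Deployment name and the numbers below are illustrative, not taken from this commit), an `autoscaling/v1` HorizontalPodAutoscaler might look like:

```yaml
# Illustrative only: scales the hypothetical Deployment "example-web"
# between 2 and 10 replicas, targeting 60% average CPU utilization.
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: example-web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: example-web
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 60
```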
## How the Horizontal Pod Autoscaler works
<p>
<img src="./hpa.assets/horizontal-pod-autoscaler.png" style="width: 450px;" />
</p>
The Horizontal Pod Autoscaler is implemented as a control loop whose period is set by the controller manager's `--horizontal-pod-autoscaler-sync-period` flag (default 15 seconds).
During each period, the controller manager queries the metrics specified in every HorizontalPodAutoscaler object. The query goes either through the resource metrics API (metrics-server, for Pod resource metrics such as CPU/memory) or through the custom metrics API (for all other metrics):
* If the HorizontalPodAutoscaler specifies a Pod resource metric (e.g. CPU), the controller fetches the metrics for the target Pods from the resource metrics API (usually served by metrics-server). If the HorizontalPodAutoscaler specifies a target utilization, the controller divides the metric value by the resource request defined on the Pod's containers, yielding a utilization percentage; if it specifies a raw value, the result from the resource metrics API is used directly. The controller then averages the utilization (or raw value) across all target Pods and computes a ratio used to adjust the desired number of replicas.
> Note that if some containers in a Pod do not set a CPU [resource request](/learning/k8s-intermediate/config/computing-resource.html), the controller cannot compute that Pod's CPU utilization, and therefore cannot act on a CPU utilization target defined in the HorizontalPodAutoscaler. See the [Algorithm](#算法) section for more on this.
* For Pod custom metrics, the controller works in much the same way, except that custom metrics only support raw values, not utilization values.
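The ratio mentioned above drives the replica count. A minimal sketch of the scaling rule (the function name and the sample numbers are mine, not from this document):

```python
import math

def desired_replicas(current_replicas: int,
                     current_metric: float,
                     target_metric: float) -> int:
    """Sketch of the HPA scaling rule:
    desiredReplicas = ceil(currentReplicas * currentMetric / targetMetric)."""
    ratio = current_metric / target_metric
    return math.ceil(current_replicas * ratio)

# e.g. 3 Pods averaging 90% CPU against a 60% target -> scale to 5
print(desired_replicas(3, 90, 60))
```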
## Algorithm


@ -11,3 +11,13 @@ meta:
# Kubernetes Authentication LDAP
<AdSenseTitle/>
This article covers two topics; if you already have LDAP in use, skip directly to the second part.
* Install OpenLDAP
> For demonstration purposes with this document only; for production deployments, refer to the official OpenLDAP website
* Configure Kubernetes/Kuboard to log in with OpenLDAP
## Install OpenLDAP
## Configure Kubernetes/Kuboard to log in with OpenLDAP


@ -48,4 +48,3 @@ https://github.com/NVIDIA/k8s-device-plugin#enabling-gpu-support-in-kubernetes @
* When the user has no edit permission, show only "Preview YAML" instead of "Preview/Edit YAML"
* File browser: show hidden files


@ -0,0 +1,87 @@
{
  "kind": "NodeMetricsList",
  "apiVersion": "metrics.k8s.io/v1beta1",
  "metadata": {
    "selfLink": "/apis/metrics.k8s.io/v1beta1/nodes"
  },
  "items": [
    {
      "metadata": {
        "name": "qingke7",
        "selfLink": "/apis/metrics.k8s.io/v1beta1/nodes/qingke7",
        "creationTimestamp": "2020-06-30T07:54:33Z"
      },
      "timestamp": "2020-06-30T07:54:33Z",
      "window": "1m0s",
      "usage": {
        "cpu": "300m",
        "memory": "4869932Ki"
      }
    },
    {
      "metadata": {
        "name": "k8s",
        "selfLink": "/apis/metrics.k8s.io/v1beta1/nodes/k8s",
        "creationTimestamp": "2020-06-30T07:54:33Z"
      },
      "timestamp": "2020-06-30T07:54:33Z",
      "window": "1m0s",
      "usage": {
        "cpu": "283m",
        "memory": "3250712Ki"
      }
    },
    {
      "metadata": {
        "name": "iz2ze0ephck4d1aw6rxk8gz",
        "selfLink": "/apis/metrics.k8s.io/v1beta1/nodes/iz2ze0ephck4d1aw6rxk8gz",
        "creationTimestamp": "2020-06-30T07:54:33Z"
      },
      "timestamp": "2020-06-30T07:54:33Z",
      "window": "1m0s",
      "usage": {
        "cpu": "351m",
        "memory": "4602996Ki"
      }
    },
    {
      "metadata": {
        "name": "qingke0",
        "selfLink": "/apis/metrics.k8s.io/v1beta1/nodes/qingke0",
        "creationTimestamp": "2020-06-30T07:54:33Z"
      },
      "timestamp": "2020-06-30T07:54:33Z",
      "window": "1m0s",
      "usage": {
        "cpu": "1219m",
        "memory": "1240616Ki"
      }
    },
    {
      "metadata": {
        "name": "qingke1",
        "selfLink": "/apis/metrics.k8s.io/v1beta1/nodes/qingke1",
        "creationTimestamp": "2020-06-30T07:54:33Z"
      },
      "timestamp": "2020-06-30T07:54:33Z",
      "window": "1m0s",
      "usage": {
        "cpu": "181m",
        "memory": "1004804Ki"
      }
    },
    {
      "metadata": {
        "name": "qingke6",
        "selfLink": "/apis/metrics.k8s.io/v1beta1/nodes/qingke6",
        "creationTimestamp": "2020-06-30T07:54:33Z"
      },
      "timestamp": "2020-06-30T07:54:33Z",
      "window": "1m0s",
      "usage": {
        "cpu": "152m",
        "memory": "954652Ki"
      }
    }
  ]
}
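A response like the one above can be post-processed by parsing the `usage` quantities. A hedged sketch (function names are mine; only the quantity forms visible in this response, plain cores, `m`, and `Ki`, are handled):

```python
def cpu_millicores(quantity: str) -> int:
    """Parse a CPU quantity such as "300m" into millicores.
    A plain number is whole cores; other Kubernetes quantity
    suffixes are out of scope for this sketch."""
    if quantity.endswith("m"):
        return int(quantity[:-1])
    return int(quantity) * 1000

def memory_kib(quantity: str) -> int:
    """Parse a memory quantity such as "4869932Ki" into KiB."""
    if not quantity.endswith("Ki"):
        raise ValueError(f"unexpected suffix in {quantity!r}")
    return int(quantity[:-2])

# Total CPU across the six nodes in the sample response above:
usages = ["300m", "283m", "351m", "1219m", "181m", "152m"]
print(sum(cpu_millicores(q) for q in usages))  # 2486 millicores
```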

File diff suppressed because it is too large


@ -0,0 +1,76 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: '1'
    kubectl.kubernetes.io/last-applied-configuration: >
      {"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"name":"prometheus-adapter","namespace":"monitoring"},"spec":{"replicas":1,"selector":{"matchLabels":{"name":"prometheus-adapter"}},"strategy":{"rollingUpdate":{"maxSurge":1,"maxUnavailable":0}},"template":{"metadata":{"labels":{"name":"prometheus-adapter"}},"spec":{"containers":[{"args":["--cert-dir=/var/run/serving-cert","--config=/etc/adapter/config.yaml","--logtostderr=true","--metrics-relist-interval=1m","--prometheus-url=http://prometheus-k8s.monitoring.svc:9090/","--secure-port=6443"],"image":"quay.io/coreos/k8s-prometheus-adapter-amd64:v0.5.0","name":"prometheus-adapter","ports":[{"containerPort":6443}],"volumeMounts":[{"mountPath":"/tmp","name":"tmpfs","readOnly":false},{"mountPath":"/var/run/serving-cert","name":"volume-serving-cert","readOnly":false},{"mountPath":"/etc/adapter","name":"config","readOnly":false}]}],"nodeSelector":{"kubernetes.io/os":"linux"},"serviceAccountName":"prometheus-adapter","volumes":[{"emptyDir":{},"name":"tmpfs"},{"emptyDir":{},"name":"volume-serving-cert"},{"configMap":{"name":"adapter-config"},"name":"config"}]}}}}
  creationTimestamp: '2020-06-23T07:20:13Z'
  generation: 1
  labels: {}
  name: prometheus-adapter
  namespace: monitoring
  resourceVersion: '49918'
  selfLink: /apis/apps/v1/namespaces/monitoring/deployments/prometheus-adapter
  uid: b801b8f2-3df5-40e0-829d-377c0db12350
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      name: prometheus-adapter
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        name: prometheus-adapter
    spec:
      containers:
        - args:
            - '--cert-dir=/var/run/serving-cert'
            - '--config=/etc/adapter/config.yaml'
            - '--logtostderr=true'
            - '--metrics-relist-interval=1m'
            - '--prometheus-url=http://prometheus-k8s.monitoring.svc:9090/'
            - '--secure-port=6443'
          image: 'quay.io/coreos/k8s-prometheus-adapter-amd64:v0.5.0'
          imagePullPolicy: IfNotPresent
          name: prometheus-adapter
          ports:
            - containerPort: 6443
              protocol: TCP
          resources: {}
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts:
            - mountPath: /tmp
              name: tmpfs
            - mountPath: /var/run/serving-cert
              name: volume-serving-cert
            - mountPath: /etc/adapter
              name: config
      dnsPolicy: ClusterFirst
      nodeSelector:
        kubernetes.io/os: linux
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: prometheus-adapter
      serviceAccountName: prometheus-adapter
      terminationGracePeriodSeconds: 30
      volumes:
        - emptyDir: {}
          name: tmpfs
        - emptyDir: {}
          name: volume-serving-cert
        - configMap:
            defaultMode: 420
            name: adapter-config
          name: config