install-single-master-K8s

Introduction

From a blank system to a single-master K8s cluster in two commands.

Supported OS: CentOS 7+

Supported K8s versions: v1.18.0+

Architecture

A single-master cluster deployed quickly with kubeadm, using Calico as the CNI plugin. For a production HA deployment, see: Prod-K8S-HA-Installer

Installation

1. System initialization

  • The script reboots the server automatically when it finishes (run on every server)
# Three arguments: $host_name $host_ip $name_server
# hostname
host_name=k8s-w01
# change this to the local machine's IP
host_ip=10.0.0.234
# DNS server IP
name_server=192.168.1.60
bash pre_reboot.sh $host_name $host_ip $name_server



# Three arguments: $docker_version $kubernetes_version $docker_server
docker_version=20.10.7
kubernetes_version=1.21.3
docker_server=hub.gitee.cc
bash post_reboot.sh $docker_version $kubernetes_version $docker_server

Note: if a Kubernetes cluster already exists, the node can join it after the two scripts above have run. The commands are as follows:

Run this on the master node to obtain the kubeadm join command and its arguments:

kubeadm token create --print-join-command

kubeadm join apiserver.demo:6443 --token mpfjma.4vjjg8flqihor4vt     --discovery-token-ca-cert-hash sha256:6f7a8e40a810323672de5eee6f4d19aa2dbdb38411845a1bf5dd63485c43d303

Run the following on the worker node:

kubeadm join apiserver.demo:6443 --token mpfjma.4vjjg8flqihor4vt     --discovery-token-ca-cert-hash sha256:6f7a8e40a810323672de5eee6f4d19aa2dbdb38411845a1bf5dd63485c43d303

2. Initialize K8s and save the artifacts to Harbor

  • Download the images (run on the master)
## list the images the master needs
[root@k8smaster1 install-single-master-K8s]# kubeadm config images list
k8s.gcr.io/kube-apiserver:v1.21.3
k8s.gcr.io/kube-controller-manager:v1.21.3
k8s.gcr.io/kube-scheduler:v1.21.3
k8s.gcr.io/kube-proxy:v1.21.3
k8s.gcr.io/pause:3.4.1
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns/coredns:v1.8.0
# prepare the images: kubeadm expects the nested coredns/coredns path (see the list above), while the Aliyun mirror stores it flat, so re-tag it
docker pull registry.cn-hangzhou.aliyuncs.com/k8sos/coredns:v1.8.0
docker tag registry.cn-hangzhou.aliyuncs.com/k8sos/coredns:v1.8.0 registry.cn-hangzhou.aliyuncs.com/k8sos/coredns/coredns:v1.8.0


# when initializing manually, use the Aliyun image mirror
kubeadm init \
--control-plane-endpoint "kube-gitee-go.gitee.cc" \
--image-repository=registry.cn-hangzhou.aliyuncs.com/k8sos \
--kubernetes-version=v1.21.3 \
--service-cidr=10.96.0.0/12 \
--service-dns-domain "cluster.local" \
--pod-network-cidr=10.244.0.0/16 \
--apiserver-advertise-address "0.0.0.0" \
--apiserver-bind-port 6443 \
--upload-certs

Push the images to the internal Harbor registry

docker login http://hub.gitee.cc/ -uxx -pxxxx

# batch-push with the helper script
sh push_all_images.sh
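The contents of push_all_images.sh are not shown here; a minimal sketch of what such a batch re-tag-and-push loop might look like (the real script may differ; the /k8sos/ filter and the google_containers target path are assumptions based on the surrounding commands):

```shell
#!/usr/bin/env bash
# Hypothetical sketch of a batch push; the real push_all_images.sh may differ.
set -euo pipefail

registry=hub.gitee.cc/google_containers   # target path assumed from the init command below

# iterate over the locally pulled Aliyun mirror images
for image in $(docker images --format '{{.Repository}}:{{.Tag}}' | grep '/k8sos/'); do
  target="$registry/$(basename "$image")"  # keep only name:tag
  docker tag "$image" "$target"
  docker push "$target"
done
```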

Pull the images from the internal registry and initialize the cluster with the script

# Three arguments:
# $control_plane_endpoint 
# $private_registry_host 
# $kubernetes_version

control_plane_endpoint="kube-gitee-go.gitee.cc:6443"
private_registry_host=hub.gitee.cc
kubernetes_version=v1.21.3

bash ./init_cluster.sh ${control_plane_endpoint} ${private_registry_host} ${kubernetes_version}


# For reference, the script runs the following command:
kubeadm init \
--control-plane-endpoint "kube-gitee-go.gitee.cc" \
--image-repository=hub.gitee.cc/google_containers \
--kubernetes-version=v1.21.3 \
--service-cidr=10.96.0.0/12 \
--service-dns-domain "cluster.local" \
--pod-network-cidr=10.244.0.0/16 \
--apiserver-advertise-address "0.0.0.0" \
--apiserver-bind-port 6443 \
--upload-certs

After initializing with init_cluster.sh, the log, the Calico manifest, and the node join scripts are written to /root/kube_src:

[root@k8smaster1 kube_src]# ll /root/kube_src/
total 44
-rw-r--r-- 1 root root 24920 Mar 27 09:20 calico-typha.yaml
-rw-r--r-- 1 root root  5589 Mar 27 09:19 kubeadm-init.log
-rw-r--r-- 1 root root   351 Mar 27 09:20 kubeadm-join-master.sh
-rw-r--r-- 1 root root   185 Mar 27 09:20 kubeadm-join-worker.sh
[root@k8smaster1 kube_src]# cat kubeadm-join-worker.sh
kubeadm join kube-apiserver-go.gitee.cc:6443 --token ivtmnz.5r5kyk7nin4put80     --discovery-token-ca-cert-hash sha256:e3501b4a21e8a59bed0f45001d2c106be8781673d1586b8401edb368e0d98833

Run the command above on the other worker nodes to join them to the cluster.

Check the cluster status

[root@k8smaster1 kube_src]# kubectl get node
NAME         STATUS   ROLES    AGE     VERSION
k8smaster1   Ready    master   3m44s   v1.19.9
k8snode1     Ready    <none>   54s     v1.19.9
k8snode2     Ready    <none>   53s     v1.19.9

Enable command completion

yum install -y bash-completion
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc 

# add a "k" alias for kubectl
vim ~/.bashrc
alias k='kubectl'

# tab completion only works for the full kubectl command, not for the alias k; to fix:
source <(kubectl completion bash | sed 's/kubectl/k/g') # append this to .bashrc as well
  • If initialization was interrupted, the following commands reset the node so it can be retried
kubeadm reset
rm -rf  $HOME/.kube

3. Post-deployment checks

Check the master initialization result

# run only on the master node

# run the following and wait 3-10 minutes until all pods are Running
watch kubectl get pod -n kube-system -o wide

# check the master node initialization result
kubectl get nodes -o wide

Obtain the join command and its arguments (run on the master node)

# run only on the master node
kubeadm token create --print-join-command

Token validity
The token is valid for 2 hours; within that window you can use it to initialize any number of worker nodes.
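If that validity window does not suit you, kubeadm token create accepts a --ttl flag (0 means the token never expires); for example:

```shell
# create a join token valid for 24 hours
kubeadm token create --ttl 24h0m0s --print-join-command
```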

4. Deploy Kubernetes extra addons

4.1 Deploy the metrics-server component

Run kubectl top on any master node

#  kubectl top node
error: Metrics API not available
The top command cannot retrieve metrics, which means the cluster has neither Heapster nor Metrics Server installed to serve the Metrics API.

Install the metrics-server component

git clone https://gitee.com/cainiao555/metrics-server.git
kubectl create -f metrics-server/
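Once the metrics-server pods are Running, top should work (it can take a minute or two before the first metrics are scraped):

```shell
# node-level CPU/memory usage
kubectl top node
# pod-level usage in kube-system
kubectl top pod -n kube-system
```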

4.2 Deploy an Ingress Controller

Option 1

# run only on the master node
kubectl apply -f https://kuboard.cn/install-script/v1.17.x/nginx-ingress.yaml
or
kubectl apply -f https://kuboard.cn/install-script/v1.19.x/nginx-ingress.yaml

Uninstall

Uninstall only if you want to switch to a different Ingress Controller

# run only on the master node
kubectl delete -f https://kuboard.cn/install-script/v1.19.x/nginx-ingress.yaml

Customizing ingress

# for production use, see https://github.com/nginxinc/kubernetes-ingress/blob/v1.5.5/docs/installation.md and customize it for your environment

Check the ingress status

[root@k8smaster1 kube_src]# kubectl get pods --all-namespaces |grep nginx
nginx-ingress   nginx-ingress-rnsg2                        1/1     Running   0          53m
nginx-ingress   nginx-ingress-rxwvc                        1/1     Running   0          53m
[root@k8smaster1 kube_src]# kubectl get pod -n nginx-ingress
NAME                  READY   STATUS    RESTARTS   AGE
nginx-ingress-rnsg2   1/1     Running   0          54m
nginx-ingress-rxwvc   1/1     Running   0          54m

Option 2

Ingress is an abstract Kubernetes resource that routes traffic to internal Services by domain name, borrowing the virtual-host concept from web servers. This avoids the limitations of the NodePort and LoadBalancer Service types (such as the cap on the number of ports). The Ingress feature itself is implemented by an Ingress Controller, which watches Ingress and Service objects through the Kubernetes API and reconfigures the web server whenever those resources change. Several Ingress Controller implementations are available:

  • Ingress NGINX: maintained by the Kubernetes project, and the controller used in this installation.
  • F5 BIG-IP Controller: developed by F5; lets administrators manage F5 BIG-IP devices from Kubernetes and OpenShift via CLI or API.
  • Ingress Kong: the Kubernetes Ingress Controller maintained by the well-known open-source API gateway project.
  • Træfik: an open-source HTTP reverse proxy and load balancer.
  • Voyager: an Ingress Controller built on HAProxy.

Deploy ingress-nginx

wget https://cdn.jsdelivr.net/gh/kubernetes/ingress-nginx@controller-v0.44.0/deploy/static/provider/baremetal/deploy.yaml -O ingress-nginx.yaml

sed -i -e 's#k8s.gcr.io/ingress-nginx#registry.cn-hangzhou.aliyuncs.com/kainstall#g' \
       -e 's#@sha256:.*$##g' ingress-nginx.yaml

kubectl apply -f ingress-nginx.yaml

kubectl wait --namespace ingress-nginx --for=condition=ready pods --selector=app.kubernetes.io/component=controller --timeout=60s
pod/ingress-nginx-controller-67848f7b-2gxzb condition met

The upstream manifest enables the admission webhook by default, but our apiserver uses the host's DNS rather than CoreDNS, so it cannot reach the ingress-nginx controller's Service address. We therefore remove the admission webhook so that creating Ingress resources skips controller-side validation.

In short, when a request reaches the k8s apiserver, the apiserver consults the MutatingWebhookConfiguration and ValidatingWebhookConfiguration objects: it first calls the mutating webhooks to modify the submitted manifest, then calls the validating webhooks to verify that the modified manifest is valid.

kubectl delete -A ValidatingWebhookConfiguration ingress-nginx-admission
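To see which admission webhook configurations the manifest installed (or to confirm the removal afterwards):

```shell
# list cluster-wide admission webhook configurations
kubectl get validatingwebhookconfigurations
kubectl get mutatingwebhookconfigurations
```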

5. 配置 Dashboard

Dashboard is the official web-based UI for Kubernetes. It makes cluster resources easier to manage and visualizes them, giving a more intuitive view of the overall cluster state.

Deploy the dashboard

wget https://cdn.jsdelivr.net/gh/kubernetes/dashboard@v2.2.0/aio/deploy/recommended.yaml -O dashboard.yaml
kubectl apply -f dashboard.yaml

Deploy the ingress

cat << EOF | kubectl apply -f -
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/secure-backends: 'true'
    nginx.ingress.kubernetes.io/backend-protocol: 'HTTPS'
    nginx.ingress.kubernetes.io/ssl-passthrough: 'true'
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard 
spec:
  tls:
  - hosts:
    - kubernetes-dashboard.cluster.local
    secretName: kubernetes-dashboard-certs
  rules:
  - host: kubernetes-dashboard.cluster.local
    http:
      paths:
      - path: /
        backend:
          serviceName: kubernetes-dashboard
          servicePort: 443
EOF

Create a ServiceAccount and use its token to log in to the dashboard

kubectl create serviceaccount kubernetes-dashboard-admin-sa -n kubernetes-dashboard
kubectl create clusterrolebinding kubernetes-dashboard-admin-sa --clusterrole=cluster-admin --serviceaccount=kubernetes-dashboard:kubernetes-dashboard-admin-sa -n kubernetes-dashboard

kubectl describe secrets $(kubectl describe sa kubernetes-dashboard-admin-sa -n kubernetes-dashboard | awk '/Tokens/ {print $2}') -n kubernetes-dashboard | awk '/token:/{print $2}'

eyJhbGciOiJSUzI1NiIsImtpZCI6IkZqLVpEbzQxNzR3ZGJ5MUlpalE5V1pVM0phRVg0UlhCZ3pwdnY1Y0lEZGcifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC1hZG1pbi1zYS10b2tlbi1sOXhiaCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC1hZG1pbi1zYSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImMxYWZiYmEyLTQyMzktNGM3Yy05NjBlLThiZTkwNDY0MzY5MCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDprdWJlcm5ldGVzLWRhc2hib2FyZC1hZG1pbi1zYSJ9.fbZodynYBF8QQOvwj_lzU1wxKiD0HE1CWiyAvp79y9Uu2uQerRMPEuT6KFwFLZ9Pj3be_HTbrDN88im3s-Q2ARpolSACRexMM_nJ2u4pc3MXNEf6e7AJUHB4JnbTsIn5RCSwA8kjYFlWKxX8s1Q8pSKUy_21aMYxuBaqPhzQiuu9RmPBmHkNSYWVncgiPqZWaaadI_l53Jj0KjTMLahG7fqVt2ioTp1ZsIZNaQdNdh8Gzn-SuFCIrNN5oR3bdWNyxbv0OGxrKBHqlVs_8V46ygBc1lyGfpKcA59Wq8-FtIc3zzx531Ix6fDvouJuqHsMxu9VCOFG5mjyYzdsQgemIA


Get the dashboard ingress URL

echo https://$(kubectl get node -o jsonpath='{range .items[*]}{ .status.addresses[?(@.type=="InternalIP")].address} {.status.conditions[?(@.status == "True")].status}{"\n"}{end}' | awk '{if($2=="True")a=$1}END{print a}'):$(kubectl get svc --all-namespaces -o go-template="{{range .items}}{{if eq .metadata.name \"ingress-nginx-controller\" }}{{range.spec.ports}}{{if eq .port "443"}}{{.nodePort}}{{end}}{{end}}{{end}}{{end}}")


https://192.168.77.145:37454

After adding the hosts entry, log in with the token:

192.168.77.145 kubernetes-dashboard.cluster.local

https://kubernetes-dashboard.cluster.local:37454

6. Deploy the whoami app

Deploy the application

cat <<EOF | kubectl apply -f -
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ingress-demo-app
  labels:
    app: ingress-demo-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: ingress-demo-app
  template:
    metadata:
      labels:
        app: ingress-demo-app
    spec:
      containers:
      - name: whoami
        image: traefik/whoami:v1.6.1
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: ingress-demo-app
spec:
  type: ClusterIP
  selector:
    app: ingress-demo-app
  ports:
    - name: http
      port: 80
      targetPort: 80
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-demo-app
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: app.demo.com
    http:
      paths:
      - path: /
        backend:
          serviceName: ingress-demo-app
          servicePort: 80
EOF

Get the application's pods

# kubectl get pods -l app=ingress-demo-app
NAME                                READY   STATUS    RESTARTS   AGE
ingress-demo-app-694bf5d965-69v42   1/1     Running   0          68s
ingress-demo-app-694bf5d965-7qt5p   1/1     Running   0          68s

Access it through the ingress

echo http://$(kubectl get node -o jsonpath='{range .items[*]}{ .status.addresses[?(@.type=="InternalIP")].address} {.status.conditions[?(@.status == "True")].status}{"\n"}{end}' | awk '{if($2=="True")a=$1}END{print a}'):$(kubectl get svc --all-namespaces -o go-template="{{range .items}}{{if eq .metadata.name \"ingress-nginx-controller\" }}{{range.spec.ports}}{{if eq .port "80"}}{{.nodePort}}{{end}}{{end}}{{end}}{{end}}")
http://192.168.77.145:40361


kubectl get pods -n ingress-nginx -l app.kubernetes.io/component=controller -o wide
NAME                                      READY   STATUS    RESTARTS   AGE   IP           NODE               NOMINATED NODE   READINESS GATES
ingress-nginx-controller-67848f7b-sx7mf   1/1     Running   0          97m   10.244.5.6   k8s-worker-node3   <none>           <none>


# curl -H 'Host:app.demo.com' http://192.168.77.145:40361
Hostname: ingress-demo-app-694bf5d965-7qt5p
IP: 127.0.0.1
IP: 10.244.5.8
RemoteAddr: 10.244.5.6:38674
GET / HTTP/1.1
Host: app.demo.com
User-Agent: curl/7.64.0
Accept: */*
X-Forwarded-For: 10.244.5.1
X-Forwarded-Host: app.demo.com
X-Forwarded-Port: 80
X-Forwarded-Proto: http
X-Real-Ip: 10.244.5.1
X-Request-Id: 90f14481aacd9ab5a1ef20d6113ddbe0
X-Scheme: http

The information returned by the whoami app shows that we reached it through the ingress.

7. Connecting kubectl to the cluster remotely

Upload the kubectl client binary to /usr/local/bin
# configure the kubeconfig on host 46 (192.168.1.46)
mkdir -p /root/.kube/

# copy admin.conf over from the master
scp /etc/kubernetes/admin.conf root@192.168.1.46:/root/.kube/

# on host 46
cd /root/.kube/ && mv admin.conf config
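A quick way to confirm the copied kubeconfig works from host 46 (assuming the kubectl binary is already on the PATH):

```shell
# show the cluster/context read from /root/.kube/config
kubectl config view --minify
# listing nodes confirms connectivity and credentials
kubectl get nodes
```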

8. StorageClass

Deploy the NFS server

yum -y install nfs-utils rpcbind

[root@Gitee-Go app]# cat /etc/exports
/data/nfs 192.168.1.0/24(rw,sync,insecure,no_subtree_check,no_root_squash)

service rpcbind restart
service nfs restart
showmount -e localhost

Install the NFS client on the Node machines

yum -y install nfs-utils
systemctl restart nfs

# test the connection to the NFS server
showmount -e 192.168.1.46
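showmount only checks the export list; an end-to-end check is to mount the export temporarily (the mount point /mnt/nfs-test is arbitrary):

```shell
mkdir -p /mnt/nfs-test
mount -t nfs 192.168.1.46:/data/nfs /mnt/nfs-test
touch /mnt/nfs-test/write-test && rm /mnt/nfs-test/write-test   # verify write access
umount /mnt/nfs-test
```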

Download the NFS provisioner

git clone https://github.com.cnpmjs.org/kubernetes-retired/external-storage.git
cd external-storage/nfs-client/deploy
vim deployment.yaml

Edit the NFS server address and export path in the file, then save. deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: managed-nfs-storage
            - name: NFS_SERVER
              value: 192.168.1.46
            - name: NFS_PATH
              value: /data/nfs
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.1.46
            path: /data/nfs

The provisioner in class.yaml must match PROVISIONER_NAME in deployment.yaml

class.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: managed-nfs-storage # or choose another name; it must match the deployment's PROVISIONER_NAME env value
parameters:
  archiveOnDelete: "false"

Next, run the following commands to create the NFS provisioner's resources.

kubectl apply -f external-storage/nfs-client/deploy/
kubectl get deployment

8.1 Using the StorageClass

test-claim.yaml

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi

test-pod.yaml

kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  containers:
  - name: test-pod
    image: gcr.io/google_containers/busybox:1.24
    command:
      - "/bin/sh"
    args:
      - "-c"
      - "touch /mnt/SUCCESS && exit 0 || exit 1"
    volumeMounts:
      - name: nfs-pvc
        mountPath: "/mnt"
  restartPolicy: "Never"
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: test-claim
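After applying the two manifests, the PVC should bind and the pod should leave a SUCCESS file on the NFS export; a quick check (file names match the manifests above):

```shell
kubectl apply -f test-claim.yaml -f test-pod.yaml
# STATUS should change to Bound once the provisioner creates the PV
kubectl get pvc test-claim
# on the NFS server: the provisioner creates a per-PVC directory containing SUCCESS
ls /data/nfs/
```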

View the StorageClass information in the cluster

$ k get sc
NAME   PROVISIONER     RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
cds1   csi-cdsplugin   Delete          Immediate           false                  5s

$ k describe storageclass cds1
Name:            cds1
IsDefaultClass:  No
Annotations:     kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{},"name":"cds1"},"parameters":{"paymentTiming":"Postpaid","reservationLength":"","storageType":"hdd"},"provisioner":"csi-cdsplugin","reclaimPolicy":"Delete"}

Provisioner:           csi-cdsplugin
Parameters:            paymentTiming=Postpaid,reservationLength=,storageType=hdd
AllowVolumeExpansion:  <unset>
MountOptions:          <none>
ReclaimPolicy:         Delete
VolumeBindingMode:     Immediate
Events:                <none>

On Kubernetes 1.20.1 and later there is a known issue: the NFS provisioner fails because the metadata.selfLink field was removed.

On 1.20.4, the workaround is to add "--feature-gates=RemoveSelfLink=false" to /etc/kubernetes/manifests/kube-apiserver.yaml
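The flag belongs in the command list of the kube-apiserver static pod manifest; a sketch with the surrounding flags abbreviated:

```yaml
# /etc/kubernetes/manifests/kube-apiserver.yaml (excerpt)
spec:
  containers:
  - command:
    - kube-apiserver
    - --feature-gates=RemoveSelfLink=false   # re-enables selfLink for the NFS provisioner
    # ... existing flags unchanged ...
```

The kubelet watches the manifests directory and restarts the apiserver automatically after the file is saved.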

References: https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner/issues/25 https://stackoverflow.com/questions/65376314/kubernetes-nfs-provider-selflink-was-empty

9. Deploying a highly available Kubernetes cluster

References:

Ansible-based deployment: https://gitee.com/oschina/CI-gitee-Docs/blob/master/install-k8s-ha.md

9.1 Binary deployment

https://lework.github.io/2021/04/10/debian-kubeadm-install/#%E9%83%A8%E7%BD%B2kubernetes-extra-addons%E7%BB%84%E4%BB%B6

9.2 kubeadm deployment

https://lework.github.io/2021/04/03/debian-kubeadm-install/#%E9%83%A8%E7%BD%B2-whoami-app

