Basic preparation

Disable the firewall and SELinux on all hosts:

systemctl stop firewalld && setenforce 0 && systemctl disable firewalld

Set the hostname on every host

Name each host according to its IP. Command:
hostnamectl set-hostname hdss7-11.host.com

Configure the corresponding network interface. Command:
vi /etc/sysconfig/network-scripts/ifcfg-ens33

File contents:

TYPE=Ethernet
BOOTPROTO=none
NAME=ens33
DEVICE=ens33
ONBOOT=yes
IPADDR=10.4.7.200
NETMASK=255.255.255.0
GATEWAY=10.4.7.254
DNS1=10.4.7.254

Install wget and the EPEL repository via yum:
yum install wget -y && yum install epel-release -y

Switch to the Aliyun mirror:
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo && yum makecache

Install the necessary tools:
yum install net-tools telnet tree nmap sysstat lrzsz dos2unix vim bind-utils -y

Configure host 10.4.7.11

Install the bind9 DNS server:
yum install bind -y

Edit bind's main configuration file:
vim /etc/named.conf

Change the listen address:
listen-on port 53 { 10.4.7.11; };

Second change:
allow-query  { any; };

Third change, add:

forwarders { 10.4.7.254; }; // forward unresolved queries upstream

// recursion yes means bind performs recursive DNS lookups for clients (the alternative is iterative lookups); recursion is used here

Fourth change:

dnssec-enable no;

Fifth change:

dnssec-validation no; // mainly to save resources

Check that the named configuration is valid:
named-checkconf

Zone configuration file:

vim /etc/named.rfc1912.zones

zone "host.com" IN {
    type master;
    file "host.com.zone";
    allow-update { 10.4.7.11; };
};
zone "od.com" IN {
    type master;
    file "od.com.zone";
    allow-update { 10.4.7.11; };
};

Create the zone data file:

vim /var/named/host.com.zone

host.com.zone contents:

$ORIGIN host.com.
$TTL 600 ; 10 minutes
@   IN SOA dns.host.com. dnsadmin.host.com. (
    2020092801 ; serial
    10800 ; refresh (3 hours)
    900 ; retry (15 min)
    604800 ; expire (1 week)
    86400 ; minimum (1 day)
)
NS dns.host.com.
$TTL 60 ; 1 minute
dns A   10.4.7.11
HDSS7-11    A   10.4.7.11
HDSS7-12    A   10.4.7.12
HDSS7-21    A   10.4.7.21
HDSS7-22    A   10.4.7.22
HDSS7-200    A   10.4.7.200

od.com.zone contents:

$ORIGIN od.com.
$TTL 600 ; 10 minutes
@   IN SOA dns.od.com. dnsadmin.od.com. (
    2020092801 ; serial
    10800 ; refresh (3 hours)
    900 ; retry (15 min)
    604800 ; expire (1 week)
    86400 ; minimum (1 day)
)
NS dns.od.com.
$TTL 60 ; 1 minute
dns A   10.4.7.11

Start the named service (systemctl start named), then verify:

dig -t A hdss7-21.host.com @10.4.7.11 +short

Point 10.4.7.11's own DNS at itself:
vim /etc/sysconfig/network-scripts/ifcfg-ens33

Change DNS1 from 10.4.7.254 to 10.4.7.11, then restart the network:

systemctl restart network

Edit resolv.conf:
vim /etc/resolv.conf

Configure the remaining hosts

Apply the same change on each machine:
vim /etc/sysconfig/network-scripts/ifcfg-ens33

Change DNS1 from 10.4.7.254 to 10.4.7.11, then restart the network:

systemctl restart network

On the external Windows host, set the DNS of the VMnet8 virtual adapter to 10.4.7.11.

host.com is the host domain; od.com is the business domain.

To resolve short names (e.g. ping hdss7-21), edit
vim /etc/resolv.conf

and add the line: search host.com
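After the change, /etc/resolv.conf on these hosts would look roughly like this (a sketch; the exact file may differ):

search host.com
nameserver 10.4.7.11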

For Windows to resolve these domains, configure DNS 10.4.7.11 on the VMnet8 adapter as well.

Prepare the certificate-signing environment (10.4.7.200 / hdss7-200)

Install CFSSL R1.2. The toolset consists of cfssl, cfssl-json, and cfssl-certinfo.

Install commands:

wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -O /usr/bin/cfssl && \
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -O /usr/bin/cfssl-json && \
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -O /usr/bin/cfssl-certinfo && \
chmod +x /usr/bin/cfssl*




Create a certs directory under /opt and add a file ca-csr.json:

{
    "CN": "XBO",
    "hosts": [],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [{
        "C": "CN",
        "ST": "sichuan",
        "L": "sichuan",
        "O": "xbo",
        "OU": "xbo"
    }],
    "ca": {
        "expiry": "175200h"
    }
}

CN: Common Name; browsers use this field to validate a site, usually the domain name.

C: Country.

ST: State or province.

L: Locality (city/region).

O: Organization Name (company).

OU: Organizational Unit Name (department).

Run: cfssl gencert -initca ca-csr.json | cfssl-json -bare ca

This generates the CA certificate, private key, and related files (listed below):

ca.csr

ca-csr.json

ca-key.pem

ca.pem
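As an optional sanity check, cfssl-certinfo can dump the fields of the new CA certificate:

cfssl-certinfo -cert ca.pem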

Deploy the Docker environment

Target machines: HDSS7-200, HDSS7-21, HDSS7-22

Install command:

curl -fsSL https://get.docker.com |bash -s docker --mirror Aliyun




After installation:
mkdir /etc/docker/ && vim /etc/docker/daemon.json

Contents of /etc/docker/daemon.json:

{
    "graph":"/data/docker",
    "storage-driver":"overlay2",
    "insecure-registries":["registry.access.redhat.com","quay.io","harbor.od.com"],
    "registry-mirrors":["https://q2gr04ke.mirror.aliyuncs.com"],
    "bip":"172.7.21.1/24",
    "exec-opts":["native.cgroupdriver=systemd"],
    "live-restore":true
}

* Note: bip depends on the host: on .200 it is 172.7.200.1/24, on .21 it is 172.7.21.1/24, and so on.

After writing the file, reload systemd and restart Docker.
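Concretely (a sketch):

systemctl daemon-reload
systemctl restart docker
docker info    # confirm the graph dir, cgroup driver, and mirrors took effect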

Create a private Docker registry (Harbor)

Deploy on hdss7-200.

Create /opt/src, then download Harbor from GitHub:

curl -L -o harbor-offline-installer-v2.1.0.tgz https://github.com/goharbor/harbor/releases/download/v2.1.0/harbor-offline-installer-v2.1.0.tgz

Extract it to /opt:
tar -xf harbor-offline-installer-v2.1.0.tgz -C /opt

/opt now contains a harbor directory.

Rename harbor to harbor-v2.1.0 and create a symlink:
mv harbor/ harbor-v2.1.0/ && ln -s /opt/harbor-v2.1.0/ /opt/harbor

This makes future upgrades easier.

In /opt/harbor/, copy harbor.yml.tmpl to harbor.yml and edit it:

hostname: harbor.od.com
http:
  port: 180
harbor_admin_password: Harbor12345
data_volume: /data/harbor
log:
  level: info
  rotate_count: 50
  rotate_size: 200M
  location: /data/harbor/logs

* Note: comment out the https-related settings.

Harbor's installer depends on docker-compose:

yum install docker-compose -y

Run:

./install.sh

and wait for it to finish.

Install nginx as a reverse proxy:
yum install nginx -y

Add a server block to nginx.conf:

server {
    listen 80;
    server_name harbor.od.com;
    client_max_body_size 1000m;
    location / {
        proxy_pass http://127.0.0.1:180;
    }
}

The domain does not resolve yet; for name-based access, add a record on the DNS server (10.4.7.11).

Edit:

vim /var/named/od.com.zone

(1) Bump the serial: 2020092801 ;serial ==> 2020092802 ;serial

(2) Add an A record: harbor A 10.4.7.200
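The tail of od.com.zone would then look roughly like this:

$TTL 60 ; 1 minute
dns      A    10.4.7.11
harbor   A    10.4.7.200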

Restart named:
systemctl restart named

Use

dig -t A harbor.od.com +short

to confirm the record resolves.

Harbor can now be reached at harbor.od.com.

Log in to harbor.od.com and create a new project named public.

Pull the nginx image (docker pull nginx:1.7.9) and tag it for the private registry.
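For example, matching the tag pushed below:

docker pull nginx:1.7.9
docker tag nginx:1.7.9 harbor.od.com/public/nginx:latest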

Push the image to the private registry:

docker login harbor.od.com

Enter username admin, password Harbor12345, then:

docker push harbor.od.com/public/nginx:latest

Build the etcd cluster (on 7.12/7.21/7.22)

Create the etcd certificates (CA-signed), configured on 7.200:

cd /opt/certs/ && vim ca-config.json
{
    "signing": {
        "default": {
            "expiry": "175200h"
        },
        "profiles": {
            "server": {
                "expiry": "175200h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "server auth",
                    "client auth"
                ]
            },
            "client": {
                "expiry": "175200h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "client auth"
                ]
            },
            "peer": {
                "expiry": "175200h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "server auth",
                    "client auth"
                ]
            }
        }
    }
}

* Note: with the peer profile, the server verifies the client's certificate and the client verifies the server's.

Create the JSON config for the etcd certificate signing request (CSR):

vim etcd-peer-csr.json
{
    "CN": "k8s-etcd",
    "hosts": [
        "10.4.7.11",
        "10.4.7.12",
        "10.4.7.21",
        "10.4.7.22"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C":"CN",
            "ST":"sichuan",
            "L":"sichuan",
            "O":"xbo",
            "OU":"xbo"
        }
    ]
}

Generate the certificate and key (the hosts are already listed in the CSR):

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=peer etcd-peer-csr.json | cfssl-json -bare etcd-peer

The directory now contains:

-rw-r--r--. 1 root root  800 Sep 29 09:45 ca-config.json
-rw-r--r--. 1 root root  883 Sep 28 11:13 ca.csr
-rw-r--r--. 1 root root  276 Sep 28 11:11 ca-csr.json
-rw-------. 1 root root 1679 Sep 28 11:13 ca-key.pem
-rw-r--r--. 1 root root 1123 Sep 28 11:13 ca.pem
-rw-r--r--. 1 root root 1066 Sep 29 09:52 etcd-peer.csr
-rw-r--r--. 1 root root  359 Sep 29 09:49 etcd-peer-csr.json
-rw-------. 1 root root 1675 Sep 29 09:52 etcd-peer-key.pem
-rw-r--r--. 1 root root 1306 Sep 29 09:52 etcd-peer.pem

Set up etcd on 7.12

Create the etcd user:

useradd -s /sbin/nologin -M etcd

useradd option reference:

-c <comment>   Comment text, stored in the comment field of passwd.
-d <dir>       Home directory to use at login.
-D             Change the default values.
-e <date>      Account expiry date.
-f <days>      Days after password expiry until the account is disabled.
-g <group>     Primary group for the user.
-G <groups>    Supplementary groups for the user.
-m             Create the user's home directory.
-M             Do not create the user's home directory.
-n             Do not create a group named after the user.
-r             Create a system account.
-s <shell>     Login shell for the user.
-u <uid>       User ID.

Then verify with: id etcd

Download etcd

Download into /opt/src:

wget https://github.com/etcd-io/etcd/releases/download/v3.1.20/etcd-v3.1.20-linux-amd64.tar.gz -O etcd-v3.1.20-linux-amd64.tar.gz

Extract it:

tar xf etcd-v3.1.20-linux-amd64.tar.gz -C /opt

Then rename the directory and create a symlink:

mv etcd-v3.1.20-linux-amd64 etcd-v3.1.20 && ln -s etcd-v3.1.20 etcd

Create the directories and fix ownership before copying the certificate and key:
mkdir -p /opt/etcd/certs /data/etcd /data/logs/etcd-server
chown -R etcd.etcd /opt/etcd/certs /data/etcd/ /data/logs/etcd-server

Copy the certificates

Copy ca.pem, etcd-peer.pem, and etcd-peer-key.pem generated on the ops host into /opt/etcd/certs.

Note: private key files must be mode 600.

scp hdss7-200:/opt/certs/ca.pem hdss7-200:/opt/certs/etcd-peer.pem hdss7-200:/opt/certs/etcd-peer-key.pem .
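Then restrict the private key's permissions:

chmod 600 /opt/etcd/certs/etcd-peer-key.pem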

Create the etcd startup script

vim /opt/etcd/etcd-server-startup.sh

#!/bin/bash
./etcd  --name etcd-server-7-12 \
        --data-dir /data/etcd/etcd-server \
        --listen-peer-urls https://10.4.7.12:2380 \
        --listen-client-urls https://10.4.7.12:2379,http://127.0.0.1:2379 \
        --quota-backend-bytes 8000000000 \
        --initial-advertise-peer-urls https://10.4.7.12:2380 \
        --advertise-client-urls https://10.4.7.12:2379 \
        --initial-cluster etcd-server-7-12=https://10.4.7.12:2380,etcd-server-7-21=https://10.4.7.21:2380,etcd-server-7-22=https://10.4.7.22:2380 \
        --ca-file ./certs/ca.pem \
        --cert-file ./certs/etcd-peer.pem \
        --key-file ./certs/etcd-peer-key.pem \
        --client-cert-auth \
        --trusted-ca-file ./certs/ca.pem \
        --peer-ca-file ./certs/ca.pem \
        --peer-cert-file ./certs/etcd-peer.pem \
        --peer-key-file ./certs/etcd-peer-key.pem \
        --peer-client-cert-auth \
        --peer-trusted-ca-file ./certs/ca.pem \
        --log-output stdout

Make it executable:
chmod +x etcd-server-startup.sh

Fix ownership:
chown -R etcd.etcd /opt/etcd-v3.1.20/
chown -R etcd.etcd /data/etcd/
chown -R etcd.etcd /data/logs/etcd-server/




Start the etcd service:
./etcd-server-startup.sh

Since the process must be kept running as a daemon, install supervisor (yum install supervisor -y), then run:
systemctl start supervisord
systemctl enable supervisord

Create the supervisord program file (ini):

vim /etc/supervisord.d/etcd-server.ini

[program:etcd-server-7-12]
command=/opt/etcd/etcd-server-startup.sh    ;
numprocs=1                                  ;
directory=/opt/etcd/                        ;
autostart=true                              ;
autorestart=true                            ;
startsecs=30                                ;
startretries=3                              ;
exitcodes=0,2                               ;
stopsignal=QUIT                             ;
stopwaitsecs=10                             ;
user=etcd                                   ;
redirect_stderr=true                        ;
stdout_logfile=/data/logs/etcd-server/etcd.stdout.log ;
stdout_logfile_maxbytes=64MB                ;
stdout_logfile_backups=4                    ;
stdout_capture_maxbytes=1MB                 ;
stdout_events_enabled=false                 ;

* Note: the startup config differs slightly per etcd host (program name, IPs); adjust it when configuring the other nodes.

Update supervisor:

supervisorctl update

Tail the log to check that it is healthy:

tail -fn 200 /data/logs/etcd-server/etcd.stdout.log

Set up etcd on 7.21 and 7.22

Same as above.

Once all three etcd nodes are up, the cluster is working.
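Cluster health can be verified with etcdctl (a sketch; run from /opt/etcd):

./etcdctl cluster-health
./etcdctl member list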

That completes etcd on 7.21 and 7.22.

Install Kubernetes (7.21-7.22)

Download the latest Kubernetes release (note: from the CHANGELOG page, choose the Server Binaries download):

https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.18.md#downloads-for-v11810

Extract the download into /opt:
tar -xf kubernetes-server-linux-amd64.tar.gz -C /opt/

Rename to a versioned directory and create a symlink:
mv kubernetes/ kubernetes-v1.18.10 && ln -s kubernetes-v1.18.10/ kubernetes

Create the log directory:
mkdir -p /data/logs/kubernetes/kube-apiserver

Sign the Kubernetes apiserver certificates (7.200)

Sign the client certificate (used by kube-apiserver to talk to etcd):

vim /opt/certs/client-csr.json
{
    "CN": "k8s-node",
    "hosts": [
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C":"CN",
            "ST":"sichuan",
            "L":"sichuan",
            "O":"xbo",
            "OU":"xbo"
        }
    ]
}

Generate the certificate:
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client client-csr.json |cfssl-json -bare client

Create apiserver-csr.json (the apiserver needs this certificate to start):

vim /opt/certs/apiserver-csr.json
{
    "CN": "k8s-apiserver",
    "hosts": [
        "127.0.0.1",
        "192.168.0.1",
        "kubernetes.default",
        "kubernetes.default.svc",
        "kubernetes.default.svc.cluster",
        "kubernetes.default.svc.cluster.local",
        "10.4.7.10",
        "10.4.7.23",
        "10.4.7.21",
        "10.4.7.22"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C":"CN",
            "ST":"sichuan",
            "L":"sichuan",
            "O":"xbo",
            "OU":"xbo"
        }
    ]
}

Generate the certificate:
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server apiserver-csr.json |cfssl-json -bare apiserver

In /opt/kubernetes/server/bin/, create a cert directory:
mkdir cert

Copy the six files (three cert/key pairs) from the ops host into that directory:

apiserver.pem apiserver-key.pem ca.pem ca-key.pem client.pem client-key.pem

scp hdss7-200:/opt/certs/apiserver.pem hdss7-200:/opt/certs/apiserver-key.pem hdss7-200:/opt/certs/ca.pem hdss7-200:/opt/certs/ca-key.pem hdss7-200:/opt/certs/client.pem hdss7-200:/opt/certs/client-key.pem .

Create a conf directory (mkdir conf), mainly for the audit policy.

Create audit.yaml:

vim audit.yaml
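The notes do not record the file body; a minimal policy that logs every request at Metadata level (an assumption here; tune it to your audit needs) would be:

apiVersion: audit.k8s.io/v1 # audit API version
kind: Policy
rules:
# log all requests at the Metadata level (request metadata only, no bodies)
- level: Metadata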

Create the apiserver startup script:

vim /opt/kubernetes/server/bin/kubernetes-apiserver.sh

#!/bin/bash
./kube-apiserver \
    --apiserver-count 2 \
    --audit-log-path /data/logs/kubernetes/kube-apiserver/audit-log \
    --audit-policy-file ./conf/audit.yaml \
    --authorization-mode RBAC \
    --client-ca-file ./cert/ca.pem \
    --requestheader-client-ca-file ./cert/ca.pem \
    --enable-admission-plugins NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota \
    --etcd-cafile ./cert/ca.pem \
    --etcd-certfile ./cert/client.pem \
    --etcd-keyfile ./cert/client-key.pem \
    --etcd-servers https://10.4.7.12:2379,https://10.4.7.21:2379,https://10.4.7.22:2379 \
    --service-account-key-file ./cert/ca-key.pem \
    --service-cluster-ip-range 192.168.0.0/16 \
    --service-node-port-range 3000-29999 \
    --target-ram-mb=1024 \
    --kubelet-client-certificate ./cert/client.pem \
    --kubelet-client-key ./cert/client-key.pem \
    --log-dir /data/logs/kubernetes/kube-apiserver \
    --tls-cert-file ./cert/apiserver.pem \
    --tls-private-key-file ./cert/apiserver-key.pem \
    --v 2




# Note: RBAC = role-based access control.

Create the supervisord program file (kube-apiserver):

vim /etc/supervisord.d/kube-apiserver.ini

[program:kube-apiserver-7-21]
command=/opt/kubernetes/server/bin/kubernetes-apiserver.sh    ;
numprocs=1                                  ;
directory=/opt/kubernetes/server/bin/       ;
autostart=true                              ;
autorestart=true                            ;
startsecs=30                                ;
startretries=3                              ;
exitcodes=0,2                               ;
stopsignal=QUIT                             ;
stopwaitsecs=10                             ;
user=root                                   ;
redirect_stderr=true                        ;
stdout_logfile=/data/logs/kubernetes/kube-apiserver/apiserver.stdout.log ;
stdout_logfile_maxbytes=64MB                ;
stdout_logfile_backups=4                    ;
stdout_capture_maxbytes=1MB                 ;
stdout_events_enabled=false                 ;

Start it: supervisorctl update

Host 7-22 is configured the same way.

Install nginx as an L4 reverse proxy (7.11-7.12)

Install nginx:

yum install nginx -y

Edit the nginx config:

vim /etc/nginx/nginx.conf

Append at the end of the file (outside the http block):

stream {
    upstream kube-apiserver {
        server 10.4.7.21:6443 max_fails=3 fail_timeout=30s;
        server 10.4.7.22:6443 max_fails=3 fail_timeout=30s;
    }
    server {
        listen 7443;
        proxy_connect_timeout 2s;
        proxy_timeout 900s;
        proxy_pass kube-apiserver;
    }
}
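Then validate the config and start nginx (a sketch):

nginx -t
systemctl start nginx
systemctl enable nginx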

Install keepalived:
yum install keepalived -y

Create the keepalived check script:

vim /etc/keepalived/check_port.sh

#!/bin/bash
# keepalived port-monitoring script
# Usage, in keepalived.conf:
# vrrp_script check_port {
#     script "/etc/keepalived/check_port.sh 6379"  # port to monitor
#     interval 2  # check interval, in seconds
# }
CHK_PORT=$1
if [ -n "$CHK_PORT" ];then
    PORT_PROCESS=$(ss -lt|grep $CHK_PORT|wc -l)
    if [ $PORT_PROCESS -eq 0 ];then
        echo "Port $CHK_PORT is not used."
        exit 1
    fi
else
    echo "Check Port Cant be Empty!"
fi

# Note: use $() here rather than backticks.
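Give the script execute permission:

chmod +x /etc/keepalived/check_port.sh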

Configure keepalived.conf on 7.11:

vim /etc/keepalived/keepalived.conf

! Configuration File for keepalived
global_defs {
    router_id 10.4.7.11
}
vrrp_script chk_nginx {
    script "/etc/keepalived/check_port.sh 7443"
    interval 2
    weight -20
}
vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 51
    priority 100
    advert_int 1
    mcast_src_ip 10.4.7.11
    nopreempt
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    track_script {
        chk_nginx
    }
    virtual_ipaddress {
        10.4.7.10
    }
}

Configure keepalived.conf on 7.12:

! Configuration File for keepalived
global_defs {
    router_id 10.4.7.12
}
vrrp_script chk_nginx {
    script "/etc/keepalived/check_port.sh 7443"
    interval 2
    weight -20
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 51
    priority 90
    advert_int 1
    mcast_src_ip 10.4.7.12
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    track_script {
        chk_nginx
    }
    virtual_ipaddress {
        10.4.7.10
    }
}

# Note: there must be a space before each opening brace, otherwise keepalived fails to parse the file.

nopreempt = non-preemptive mode.

The VIP must not float freely between 7.11 and 7.12; that is why the master is non-preemptive.
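Start keepalived on both hosts and confirm the VIP lands on the master (a sketch; ens33 matches the NIC configured earlier):

systemctl start keepalived
systemctl enable keepalived
ip addr show ens33 | grep 10.4.7.10    # should appear on 7.11 (MASTER)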

Install and deploy kube-controller-manager and kube-scheduler (7.21-7.22)

Write the startup script:

vim /opt/kubernetes/server/bin/kube-controller-manager.sh

#!/bin/sh
./kube-controller-manager \
    --cluster-cidr 172.7.0.0/16 \
    --leader-elect true \
    --log-dir /data/logs/kubernetes/kube-controller-manager \
    --master http://127.0.0.1:8080 \
    --service-account-private-key-file ./cert/ca-key.pem \
    --service-cluster-ip-range 192.168.0.0/16 \
    --root-ca-file ./cert/ca.pem \
    --v 2

Create the log directory:
mkdir -p /data/logs/kubernetes/kube-controller-manager




Create the supervisor program file:

[program:kube-controller-manager-7-21]
command=/opt/kubernetes/server/bin/kube-controller-manager.sh    ;
numprocs=1                                  ;
directory=/opt/kubernetes/server/bin/       ;
autostart=true                              ;
autorestart=true                            ;
startsecs=30                                ;
startretries=3                              ;
exitcodes=0,2                               ;
stopsignal=QUIT                             ;
stopwaitsecs=10                             ;
user=root                                   ;
redirect_stderr=true                        ;
stdout_logfile=/data/logs/kubernetes/kube-controller-manager/controller.stdout.log ;
stdout_logfile_maxbytes=64MB                ;
stdout_logfile_backups=4                    ;
stdout_capture_maxbytes=1MB                 ;
stdout_events_enabled=false                 ;

Start it: supervisorctl update

Create the scheduler script:

vim /opt/kubernetes/server/bin/kube-scheduler.sh
#!/bin/sh
./kube-scheduler \
    --leader-elect \
    --log-dir /data/logs/kubernetes/kube-scheduler \
    --master http://127.0.0.1:8080 \
    --v 2

Create the supervisor program file:

[program:kube-scheduler-7-21]
command=/opt/kubernetes/server/bin/kube-scheduler.sh    ;
numprocs=1                                  ;
directory=/opt/kubernetes/server/bin/       ;
autostart=true                              ;
autorestart=true                            ;
startsecs=30                                ;
startretries=3                              ;
exitcodes=0,2                               ;
stopsignal=QUIT                             ;
stopwaitsecs=10                             ;
user=root                                   ;
redirect_stderr=true                        ;
stdout_logfile=/data/logs/kubernetes/kube-scheduler/scheduler.stdout.log ;
stdout_logfile_maxbytes=64MB                ;
stdout_logfile_backups=4                    ;
stdout_capture_maxbytes=1MB                 ;
stdout_events_enabled=false                 ;
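One caveat: the scheduler's log directory is not created anywhere above, so create it before updating supervisor (supervisord does not create it for you):

mkdir -p /data/logs/kubernetes/kube-scheduler
supervisorctl update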

Create a symlink for kubectl and check the cluster status:
ln -s /opt/kubernetes/server/bin/kubectl /usr/bin/kubectl
kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok                  
controller-manager   Healthy   ok                  
etcd-1               Healthy   {"health":"true"}   
etcd-2               Healthy   {"health":"true"}   
etcd-0               Healthy   {"health":"true"}  

Install and deploy kubelet (7.21-7.22); sign the kubelet certificate (7.200)

On 7.200, create kubelet-csr.json:

{
    "CN": "k8s-kubelet",
    "hosts": [
        "127.0.0.1",
        "10.4.7.10",
        "10.4.7.23",
        "10.4.7.21",
        "10.4.7.22",
        "10.4.7.24",
        "10.4.7.25",
        "10.4.7.26",
        "10.4.7.27",
        "10.4.7.28",
        "10.4.7.29"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C":"CN",
            "ST":"sichuan",
            "L":"sichuan",
            "O":"xbo",
            "OU":"xbo"
        }
    ]
}

Generate the certificate:
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server kubelet-csr.json |cfssl-json -bare kubelet

Copy kubelet.pem and kubelet-key.pem to the nodes:
scp  hdss7-200:/opt/certs/kubelet.pem  hdss7-200:/opt/certs/kubelet-key.pem .

Create the kubeconfig for distribution. Note: run these in the conf directory.

(1)set-cluster

    

kubectl config set-cluster myk8s \
        --certificate-authority=/opt/kubernetes/server/bin/cert/ca.pem \
        --embed-certs=true \
        --server=https://10.4.7.10:7443 \
        --kubeconfig=kubelet.kubeconfig

(2)set-credentials

   

kubectl config set-credentials k8s-node \
        --client-certificate=/opt/kubernetes/server/bin/cert/client.pem \
        --client-key=/opt/kubernetes/server/bin/cert/client-key.pem \
        --embed-certs=true \
        --kubeconfig=kubelet.kubeconfig

(3)set-context

      

kubectl config set-context myk8s-context \
            --cluster=myk8s \
            --user=k8s-node \
            --kubeconfig=kubelet.kubeconfig

(4)use-context

        

kubectl config use-context myk8s-context \
            --kubeconfig=kubelet.kubeconfig

Create k8s-node.yaml:
vim /opt/kubernetes/server/bin/conf/k8s-node.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: k8s-node
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:node
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: k8s-node

# Note: this binding grants the k8s-node user the compute-node role (ClusterRole system:node).

Then apply it:

kubectl create -f k8s-node.yaml

On 7.22, copy kubelet.kubeconfig from 7.21 (or recreate it with the commands above).
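For example (a sketch, assuming the same directory layout on both nodes):

scp hdss7-21:/opt/kubernetes/server/bin/conf/kubelet.kubeconfig /opt/kubernetes/server/bin/conf/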

Prepare the pause base image (7.200)

1. Pull the pause image on 7.200, log in to the private registry, and push it:
docker pull kubernetes/pause
docker tag kubernetes/pause:latest harbor.od.com/public/pause:latest
docker login harbor.od.com
docker push harbor.od.com/public/pause:latest

# Note: kubelet starts the pause image ahead of the business containers to initialize the pod's network, IPC, and UTS namespaces.

On 7.21, create the kubelet startup script:

vim /opt/kubernetes/server/bin/kubelet.sh

#!/bin/sh
./kubelet --anonymous-auth=false \
    --cgroup-driver systemd \
    --cluster-dns 192.168.0.2 \
    --cluster-domain cluster.local \
    --runtime-cgroups=/systemd/system.slice \
    --kubelet-cgroups=/systemd/system.slice \
    --fail-swap-on="false" \
    --client-ca-file ./cert/ca.pem \
    --tls-cert-file ./cert/kubelet.pem \
    --tls-private-key-file ./cert/kubelet-key.pem \
    --hostname-override hdss7-21.host.com \
    --image-gc-high-threshold 20 \
    --image-gc-low-threshold 10 \
    --kubeconfig ./conf/kubelet.kubeconfig \
    --log-dir /data/logs/kubernetes/kube-kubelet \
    --pod-infra-container-image harbor.od.com/public/pause:latest \
    --root-dir /data/kubelet

# Note: on 7.22, change --hostname-override hdss7-21.host.com to --hostname-override hdss7-22.host.com

Create the required directories:
mkdir -p /data/logs/kubernetes/kube-kubelet /data/kubelet

Make kubelet.sh executable:
chmod +x /opt/kubernetes/server/bin/kubelet.sh

Create the supervisord program file:

vim /etc/supervisord.d/kube-kubelet.ini

[program:kube-kubelet-7-21]
command=/opt/kubernetes/server/bin/kubelet.sh    ;
numprocs=1                                  ;
directory=/opt/kubernetes/server/bin/       ;
autostart=true                              ;
autorestart=true                            ;
startsecs=30                                ;
startretries=3                              ;
exitcodes=0,2                               ;
stopsignal=QUIT                             ;
stopwaitsecs=10                             ;
user=root                                   ;
redirect_stderr=true                        ;
stdout_logfile=/data/logs/kubernetes/kube-kubelet/kubelet.stdout.log ;
stdout_logfile_maxbytes=64MB                ;
stdout_logfile_backups=4                    ;
stdout_capture_maxbytes=1MB                 ;
stdout_events_enabled=false                 ;

Create the log directory:
mkdir -p /data/logs/kubernetes/kube-kubelet/

Update supervisor, then check the nodes:
supervisorctl update
kubectl get node




Output like the following means everything is working:

NAME                STATUS   ROLES    AGE    VERSION

hdss7-21.host.com   Ready    <none>   8m9s   v1.18.10

hdss7-22.host.com   Ready    <none>   88s    v1.18.10

The nodes above show role <none>. Add role labels:

kubectl label node hdss7-21.host.com node-role.kubernetes.io/master=
kubectl label node hdss7-21.host.com node-role.kubernetes.io/node=

These labels are display-only; they carry no functional effect and exist purely for identification.

Install and deploy kube-proxy (7.21-7.22) (connects the pod network to the cluster network)

Sign the certificate on 7.200:

vim /opt/certs/kube-proxy-csr.json
{
    "CN": "system:kube-proxy",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C":"CN",
            "ST":"sichuan",
            "L":"sichuan",
            "O":"xbo",
            "OU":"xbo"
        }
    ]
}

Generate the certificate:
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client kube-proxy-csr.json |cfssl-json -bare kube-proxy-client

Copy the certificate (kube-proxy-client.pem, kube-proxy-client-key.pem) to each compute node (7.21-7.22), set the key's permissions to 600, and create the config:

scp hdss7-200:/opt/certs/kube-proxy-client.pem hdss7-200:/opt/certs/kube-proxy-client-key.pem .
chmod 600 kube-proxy-client-key.pem




Create the kubeconfig. Note: run these in the conf directory.

(1)set-cluster

    

kubectl config set-cluster myk8s \
        --certificate-authority=/opt/kubernetes/server/bin/cert/ca.pem \
        --embed-certs=true \
        --server=https://10.4.7.10:7443 \
        --kubeconfig=kube-proxy.kubeconfig

(2)set-credentials

        

kubectl config set-credentials kube-proxy \
        --client-certificate=/opt/kubernetes/server/bin/cert/kube-proxy-client.pem \
        --client-key=/opt/kubernetes/server/bin/cert/kube-proxy-client-key.pem \
        --embed-certs=true \
        --kubeconfig=kube-proxy.kubeconfig

(3)set-context

       

kubectl config set-context myk8s-context \
            --cluster=myk8s \
            --user=kube-proxy \
            --kubeconfig=kube-proxy.kubeconfig

(4)use-context

       

kubectl config use-context myk8s-context \
            --kubeconfig=kube-proxy.kubeconfig

# Note: same on 7-22.

Load the ipvs kernel modules (ipvs schedules the service traffic), then create and run:

vim /root/ipvs.sh
#!/bin/bash
ipvs_mods_dir="/usr/lib/modules/$(uname -r)/kernel/net/netfilter/ipvs"
for i in $(ls $ipvs_mods_dir|grep -o "^[^.]*")
do
    /sbin/modinfo -F filename $i &>/dev/null
    if [ $? -eq 0 ];then
        /sbin/modprobe $i
    fi
done

Run ./ipvs.sh, then inspect the loaded modules with lsmod | grep ip_vs

# Note: the output also shows the available ipvs scheduling algorithms.

Create the kube-proxy startup script:

vim /opt/kubernetes/server/bin/kube-proxy.sh

#!/bin/sh
./kube-proxy \
    --cluster-cidr 172.7.0.0/16 \
    --hostname-override hdss7-21.host.com \
    --proxy-mode=ipvs \
    --ipvs-scheduler=nq \
    --kubeconfig ./conf/kube-proxy.kubeconfig

Create the supervisord program file and the log directory:

vim /etc/supervisord.d/kube-proxy.ini

[program:kube-proxy-7-21]
command=/opt/kubernetes/server/bin/kube-proxy.sh    ;
numprocs=1                                  ;
directory=/opt/kubernetes/server/bin/       ;
autostart=true                              ;
autorestart=true                            ;
startsecs=30                                ;
startretries=3                              ;
exitcodes=0,2                               ;
stopsignal=QUIT                             ;
stopwaitsecs=10                             ;
user=root                                   ;
redirect_stderr=true                        ;
stdout_logfile=/data/logs/kubernetes/kube-proxy/proxy.stdout.log ;
stdout_logfile_maxbytes=64MB                ;
stdout_logfile_backups=4                    ;
stdout_capture_maxbytes=1MB                 ;
stdout_events_enabled=false                 ;
mkdir -p /data/logs/kubernetes/kube-proxy

Check the running state

First install ipvsadm:

yum install ipvsadm -y
ipvsadm -Ln

This lists the current virtual-server routing table:

IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.0.1:443 nq
  -> 10.4.7.21:6443               Masq    1      0          0         
  -> 10.4.7.22:6443               Masq    1      0          0

Verify the Kubernetes cluster

On any compute node, create a resource manifest:

vim /root/nginx-ds.yaml
apiVersion: apps/v1 # API version
kind: DaemonSet # resource kind; a DaemonSet runs one pod per node
metadata: # resource metadata
    name: nginx-daemonset # must be unique within the namespace
    labels: # resource labels
        app: nginx-ds
spec: # resource spec
    #replicas: 1 # replica count (not used by a DaemonSet)
    selector: # selector
        matchLabels: # label matcher
            app: nginx-ds # must match the pod template labels
    template:
        metadata:
            labels:
                app: nginx-ds # matches spec.selector.matchLabels above
        spec: # pod template
            containers: # container definitions
            - name: my-nginx # container name
              image: harbor.od.com/public/nginx:1.7.9 # image and tag
              ports:
              - containerPort: 80 # port the container exposes
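Apply the manifest and watch the pods start (a sketch; note that only nginx:latest was pushed to the registry above, so push the 1.7.9 tag as well or adjust the image field):

kubectl create -f /root/nginx-ds.yaml
kubectl get pods -o wide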

-------------------- Kubernetes resource overview --------------------

Category           Names
Resource objects   Pod, ReplicaSet, ReplicationController, Deployment, StatefulSet, DaemonSet, Job, CronJob, HorizontalPodAutoscaling
Config objects     Node, Namespace, Service, Secret, ConfigMap, Ingress, Label, ThirdPartyResource, ServiceAccount
Storage objects    Volume, PersistentVolume
Policy objects     SecurityContext, ResourceQuota, LimitRange




apiVersion: API version
kind: resource type
metadata: # metadata
  name: name
  namespace: namespace it belongs to
  labels: label info (several allowed)
    ## labels are key: value pairs; key and value may each be at most 63 characters
    # keys may only combine digits, letters, _, -, and dots (.)
    # values may be empty, but must start and end with a digit or letter
    app: label value
  annotations: # annotations (no functional effect, just notes)
    zhushi: "lalalalalalalal saddas"
spec: desired state
  containers: container info (several name/image pairs allowed)
  - name: custom name
    image: image name
  - name:
    image:
  nodeSelector: # node selector (e.g. run only on nodes labeled disk=ssd)
    disk: ssd
  imagePullPolicy: # whether to use the local or the remote image
    #1 Always
    #2 Never
    #3 IfNotPresent
  livenessProbe: # liveness probe
    #1. exec      # run a command
    #2. httpGet   # http request to ip:port
    #3. tcpSocket
  readinessProbe: # readiness probe
    #1. exec      # run a command
    #2. httpGet   # http request to ip:port
    #3. tcpSocket

Kubernetes cluster verification complete.

# Notes

Commonly needed kubectl commands:

kubectl get cs  # check that the scheduler, controller-manager, and etcd are healthy
kubectl get node  # check that the nodes are up
kubectl get pod  # check that pods started properly
kubectl get namespace  # list namespaces (namespace or ns both work)
kubectl get all -n default  # list all resources in the default namespace (-n default may be omitted; default is the default)
kubectl create namespace app  # create a namespace (namespace or ns)
kubectl delete namespace app  # delete a namespace (namespace or ns)
kubectl create deployment nginx-dp --image=harbor.od.com/public/nginx:1.7.9 -n kube-public  # create a Deployment (pod controller) named nginx-dp in the kube-public namespace from the image harbor.od.com/public/nginx:1.7.9
kubectl get pods -n kube-public -o wide  # list pods in kube-public with extended output

kubectl describe deployment nginx-dp -n kube-public  # inspect a resource in detail
kubectl exec -it [name] bash  # like docker exec, but works across hosts
kubectl delete pods [name] -n kube-public  # delete a pod (one way to restart it; force with --force --grace-period=0)
kubectl delete deployment [name] -n kube-public  # delete the controller; its pods are deleted with it

kubectl expose deployment nginx-dp --port=80 -n kube-public  # expose the deployment as a service
kubectl scale deployment nginx-dp --replicas=2 -n kube-public  # scale the replica count
kubectl describe svc nginx-dp -n kube-public  # inspect the service

kubectl get pods [pod-name] -o yaml -n kube-public  # dump a pod's resource manifest
kubectl explain service.metadata  # explain a resource field
kubectl edit svc nginx-ds  # edit a service resource in place

The DaemonSet resource type runs one replica on every node, which is why it is used for verification here.

Docs: http://docs.kubernetes.org.cn/

When certificates expire

Besides regenerating the certificates, you must also replace the kubeconfig files, because they embed the certificates.
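To check when a certificate expires (a sketch using the cfssl-certinfo tool installed earlier):

cfssl-certinfo -cert /opt/certs/apiserver.pem | grep not_after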