
Kubernetes Binary Deployment

Deployment Checklist

1. Deploy the etcd database cluster
2. Install Docker on the Node machines
3. Deploy the Flannel network plugin
4. Deploy the Master components (kube-apiserver, kube-scheduler, kube-controller-manager)
5. Deploy the Node components (kubelet, kube-proxy)
6. Check the cluster status
7. Run a test example
8. Done

Prepare the Environment

Set the hostnames

hostnamectl set-hostname k8s-master   # on 192.168.118.10
hostnamectl set-hostname k8s-node1    # on 192.168.118.11
hostnamectl set-hostname k8s-node2    # on 192.168.118.12

Restart the hostname service and check that the change took effect:

systemctl restart systemd-hostnamed

hostname

On all three machines, set up mutual hostname resolution, and disable the firewall and SELinux.

vim /etc/hosts

192.168.118.10  k8s-master
192.168.118.11  k8s-node1
192.168.118.12  k8s-node2

(Note: give each machine a static IP.)

Disable the firewall and SELinux:

systemctl stop firewalld
systemctl disable firewalld
setenforce 0
sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/sysconfig/selinux

Deploy the etcd Cluster

Use cfssl to generate the self-signed certificates. Any machine will do; for now it is enough to know how to generate and use them (generate the certificates on whichever machine you like, then copy them to wherever they are needed).

Here I generate them on the master node with the following commands:

wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64

chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64

mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo

Generate the etcd Certificates

First create the following three files:

mkdir cert
cd cert/

vim ca-config.json   # CA signing policy

{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "www": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
vim ca-csr.json   # CA certificate signing request

{
    "CN": "etcd CA",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing"
        }
    ]
}
vim server-csr.json   # server CSR (the three IPs are the three node IPs)

{
    "CN": "etcd",
    "hosts": [
    "192.168.118.10",
    "192.168.118.11",
    "192.168.118.12"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing"
        }
    ]
}

Generate the certificates:

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server

Check that the following certificates exist:

ls  *pem
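The two cfssljson -bare runs above each produce a key pair, so the listing should show:

ca-key.pem  ca.pem  server-key.pem  server.pem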

Install etcd

Binary package download: https://github.com/coreos/etcd/releases/tag/v3.2.12 (pick the version you need).

The following steps are identical on all three planned etcd nodes; the only difference is that the IPs in the etcd config file must be the current node's own (perform these steps on all three machines).

wget https://github.com/etcd-io/etcd/releases/download/v3.2.12/etcd-v3.2.12-linux-amd64.tar.gz

mkdir /opt/etcd/{bin,cfg,ssl} -p

tar zxvf etcd-v3.2.12-linux-amd64.tar.gz

mv etcd-v3.2.12-linux-amd64/{etcd,etcdctl} /opt/etcd/bin/

Create the etcd config file (delete the trailing comments and make sure there are no stray spaces, or the file will not parse):

vim  /opt/etcd/cfg/etcd 

#[Member]
ETCD_NAME="etcd01"   # node name; must differ on each node
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.118.10:2380"   # this node's own IP
ETCD_LISTEN_CLIENT_URLS="https://192.168.118.10:2379" # this node's own IP

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.118.10:2380" # this node's own IP
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.118.10:2379"  # this node's own IP
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.118.10:2380,etcd02=https://192.168.118.11:2380,etcd03=https://192.168.118.12:2380"

ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

Parameter reference:
* ETCD_NAME: node name; unique per node
* ETCD_DATA_DIR: data directory (etcd is a database persisted on disk, not in memory; everything Kubernetes knows about the cluster is stored here)
* ETCD_LISTEN_PEER_URLS: cluster peer listen address
* ETCD_LISTEN_CLIENT_URLS: client listen address
* ETCD_INITIAL_ADVERTISE_PEER_URLS: peer address advertised to the cluster
* ETCD_ADVERTISE_CLIENT_URLS: client address advertised to the cluster
* ETCD_INITIAL_CLUSTER: addresses of all cluster members
* ETCD_INITIAL_CLUSTER_TOKEN: cluster token
* ETCD_INITIAL_CLUSTER_STATE: join state; "new" for a new cluster, "existing" to join an existing one

Repeat the same on the other two machines, changing only the node name and the IPs, as in the sketch below.
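For example, on k8s-node1 the file would read as follows (k8s-node2 is analogous, with etcd03 and 192.168.118.12):

#[Member]
ETCD_NAME="etcd02"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.118.11:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.118.11:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.118.11:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.118.11:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.118.10:2380,etcd02=https://192.168.118.11:2380,etcd03=https://192.168.118.12:2380"

ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"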

Configure the etcd systemd unit

(configure on all three nodes)

vim /usr/lib/systemd/system/etcd.service

[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/opt/etcd/cfg/etcd
ExecStart=/opt/etcd/bin/etcd \
--name=${ETCD_NAME} \
--data-dir=${ETCD_DATA_DIR} \
--listen-peer-urls=${ETCD_LISTEN_PEER_URLS} \
--listen-client-urls=${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
--advertise-client-urls=${ETCD_ADVERTISE_CLIENT_URLS} \
--initial-advertise-peer-urls=${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
--initial-cluster=${ETCD_INITIAL_CLUSTER} \
--initial-cluster-token=${ETCD_INITIAL_CLUSTER_TOKEN} \
--initial-cluster-state=new \
--cert-file=/opt/etcd/ssl/server.pem \
--key-file=/opt/etcd/ssl/server-key.pem \
--peer-cert-file=/opt/etcd/ssl/server.pem \
--peer-key-file=/opt/etcd/ssl/server-key.pem \
--trusted-ca-file=/opt/etcd/ssl/ca.pem \
--peer-trusted-ca-file=/opt/etcd/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Now copy the certificates generated earlier to the paths referenced in the config (scp the certificates generated on the master to the other two machines):

cd /root/cert/

cp ca*pem server*pem /opt/etcd/ssl

Then use scp to copy the certificates to the other two nodes:
scp ca*pem server*pem k8s-node1:/opt/etcd/ssl
scp ca*pem server*pem k8s-node2:/opt/etcd/ssl

Start everything and enable it at boot

systemctl daemon-reload

systemctl start etcd    # start etcd on all three machines together

systemctl enable etcd

systemctl status etcd

After starting, check that the status is healthy. If it is not, re-check the earlier configs for stray spaces or wrong IPs.

Check the etcd cluster status

From the master node:

/opt/etcd/bin/etcdctl --ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.118.10:2379,https://192.168.118.11:2379,https://192.168.118.12:2379" cluster-health

If the output resembles the following, the etcd cluster is healthy.
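A healthy cluster reports one line per member plus a summary, roughly like this (the member IDs will differ):

member <id1> is healthy: got healthy result from https://192.168.118.10:2379
member <id2> is healthy: got healthy result from https://192.168.118.11:2379
member <id3> is healthy: got healthy result from https://192.168.118.12:2379
cluster is healthy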

Install Docker on the Node Machines

Only the node machines need this:

cd /etc/yum.repos.d/

wget   http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

yum -y install docker-ce

Deploy the Flannel Network Plugin

What Flannel does

Flannel stores its subnet information in etcd, so it must be able to reach the etcd cluster in order to write the predefined subnet. Deploy it on the node machines; if no workloads run on the master, there is no need to deploy Flannel there. It provides the network over which all containers communicate.

Install Flannel

Run the following on the master node (adjust the IPs to match your etcd endpoints):

cd cert/

Then run:
/opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.118.10:2379,https://192.168.118.11:2379,https://192.168.118.12:2379" set /coreos.com/network/config  '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'
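To confirm the key was written, you can read it back with the matching get (same TLS flags, swapping set for get):

/opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.118.10:2379,https://192.168.118.11:2379,https://192.168.118.12:2379" get /coreos.com/network/config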

The following steps are performed only on the node machines.

Download the binary package:

wget https://github.com/coreos/flannel/releases/download/v0.10.0/flannel-v0.10.0-linux-amd64.tar.gz

tar zxvf flannel-v0.10.0-linux-amd64.tar.gz

Unpacking produces the flanneld binary and the mk-docker-opts.sh helper script.

Create a directory for them and move them in:

mkdir -pv /opt/kubernetes/bin

mv flanneld mk-docker-opts.sh /opt/kubernetes/bin

Configure Flannel

mkdir -p /opt/kubernetes/cfg/

vim /opt/kubernetes/cfg/flanneld
(the IPs below are the three etcd cluster endpoints; this file is identical on every node)

FLANNEL_OPTIONS="--etcd-endpoints=https://192.168.118.10:2379,https://192.168.118.11:2379,https://192.168.118.12:2379 -etcd-cafile=/opt/etcd/ssl/ca.pem -etcd-certfile=/opt/etcd/ssl/server.pem -etcd-keyfile=/opt/etcd/ssl/server-key.pem"

Configure the flanneld systemd unit

vim /usr/lib/systemd/system/flanneld.service

[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq $FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target

Configure the Docker systemd unit

Delete the contents of the existing Docker unit file and replace them with the following:

vim /usr/lib/systemd/system/docker.service

[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target

Restart Flannel and Docker

systemctl daemon-reload

systemctl start flanneld

systemctl enable flanneld docker

systemctl restart docker

Check that Flannel took effect

ps -ef | grep docker

ip a

1. Check that docker0 and flannel.1 are on the same subnet.

2. Test connectivity between nodes: from the current node, ping the docker0 IP of the other node.
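A quick way to see which subnet this node was assigned is the file that mk-docker-opts.sh writes; the addresses below are purely illustrative, yours depend on what Flannel allocated:

cat /run/flannel/subnet.env
# e.g. FLANNEL_NETWORK=172.17.0.0/16
#      FLANNEL_SUBNET=172.17.52.1/24

ping <docker0 IP of the other node>   # taken from `ip a` on that node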

Deploy Components on the Master Node

Before deploying Kubernetes, make sure etcd, Flannel, and Docker are all working properly; fix any problems before continuing.

Generate certificates

On the master node, create the certificates for the apiserver:

mkdir -p /opt/crt/

cd /opt/crt/

vim ca-config.json

{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}

vim ca-csr.json

{
    "CN": "kubernetes",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}

Run:
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

Generate the apiserver certificate

vim server-csr.json
(remove the // comments before saving; cfssl expects plain JSON)

{
    "CN": "kubernetes",
    "hosts": [
      "10.0.0.1",          // gateway of the Service virtual network used by DNS later; do not change it
      "127.0.0.1",
      "192.168.118.10",    // the master IP
      "192.168.118.11",    // the two node IPs
      "192.168.118.12",
      "kubernetes",
      "kubernetes.default",
      "kubernetes.default.svc",
      "kubernetes.default.svc.cluster",
      "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}

Run:
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server

Generate the kube-proxy certificate

vim kube-proxy-csr.json

{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}

Run:
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

Finally, check the generated certificate files:

ls  *pem
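The CA plus the two signed pairs give six files in total:

ca-key.pem  ca.pem  kube-proxy-key.pem  kube-proxy.pem  server-key.pem  server.pem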

Deploy the kube-apiserver component

The following is done on the master node.

Download this package into /root:

wget https://dl.k8s.io/v1.11.10/kubernetes-server-linux-amd64.tar.gz

Create the required directories:
mkdir /opt/kubernetes/{bin,cfg,ssl} -pv

Unpack the download:
tar zxvf kubernetes-server-linux-amd64.tar.gz

Change into the directory holding the binaries:
cd kubernetes/server/bin

Copy the needed binaries into place:
cp kube-apiserver kube-scheduler kube-controller-manager kubectl /opt/kubernetes/bin

Switch back to the certificate directory:
cd /opt/crt/

Copy the required certificates into place:
cp server.pem server-key.pem ca.pem ca-key.pem /opt/kubernetes/ssl/

Create the token file

cd /opt/kubernetes/cfg/

vim token.csv

674c457d4dcf2eefe4920d7dbb6b0ddc,kubelet-bootstrap,10001,"system:kubelet-bootstrap"

Column 1: a random string (generate your own if you like, as shown below)
Column 2: user name
Column 3: UID
Column 4: user group
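If you prefer a fresh random token over the example value, something like the following works; put the result in token.csv and reuse it later as BOOTSTRAP_TOKEN:

head -c 16 /dev/urandom | od -An -t x | tr -d ' '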

Create the apiserver config file

vim kube-apiserver
(as with the etcd config, delete the trailing comments before saving)

KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://192.168.118.10:2379,https://192.168.118.11:2379,https://192.168.118.12:2379 \  # the three etcd node IPs
--bind-address=192.168.118.10 \  # the master IP, i.e. the machine running kube-apiserver
--secure-port=6443 \
--advertise-address=192.168.118.10 \
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \    # keep exactly this subnet; do not change it
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth \
--token-auth-file=/opt/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/opt/kubernetes/ssl/server.pem  \
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/opt/etcd/ssl/ca.pem \
--etcd-certfile=/opt/etcd/ssl/server.pem \
--etcd-keyfile=/opt/etcd/ssl/server-key.pem"


Parameter reference:
* --logtostderr: log to standard error
* --v: log verbosity
* --etcd-servers: etcd cluster endpoints
* --bind-address: listen address
* --secure-port: HTTPS port
* --advertise-address: address advertised to the cluster
* --allow-privileged: allow privileged containers
* --service-cluster-ip-range: Service virtual IP range
* --enable-admission-plugins: admission control plugins
* --authorization-mode: authorization modes; enables RBAC and Node self-management
* --enable-bootstrap-token-auth: enables TLS bootstrapping, covered later
* --token-auth-file: token file
* --service-node-port-range: port range allocated to NodePort Services

Create the kube-apiserver systemd unit

cd /usr/lib/systemd/system

vim kube-apiserver.service

[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-apiserver
ExecStart=/opt/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target

Start kube-apiserver

systemctl daemon-reload

systemctl enable kube-apiserver

systemctl start kube-apiserver

systemctl status kube-apiserver
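As a quick sanity check, confirm the secure port is listening (assuming the ss utility is available; netstat -lntp works too):

ss -lntp | grep 6443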

Deploy the kube-scheduler component

Create the kube-scheduler config file

vim  /opt/kubernetes/cfg/kube-scheduler

KUBE_SCHEDULER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:8080 \
--leader-elect"

Create the kube-scheduler systemd unit

cd /usr/lib/systemd/system/

vim kube-scheduler.service

[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-scheduler
ExecStart=/opt/kubernetes/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target

Start kube-scheduler

systemctl daemon-reload

systemctl enable kube-scheduler 

systemctl start kube-scheduler

systemctl status kube-scheduler

Deploy the kube-controller-manager component

Create the kube-controller-manager config file

cd /opt/kubernetes/cfg/

vim kube-controller-manager
(delete the trailing comment before saving)

KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:8080 \
--leader-elect=true \
--address=127.0.0.1 \
--service-cluster-ip-range=10.0.0.0/24 \    # Service virtual network used by DNS later; keep exactly this subnet
--cluster-name=kubernetes \
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem  \
--root-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem"

Create the kube-controller-manager systemd unit

cd /usr/lib/systemd/system/

vim kube-controller-manager.service

[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-controller-manager
ExecStart=/opt/kubernetes/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target

Start kube-controller-manager

systemctl daemon-reload

systemctl enable kube-controller-manager

systemctl start kube-controller-manager

systemctl status kube-controller-manager.service

Check the cluster status

With all components started, use kubectl to check the component status:

/opt/kubernetes/bin/kubectl get cs
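Healthy output looks roughly like this:

NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true"}
etcd-1               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}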

Output like this means all components are healthy.

Bind the kubelet-bootstrap user to the system cluster role

Create a symlink for the kubectl binary (on the master node):

ln -s /opt/kubernetes/bin/kubectl /usr/bin/kubectl

Without the symlink:
/opt/kubernetes/bin/kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap

With the symlink:
kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap

The command prints a confirmation line; you can also run echo $? to verify it exited without error.

Create the kubeconfig files

In the directory where the Kubernetes certificates were generated, run the following commands to create the kubeconfig files.

cd /opt/crt/

Specify the apiserver address (in an HA cluster this would be the internal load-balancer address):

KUBE_APISERVER="https://192.168.118.10:6443"  # your master IP; in a multi-master cluster, the load balancer address

Verify it is set:

echo $KUBE_APISERVER

Look up the token value

The token value is in the token file we created earlier:

cat  /opt/kubernetes/cfg/token.csv

Set the token variable:

BOOTSTRAP_TOKEN=674c457d4dcf2eefe4920d7dbb6b0ddc

Verify it is set:

echo  $BOOTSTRAP_TOKEN

Set the cluster parameters:

/opt/kubernetes/bin/kubectl config set-cluster kubernetes \
  --certificate-authority=ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig

Set the client authentication parameters:

/opt/kubernetes/bin/kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig

Set the context parameters:

/opt/kubernetes/bin/kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig

Switch to the default context:

/opt/kubernetes/bin/kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

Create the kube-proxy kubeconfig file

/opt/kubernetes/bin/kubectl config set-cluster kubernetes \
  --certificate-authority=ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig

Set the client certificate and key:

/opt/kubernetes/bin/kubectl config set-credentials kube-proxy \
  --client-certificate=kube-proxy.pem \
  --client-key=kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

Set the context (cluster and user):

/opt/kubernetes/bin/kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig

Switch to the default context:

/opt/kubernetes/bin/kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

Check the two generated files:

ls *.kubeconfig
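The listing should show exactly the two files created above:

bootstrap.kubeconfig  kube-proxy.kubeconfig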

Copy both files to /opt/kubernetes/cfg on the node machines:

scp *.kubeconfig k8s-node1:/opt/kubernetes/cfg/

scp *.kubeconfig k8s-node2:/opt/kubernetes/cfg/

Verify they were copied.

Deploy Components on the Node Machines

Deploy the kubelet component

Copy the binaries to the node machines

From the master, copy the binaries unpacked earlier to both node machines:

cd kubernetes/server/bin/

scp kubelet kube-proxy k8s-node1:/opt/kubernetes/bin/

scp kubelet kube-proxy k8s-node2:/opt/kubernetes/bin/

Verify they arrived on the node machines.

Create the kubelet config file on both node machines (delete the trailing comments before saving):

vim /opt/kubernetes/cfg/kubelet

KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.118.11 \   # each node's own IP
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
--config=/opt/kubernetes/cfg/kubelet.config \
--cert-dir=/opt/kubernetes/ssl \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"  # pull this image in advance

Parameter reference:
* --hostname-override: the name this node shows in the cluster
* --kubeconfig: kubeconfig location; the file is generated automatically
* --bootstrap-kubeconfig: the bootstrap.kubeconfig generated earlier
* --cert-dir: where issued certificates are stored
* --pod-infra-container-image: image for the Pod infrastructure (pause) container

scp /opt/kubernetes/cfg/kubelet k8s-node2:/opt/kubernetes/cfg/kubelet
(send this config to node2, remembering to change the IP)

Pull the image in advance

docker pull registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0
(run on both node machines)

Create the kubelet.config file on both node machines

vim /opt/kubernetes/cfg/kubelet.config

kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 192.168.118.11   # this machine's own IP
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS: ["10.0.0.2"]      # keep exactly this IP; do not change it
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
    enabled: true
  webhook:
    enabled: false

scp /opt/kubernetes/cfg/kubelet.config k8s-node2:/opt/kubernetes/cfg/kubelet.config
(send this config to node2, remembering to change the IP)

Copy the certificates to the two nodes

Create the certificate directory on both node machines:

mkdir -p /opt/kubernetes/ssl

Then, from the master node, copy these certificates to both node machines:

ls  /opt/crt

Copy them over:

scp ca*pem server*pem kube-proxy*pem  k8s-node1:/opt/kubernetes/ssl/

scp ca*pem server*pem kube-proxy*pem  k8s-node2:/opt/kubernetes/ssl/

Verify the copy completed:

ls  /opt/kubernetes/ssl

Create the kubelet systemd unit

vim /usr/lib/systemd/system/kubelet.service 

[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kubelet
ExecStart=/opt/kubernetes/bin/kubelet $KUBELET_OPTS
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target

Copy the unit to node2 as well:

scp /usr/lib/systemd/system/kubelet.service k8s-node2:/usr/lib/systemd/system/kubelet.service

Start kubelet

systemctl daemon-reload

systemctl enable kubelet

systemctl start kubelet

Once started, kubelet contacts the apiserver on the master node. Run the following on the master to check whether the certificate requests from the two nodes have arrived:

kubectl get csr

The output should show the requests from both nodes.
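A typical listing looks roughly like this (the generated names and ages will differ):

NAME             AGE   REQUESTOR           CONDITION
node-csr-<id1>   2m    kubelet-bootstrap   Pending
node-csr-<id2>   2m    kubelet-bootstrap   Pending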

Approve the nodes on the master

After starting, the nodes have not actually joined the cluster yet; each must be approved manually. On the master, review the nodes requesting certificate signing.

The CONDITION column shows "Pending": the node's request is awaiting approval.

Note: xxxid refers to the NAME column above.

kubectl certificate approve xxxid   # approve the first node's CSR

kubectl certificate approve xxxid   # approve the second node's CSR

kubectl get csr

The status now shows "Approved,Issued".

View the cluster nodes

kubectl get node

Both nodes have now joined the cluster.
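Output along these lines is what you want (the version matches the v1.11.10 package installed earlier):

NAME             STATUS    ROLES     AGE       VERSION
192.168.118.11   Ready     <none>    1m        v1.11.10
192.168.118.12   Ready     <none>    1m        v1.11.10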

Deploy the kube-proxy component

Create the kube-proxy config file

On both node machines (delete the trailing comments before saving):

vim /opt/kubernetes/cfg/kube-proxy

KUBE_PROXY_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.118.11 \   # each node's own IP
--cluster-cidr=10.0.0.0/24 \           # keep exactly this range; do not change it
--kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig"

scp /opt/kubernetes/cfg/kube-proxy k8s-node2:/opt/kubernetes/cfg/kube-proxy
(remember to change the IP on node2)

Create the kube-proxy systemd unit

vim  /usr/lib/systemd/system/kube-proxy.service

[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-proxy
ExecStart=/opt/kubernetes/bin/kube-proxy $KUBE_PROXY_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target

scp /usr/lib/systemd/system/kube-proxy.service  k8s-node2:/usr/lib/systemd/system/kube-proxy.service

Start kube-proxy

systemctl daemon-reload

systemctl enable kube-proxy

systemctl start kube-proxy

Check the cluster status from the master

kubectl get node

kubectl get cs

If nothing has gone wrong up to this point, the binary Kubernetes deployment is complete!

What follows is the deployment of the web UI; set it up if you are interested.

Deploy the Dashboard (Web UI)

Required manifests

dashboard-deployment.yaml    # deploys the Pod that serves the web UI

dashboard-rbac.yaml          # authorizes access to the apiserver

dashboard-service.yaml       # publishes the service for external access

Create a directory

mkdir webui    # on the master node

cd webui

Create the YAML files

Create dashboard-deployment.yaml:

vim dashboard-deployment.yaml 

apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      serviceAccountName: kubernetes-dashboard
      containers:
      - name: kubernetes-dashboard
        image: registry.cn-hangzhou.aliyuncs.com/kube_containers/kubernetes-dashboard-amd64:v1.8.1
        resources:
          limits:
            cpu: 100m
            memory: 300Mi
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 9090
          protocol: TCP
        livenessProbe:
          httpGet:
            scheme: HTTP
            path: /
            port: 9090
          initialDelaySeconds: 30
          timeoutSeconds: 30
      tolerations:
      - key: "CriticalAddonsOnly"
        operator: "Exists"

Create dashboard-rbac.yaml:

vim  dashboard-rbac.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
    addonmanager.kubernetes.io/mode: Reconcile
  name: kubernetes-dashboard
  namespace: kube-system
---

kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    addonmanager.kubernetes.io/mode: Reconcile
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kube-system

Create dashboard-service.yaml:

vim  dashboard-service.yaml

apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  type: NodePort
  selector:
    k8s-app: kubernetes-dashboard
  ports:
  - port: 80
    targetPort: 9090

Check that the three files are in place:

ls /root/webui

Apply these YAML files:

kubectl create -f dashboard-rbac.yaml

kubectl create -f dashboard-deployment.yaml

kubectl create -f dashboard-service.yaml

View everything in the kube-system namespace:

kubectl get all -n kube-system

View the services in that namespace:

kubectl get svc -n kube-system
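The dashboard Service is of type NodePort, so the listing includes the randomly assigned node port; the values below are illustrative:

NAME                   TYPE       CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
kubernetes-dashboard   NodePort   10.0.0.x     <none>        80:3xxxx/TCP   1m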

Browse to a node IP plus that port (either node works).

Test

Verify that the cluster and its web UI work properly.

Run a test example (install the Docker service on the master node first): create an Nginx web deployment to judge whether the cluster is working.

kubectl run nginx --image=daocloud.io/nginx --replicas=3

kubectl expose deployment nginx --port=88 --target-port=80 --type=NodePort

Check the Pods and Services from the master:

kubectl get pods

View detailed Pod information:

kubectl describe pod nginx-6648ff9bb4-459wb

View the created Service:

kubectl get svc
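The nginx Service maps port 88 to a node port in the 30000-50000 range configured on the apiserver; the exact values will differ:

NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP        2h
nginx        NodePort    10.0.0.x     <none>        88:4xxxx/TCP   1m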

Browse to a node IP plus the node port shown above and confirm the default Nginx welcome page loads.

Visit the web UI page again.

You can see the nginx Pods created during the test.
